I guess the issue is with Nova only. Although openstack server list --all-projects --host compute1 shows the correct number of VMs, Galera has the wrong count:


MariaDB [nova]> SELECT running_vms FROM compute_nodes WHERE deleted_at IS NULL \G
*************************** 1. row ***************************
running_vms: 0
*************************** 2. row ***************************
running_vms: 1
*************************** 3. row ***************************
running_vms: 20
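
For reference, this is roughly how I am comparing the two sides (the hostnames below are just my lab nodes, adjust as needed):

    # Per-host VM count as reported by the API (admin credentials required)
    for host in compute1 compute2 compute3; do
        echo -n "$host: "
        openstack server list --all-projects --host "$host" -f value -c ID | wc -l
    done

    # Per-host VM count as recorded by the resource tracker in the nova DB
    mysql nova -e "SELECT hypervisor_hostname, running_vms FROM compute_nodes WHERE deleted_at IS NULL;"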

I tried the overrides below as well, but that didn't work out either:
nova_nova_conf_overrides:
  keystone_authtoken:
    service_token_roles_required: True
    service_token_roles: admin
  service_user:
    send_service_user_token: True
    region_name: "{{ nova_service_region }}"
    auth_type: password
    username: "{{ nova_service_user_name }}"
    password: "{{ nova_service_password }}"
    project_name: "{{ nova_service_project_name }}"
    user_domain_id: "{{ nova_service_user_domain_id }}"
    project_domain_id: "{{ nova_service_project_domain_id }}"
    auth_url: "{{ keystone_service_adminurl }}"
    insecure: "{{ keystone_service_adminuri_insecure | bool }}"
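
A quick sanity check that the override actually rendered (assuming the standard /etc/nova/nova.conf path inside the nova containers):

    # Show the rendered [service_user] section of the generated config
    grep -A 12 '^\[service_user\]' /etc/nova/nova.conf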

The enforce_scope and enforce_new_defaults options both default to False:

    cfg.BoolOpt('enforce_scope',
                default=False,
                help=_('This option controls whether or not to enforce scope '
                       'when evaluating policies. If ``True``, the scope of '
                       'the token used in the request is compared to the '
                       '``scope_types`` of the policy being enforced. If the '
                       'scopes do not match, an ``InvalidScope`` exception '
                       'will be raised. If ``False``, a message will be '
                       'logged informing operators that policies are being '
                       'invoked with mismatching scope.')),
    cfg.BoolOpt('enforce_new_defaults',
                default=False,
                help=_('This option controls whether or not to use old '
                       'deprecated defaults when evaluating policies. If '
                       '``True``, the old deprecated defaults are not going '
                       'to be evaluated. This means if any existing token is '
                       'allowed for old defaults but is disallowed for new '
                       'defaults, it will be disallowed. It is encouraged to '
                       'enable this flag along with the ``enforce_scope`` '
                       'flag so that you can get the benefits of new defaults '
                       'and ``scope_type`` together')),
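
As far as I understand, the runtime equivalents of these defaults are read from the [oslo_policy] section, so pinning them explicitly in nova.conf should look roughly like this:

    [oslo_policy]
    # Keep the pre-4.4.0 oslo.policy behaviour (no scope checks, old defaults)
    enforce_scope = False
    enforce_new_defaults = False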

I have set enforce_scope and enforce_new_defaults to False in the /openstack/venvs/nova-23.3.1.dev4/lib/python3.8/site-packages/oslo_policy/tests/test_policy.py file as well, but I am still facing the same issue.

Regards,
Danish

On Thu, Nov 21, 2024 at 12:23 AM Ghanshyam Mann <gmann@ghanshyammann.com> wrote:
 ---- On Tue, 19 Nov 2024 22:34:07 -0800  Dmitriy Rabotyagov  wrote ---
 > Hey,
 >
 > This looks like an issue with Horizon itself.
 > Though, Wallaby has reached the End of Maintenance state, so I would
 > not expect much movement on fixing the issue for W. I would suggest
 > upgrading Horizon to a maintained release and seeing if that fixes the
 > issue, as there was a series of issues with scopes (including system
 > and domain scopes) addressed in Horizon since Wallaby (e.g.
 > https://bugs.launchpad.net/horizon/+bug/2054799)
 >
 > On Wed, 20 Nov 2024 at 07:16, Danish Khan <danish52.jmi@gmail.com> wrote:
 > >
 > > Dear All,
 > >
 > > I have upgraded one of my test OpenStack clusters from Victoria to Wallaby. After upgrading, the hypervisor view is not showing the correct number of VMs on compute nodes in the browser.
 > >
 > > But the OpenStack CLI is giving the correct number of VMs.
 > >
 > > I am getting the error below in the Horizon container when I click on "Hypervisors" in the browser:
 > >
 > > Nov 20 11:17:29 horizon-container-2b244cbd apache2[43317]: [wsgi:error] [pid 43317:tid 139897181898496] [remote x.x.x.151:37180] /openstack/venvs/horizon-23.3.1.dev4/lib/python3.8/site-packages/oslo_policy/policy.py:1065: UserWarning: Policy os_compute_api:os-migrate-server:migrate failed scope check. The token used to make the request was domain scoped but the policy requires ['system', 'project'] scope. This behavior may change in the future where using the intended scope is required
 > >
 > > I did try a few things, but that did not work.

We had the system scope things in Nova in Wallaby, but those were disabled by default. I think that
is why you are getting the correct result via the CLI. If the system scope error is raised from Horizon, then check whether
"[oslo.policy] enforce_scope and enforce_new_defaults"[1] is enabled in Horizon. Those can be set
on the Horizon side conf or in nova.conf.
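
Something like the below should show whether either side has the flags set (the paths are a guess for an OSA-style deployment):

    # Look for the flags in nova's config and in Horizon's settings files
    grep -rn -e enforce_scope -e enforce_new_defaults /etc/nova/nova.conf /etc/horizon/ 2>/dev/null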

I would also check the oslo.policy version here in case you ended up with 4.4.0 in your env. I enabled those
configs by default in oslo.policy 4.4.0, so if you are using the latest oslo.policy with Wallaby, the same
issue can occur. But as you say the CLI is working fine, I think the Nova server and oslo.policy are fine and it
seems to be an issue on the Horizon side.
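
A quick way to check the installed version inside the Horizon venv (venv path taken from your traceback above):

    # Report the oslo.policy version installed in Horizon's virtualenv
    /openstack/venvs/horizon-23.3.1.dev4/bin/pip show oslo.policy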

[1] https://github.com/openstack/oslo.policy/blob/unmaintained/wallaby/oslo_policy/opts.py#L27-L38

-gmann
 > >
 > > Regards,
 > > Danish Khan
 >