Horizon connection errors from object store

Michel Niyoyita micou12 at gmail.com
Fri Sep 10 07:55:35 UTC 2021


Actually, from the CLI on both sides I did not get any errors: containers
are successfully created and added in ceph. The problem is doing it through
the dashboard.
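
For reference, what succeeds from the CLI / ceph side looks roughly like
this (the container name is only an example):

openstack container create test-container
openstack container list
# and on the RGW host, the same containers show up as buckets:
radosgw-admin bucket list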

Michel

On Fri, Sep 10, 2021 at 9:47 AM Eugen Block <eblock at nde.ag> wrote:

> Check the log files, the dashboard errors suggest that it could be a
> policy issue, the openstack-dashboard-error-logs should be more
> verbose. Then check keystone and rgw logs. What errors do you get when
> you try to create a container from CLI (add '--debug' flag to see
> more)? Did you source the correct openrc file?
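>
> For example, something like this (a rough sketch; the openrc path is the
> usual kolla-ansible location and may differ in your deployment):
>
> source /etc/kolla/admin-openrc.sh
> openstack --debug container create test-container
> # and watch the gateway log on the RGW host while doing so:
> tail -f /var/log/ceph/ceph-rgw-ceph-osd3.rgw0.log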
>
>
> Quoting Michel Niyoyita <micou12 at gmail.com>:
>
> > Hello Eugen
> >
> > Thank you for your continuous support. The dashboard is now stable and is
> > not disconnected as before. Unfortunately, I am not able to create
> > containers or to see the list of the ones created using the openstack CLI
> > or on the ceph side.
> >
> > Below is my ceph.conf:
> >
> > [client.rgw.ceph-osd3]
> > rgw frontends = "beast port=8080"
> > rgw dns name = ceph-osd3
> > rgw enable usage log = true
> >
> > rgw thread pool size = 512
> > rgw keystone api version = 3
> > rgw keystone url = http://kolla-open1:5000
> >
> > rgw keystone admin user = rgw
> > rgw keystone admin password = c8igBKQqEon8jXaG68TkcWgNI4E77m2K3bJD7fCU
> > rgw keystone admin domain = default
> > rgw keystone admin project = service
> > rgw keystone accepted roles = admin,Member,_member_,member,swiftoperator
> > rgw keystone verify ssl = false
> > rgw s3 auth use keystone = true
> > rgw keystone revocation interval = 0
> >
> >
> > [client.rgw.ceph-osd3.rgw0]
> > host = ceph-osd3
> > keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-osd3.rgw0/keyring
> > log file = /var/log/ceph/ceph-rgw-ceph-osd3.rgw0.log
> > rgw frontends = beast endpoint=ceph-osd3:8080
> > rgw thread pool size = 512
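> >
> > # (for reference, the keystone side of this config can be cross-checked
> > # with standard openstackclient commands, for example:)
> > openstack user show rgw
> > openstack endpoint list --service object-store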
> >
> > openstack role assignment list --names output:
> >
> >
> > (kolla-open1) stack@kolla-open1:~$ openstack role assignment list --names
> >
> > +------------------+------------------------------------+-------+-----------------+------------------+--------+-----------+
> > | Role             | User                               | Group | Project         | Domain           | System | Inherited |
> > +------------------+------------------------------------+-------+-----------------+------------------+--------+-----------+
> > | swiftoperator    | operator:swift@Default             |       | service@Default |                  |        | False     |
> > | admin            | rgw@Default                        |       | service@Default |                  |        | False     |
> > | member           | rgw@Default                        |       | service@Default |                  |        | False     |
> > | admin            | cinder@Default                     |       | service@Default |                  |        | False     |
> > | admin            | neutron@Default                    |       | service@Default |                  |        | False     |
> > | admin            | placement@Default                  |       | service@Default |                  |        | False     |
> > | admin            | nova@Default                       |       | service@Default |                  |        | False     |
> > | admin            | admin@Default                      |       | admin@Default   |                  |        | False     |
> > | heat_stack_owner | admin@Default                      |       | admin@Default   |                  |        | False     |
> > | admin            | admin@Default                      |       | service@Default |                  |        | False     |
> > | member           | admin@Default                      |       | service@Default |                  |        | False     |
> > | admin            | glance@Default                     |       | service@Default |                  |        | False     |
> > | member           | operator@Default                   |       | service@Default |                  |        | False     |
> > | _member_         | operator@Default                   |       | service@Default |                  |        | False     |
> > | admin            | heat@Default                       |       | service@Default |                  |        | False     |
> > | admin            | heat_domain_admin@heat_user_domain |       |                 | heat_user_domain |        | False     |
> > | admin            | admin@Default                      |       |                 |                  | all    | False     |
> > +------------------+------------------------------------+-------+-----------------+------------------+--------+-----------+
> >
> > Michel
> >
> >
> > On Fri, Sep 10, 2021 at 9:33 AM Michel Niyoyita <micou12 at gmail.com> wrote:
> >>
> >> On Thu, Sep 9, 2021 at 2:15 PM Eugen Block <eblock at nde.ag> wrote:
> >>
> >>> Hi,
> >>>
> >>> I could reproduce this in my lab environment. The issue must be either
> >>> in your ceph.conf on the RGW host(s) or your openstack role
> >>> assignments. I have a dedicated user for my setup, as you can see in my
> >>> previous response. The user "rgw" is then assigned the "member" role on
> >>> the "service" project. If I log in to the Horizon dashboard with this
> >>> user I can see the object-storage panel and the existing containers
> >>> for that user. If I log in as admin and try to open the container panel
> >>> I get logged out, too. If I replace "rgw" with "admin" in the
> >>> ceph.conf and restart the RGW it works. But note that in this case the
> >>> admin user has to have the proper role assignment, too.
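> >>>
> >>> # i.e. in the [client.rgw.ceph-osd3] section of ceph.conf:
> >>> #   rgw keystone admin user = admin
> >>> # followed by an RGW restart; the unit name below is only a guess and
> >>> # may differ in your deployment:
> >>> systemctl restart ceph-radosgw@rgw.ceph-osd3.rgw0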
> >>>
> >>> So to achieve this you need to add a matching role (from "rgw keystone
> >>> accepted roles") for your admin user in the respective project, like
> >>> this:
> >>>
> >>> # replace rgw with admin in your case, PROJECT_ID is "service" in my case
> >>> openstack role add --user rgw --project <PROJECT_ID> member
> >>>
> >>> # check with
> >>> openstack role assignment list --names
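> >>>
> >>> # a minimal sketch of that substitution for your setup, assuming the
> >>> # "service" project and the "member" role:
> >>> openstack role add --user admin --project service member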
> >>>
> >>> To make it easier to follow, please share your current ceph.conf and
> >>> the openstack role assignment output.
> >>>
> >>> Regards,
> >>> Eugen
> >>>
> >>>
> >>>
> >>> Quoting Michel Niyoyita <micou12 at gmail.com>:
> >>>
> >>> > Hello team,
> >>> >
> >>> > I am facing an issue when I try to connect to the object store
> >>> > containers on the Horizon dashboard. Once I click on Containers it
> >>> > automatically disconnects. Please find below the logs I am getting,
> >>> > and help for further analysis.
> >>> >
> >>> > [Thu Sep 09 06:35:22.185771 2021] [wsgi:error] [pid 167:tid 139887608641280] [remote 10.10.29.150:55130] Attempted scope to domain Default failed, will attempt to scope to another domain.
> >>> > [Thu Sep 09 06:35:22.572522 2021] [wsgi:error] [pid 167:tid 139887608641280] [remote 10.10.29.150:55130] Login successful for user "admin" using domain "Default", remote address 10.10.29.150.
> >>> > [Thu Sep 09 06:35:51.494815 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] REQ: curl -i http://ceph-mon2:8080/swift/v1?format=json&limit=1001 -X GET -H "X-Auth-Token: gAAAAABhOasqHFyB..." -H "Accept-Encoding: gzip"
> >>> > [Thu Sep 09 06:35:51.495140 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] RESP STATUS: 401 Unauthorized
> >>> > [Thu Sep 09 06:35:51.495541 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] RESP HEADERS: {'Content-Length': '119', 'X-Trans-Id': 'tx00000000000000000000f-006139ab44-9fc1a-default', 'X-Openstack-Request-Id': 'tx00000000000000000000f-006139ab44-9fc1a-default', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/json; charset=utf-8', 'Date': 'Thu, 09 Sep 2021 06:35:51 GMT', 'Connection': 'Keep-Alive'}
> >>> > [Thu Sep 09 06:35:51.495792 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] RESP BODY: b'{"Code":"AccessDenied","RequestId":"tx00000000000000000000f-006139ab44-9fc1a-default","HostId":"9fc1a-default-default"}'
> >>> > [Thu Sep 09 06:35:51.498743 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] Unauthorized: /api/swift/containers/
> >>> > [Thu Sep 09 06:35:52.924169 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] REQ: curl -i http://ceph-mon2:8080/swift/v1?format=json&limit=1001 -X GET -H "X-Auth-Token: gAAAAABhOasqHFyB..." -H "Accept-Encoding: gzip"
> >>> > [Thu Sep 09 06:35:52.924520 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] RESP STATUS: 401 Unauthorized
> >>> > [Thu Sep 09 06:35:52.924789 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] RESP HEADERS: {'Content-Length': '119', 'X-Trans-Id': 'tx000000000000000000010-006139ab48-9fc1a-default', 'X-Openstack-Request-Id': 'tx000000000000000000010-006139ab48-9fc1a-default', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/json; charset=utf-8', 'Date': 'Thu, 09 Sep 2021 06:35:52 GMT', 'Connection': 'Keep-Alive'}
> >>> > [Thu Sep 09 06:35:52.925034 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] RESP BODY: b'{"Code":"AccessDenied","RequestId":"tx000000000000000000010-006139ab48-9fc1a-default","HostId":"9fc1a-default-default"}'
> >>> > [Thu Sep 09 06:35:52.929398 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] Unauthorized: /api/swift/containers/
> >>> > [Thu Sep 09 06:35:52.935799 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:56016] Logging out user "admin".
> >>> > [Thu Sep 09 06:35:53.061489 2021] [wsgi:error] [pid 166:tid 139887608641280] [remote 10.10.29.150:55806] Logging out user "".
> >>> > [Thu Sep 09 06:35:54.541593 2021] [wsgi:error] [pid 165:tid 139887608641280] [remote 10.10.29.150:55852] The request's session was deleted before the request completed. The user may have logged out in a concurrent request, for example.
> >>> > [Thu Sep 09 06:35:54.542896 2021] [wsgi:error] [pid 165:tid 139887608641280] [remote 10.10.29.150:55852] Bad Request: /api/swift/policies/
> >>> > [Thu Sep 09 06:35:54.566055 2021] [wsgi:error] [pid 167:tid 139887608641280] [remote 10.10.29.150:55860] The request's session was deleted before the request completed. The user may have logged out in a concurrent request, for example.
> >>> > [Thu Sep 09 06:35:54.567130 2021] [wsgi:error] [pid 167:tid 139887608641280] [remote 10.10.29.150:55860] Bad Request: /api/swift/info/
> >>> > (kolla-open1) stack@kolla-open1:/var/lib/docker/volumes/kolla_logs/_data/horizon$
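> >>> >
> >>> > (For reference, the failing request can be replayed by hand roughly
> >>> > like this; the endpoint is taken from the log above and the token
> >>> > placeholder has to be filled in:)
> >>> >
> >>> > openstack token issue -f value -c id
> >>> > curl -i "http://ceph-mon2:8080/swift/v1?format=json&limit=1001" \
> >>> >      -H "X-Auth-Token: <token>"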
> >>> >
> >>> > Michel
> >>>
> >>>
> >>>
> >>>
> >>>
>
>
>
>