[ceilometer] radosgw pollster

Christian Zunker christian.zunker at codecentric.cloud
Wed Feb 27 08:09:24 UTC 2019


Hi Florian,

have you tried different permissions for your ceilometer user in radosgw?
According to the docs you need an admin user:
https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage
Our user has these caps:
usage=read,write;metadata=read,write;users=read,write;buckets=read,write
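
If it helps, the caps can be set with radosgw-admin, roughly like this
(adjust the uid to your deployment):

  radosgw-admin caps add --uid=ceilometer \
      --caps="usage=read,write;metadata=read,write;users=read,write;buckets=read,write"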

We also had to add the requests-aws pip package to query radosgw from
ceilometer:
https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html
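
In our case that boiled down to something like this (install it wherever
the polling agent runs, e.g. the ceilometer virtualenv or container):

  pip install requests-aws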

Christian


On Tue, Feb 26, 2019 at 13:15, Florian Engelmann <
florian.engelmann at everyware.ch> wrote:

> Hi Christian,
>
> On 2/26/19 at 11:00 AM, Christian Zunker wrote:
> > Hi Florian,
> >
> > which version of OpenStack are you using?
> > The radosgw metric names were different in some versions:
> > https://bugs.launchpad.net/ceilometer/+bug/1726458
>
> we use Rocky with Ceilometer 11.0.1. I am still lost with that error.
> As far as I am able to follow the Python code, the error seems to come
> from polling/manager.py, line 222:
>
>
> https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222
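>
> If I read it correctly, the logic around that line is roughly the
> following (paraphrased, not a verbatim copy of the source; variable
> names may differ):
>
>     # When a pollster raises PollsterPermanentError for some resources,
>     # the manager blacklists them and never polls them again for that
>     # source -- which would explain why our RadosGWs are never contacted.
>     try:
>         samples = pollster.obj.get_samples(
>             manager=self.manager, cache=cache, resources=candidate_res)
>     except plugin_base.PollsterPermanentError as err:
>         LOG.error('Prevent pollster %(name)s from polling %(res_list)s '
>                   'on source %(source)s anymore!',
>                   dict(name=pollster.name, res_list=err.fail_res_list,
>                        source=source_name))
>         self.resources[key].blacklist.extend(err.fail_res_list)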
>
> But I do not understand why. I enabled debug logging, but the error
> does not come with any additional information.
> The poller is not even trying to reach our RadosGWs. It looks like
> the manager is blocking those polls.
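>
> For reference, running the agent in the foreground with something like
> this (standard ceilometer/oslo flags) is how I try to reproduce it:
>
>     ceilometer-polling --config-file /etc/ceilometer/ceilometer.conf \
>         --polling-namespaces central --debug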
>
> All the best,
> Florian
>
>
> >
> > Christian
> >
> > On Fri, Feb 22, 2019 at 17:40, Florian Engelmann
> > <florian.engelmann at everyware.ch> wrote:
> >
> >     Hi,
> >
> >     I failed to poll any usage data from our radosgw. I get:
> >
> >     2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] Polling
> >     pollster radosgw.containers.objects in the context of
> >     radosgw_300s_pollsters
> >     2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] Prevent
> >     pollster radosgw.containers.objects from polling [<Project description=,
> >     domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True,
> >
> >     [...]
> >
> >     id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, links={u'self':
> >     u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on source
> >     radosgw_300s_pollsters anymore!: PollsterPermanentError
> >
> >     My configuration looks like this:
> >     cat polling.yaml
> >     ---
> >     sources:
> >           - name: radosgw_300s_pollsters
> >             interval: 300
> >             meters:
> >               - radosgw.usage
> >               - radosgw.objects
> >               - radosgw.objects.size
> >               - radosgw.objects.containers
> >               - radosgw.containers.objects
> >               - radosgw.containers.objects.size
> >
> >
> >     Also tried radosgw.api.requests instead of radosgw.usage.
> >
> >     ceilometer.conf
> >     [...]
> >     [service_types]
> >     radosgw = object-store
> >
> >     [rgw_admin_credentials]
> >     access_key = xxxxx0Z0xxxxxxxxxxxx
> >     secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx
> >
> >     [rgw_client]
> >     implicit_tenants = true
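> >
> >     To rule out the credentials, the admin API can also be queried
> >     directly with the awsauth module from the requests-aws package,
> >     something like this (untested sketch; host and keys are
> >     placeholders):
> >
> >     import requests
> >     from awsauth import S3Auth  # provided by the requests-aws package
> >
> >     host = "rgw.service.internalxxx"  # placeholder internal RGW endpoint
> >     auth = S3Auth("ACCESS_KEY", "SECRET_KEY", service_url=host)
> >     # usage endpoint of the radosgw admin ops API
> >     r = requests.get("http://%s/admin/usage?format=json" % host, auth=auth)
> >     print(r.status_code)
> >     print(r.text)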
> >
> >     Endpoints:
> >     | xxxxxxx | region | swift | object-store | True | admin    | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s |
> >     | xxxxxxx | region | swift | object-store | True | internal | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s |
> >     | xxxxxxx | region | swift | object-store | True | public   | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s |
> >
> >     Ceilometer user:
> >     {
> >           "user_id": "ceilometer",
> >           "display_name": "ceilometer",
> >           "email": "",
> >           "suspended": 0,
> >           "max_buckets": 1000,
> >           "auid": 0,
> >           "subusers": [],
> >           "keys": [
> >               {
> >                   "user": "ceilometer",
> >                   "access_key": "xxxxxxxxxxxxxxxxxx",
> >                   "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx"
> >               }
> >           ],
> >           "swift_keys": [],
> >           "caps": [
> >               {
> >                   "type": "buckets",
> >                   "perm": "read"
> >               },
> >               {
> >                   "type": "metadata",
> >                   "perm": "read"
> >               },
> >               {
> >                   "type": "usage",
> >                   "perm": "read"
> >               },
> >               {
> >                   "type": "users",
> >                   "perm": "read"
> >               }
> >           ],
> >           "op_mask": "read, write, delete",
> >           "default_placement": "",
> >           "placement_tags": [],
> >           "bucket_quota": {
> >               "enabled": false,
> >               "check_on_raw": false,
> >               "max_size": -1,
> >               "max_size_kb": 0,
> >               "max_objects": -1
> >           },
> >           "user_quota": {
> >               "enabled": false,
> >               "check_on_raw": false,
> >               "max_size": -1,
> >               "max_size_kb": 0,
> >               "max_objects": -1
> >           },
> >           "temp_url_keys": [],
> >           "type": "rgw"
> >     }
> >
> >
> >     radosgw config:
> >     [client.rgw.xxxxxxxxxxx]
> >     host = somehost
> >     rgw frontends = "civetweb port=7480 num_threads=512"
> >     rgw num rados handles = 8
> >     rgw thread pool size = 512
> >     rgw cache enabled = true
> >     rgw dns name = s3.xxxxxx.xxx
> >     rgw enable usage log = true
> >     rgw usage log tick interval = 30
> >     rgw realm = public
> >     rgw zonegroup = xxx
> >     rgw zone = xxxxx
> >     rgw resolve cname = False
> >     rgw usage log flush threshold = 1024
> >     rgw usage max user shards = 1
> >     rgw usage max shards = 32
> >     rgw_keystone_url = https://keystone.xxxxxxxxxxxxx
> >     rgw_keystone_admin_domain = default
> >     rgw_keystone_admin_project = service
> >     rgw_keystone_admin_user = swift
> >     rgw_keystone_admin_password =
> >     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> >     rgw_keystone_accepted_roles = member,_member_,admin
> >     rgw_keystone_accepted_admin_roles = admin
> >     rgw_keystone_api_version = 3
> >     rgw_keystone_verify_ssl = false
> >     rgw_keystone_implicit_tenants = true
> >     rgw_keystone_admin_tenant = default
> >     rgw_keystone_revocation_interval = 0
> >     rgw_keystone_token_cache_size = 0
> >     rgw_s3_auth_use_keystone = true
> >     rgw_max_attr_size = 1024
> >     rgw_max_attrs_num_in_req = 32
> >     rgw_max_attr_name_len = 64
> >     rgw_swift_account_in_url = true
> >     rgw_swift_versioning_enabled = true
> >     rgw_enable_apis = s3,swift,swift_auth,admin
> >     rgw_swift_enforce_content_length = true
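> >
> >     Since "rgw enable usage log = true" is set, the usage log itself can
> >     be checked on the RGW side, e.g. with:
> >
> >     radosgw-admin usage show --show-log-entries=false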
> >
> >
> >
> >
> >     Any idea what's going on?
> >
> >     All the best,
> >     Florian
> >
> >
> >
>
> --
>
> EveryWare AG
> Florian Engelmann
> Senior UNIX Systems Engineer
> Zurlindenstrasse 52a
> CH-8003 Zürich
>
> tel: +41 44 466 60 00
> fax: +41 44 466 60 10
> mail: florian.engelmann at everyware.ch
> web: http://www.everyware.ch
>