[ceilometer] radosgw pollster

Florian Engelmann florian.engelmann at everyware.ch
Thu Feb 28 10:21:38 UTC 2019


Hi Christian,

After adding requests-aws to the container, the PollsterPermanentError is
gone and the ceilometer polling log looks clean to me:

2019-02-28 10:29:46.163 24 INFO ceilometer.polling.manager [-] Polling 
pollster radosgw.containers.objects in the context of radosgw_300s_pollsters
2019-02-28 10:29:46.167 24 INFO ceilometer.polling.manager [-] Polling 
pollster radosgw.objects in the context of radosgw_300s_pollsters
2019-02-28 10:29:46.172 24 INFO ceilometer.polling.manager [-] Polling 
pollster radosgw.objects.size in the context of radosgw_300s_pollsters
2019-02-28 10:29:46.177 24 INFO ceilometer.polling.manager [-] Polling 
pollster radosgw.objects.containers in the context of radosgw_300s_pollsters
2019-02-28 10:29:46.182 24 INFO ceilometer.polling.manager [-] Polling 
pollster radosgw.usage in the context of radosgw_300s_pollsters
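
For reference, this is roughly the signed request the pollster now issues
(a minimal sketch; endpoint, uid and keys are placeholders). requests-aws
provides the awsauth.S3Auth handler that adds the AWS v2 HMAC signature
(the HTTP_AUTHORIZATION=AWS header visible in the RadosGW log further down):

import requests
from awsauth import S3Auth  # shipped by the requests-aws package

RGW_HOST = "rgw.service.internal:7480"     # placeholder admin endpoint
ACCESS = "ACCESSKEY"                       # placeholder ceilometer keys
SECRET = "SECRETKEY"

resp = requests.get(
    "http://" + RGW_HOST + "/admin/usage",
    params={"uid": "tenantid$tenantid"},   # placeholder uid
    auth=S3Auth(ACCESS, SECRET, service_url=RGW_HOST),
)
resp.raise_for_status()
print(resp.json())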


I still do not see any radosgw data in gnocchi:

openstack metric resource list -c type -f value | sort -u
generic
image
instance
instance_disk
instance_network_interface
network
volume
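
One way to rule out the gnocchi side is to check whether the ceph_account
resource type exists at all (a sketch against the gnocchi REST API; the
endpoint and token are placeholders):

import requests

GNOCCHI = "https://gnocchi.example.com"    # placeholder endpoint
HDRS = {"X-Auth-Token": "TOKEN"}           # placeholder keystone token

r = requests.get(GNOCCHI + "/v1/resource_type/ceph_account", headers=HDRS)
print(r.status_code)                       # 404 means the type was never created
if r.ok:
    # list resources of that type to see whether any measures arrived
    print(requests.get(GNOCCHI + "/v1/resource/ceph_account",
                       headers=HDRS).json())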

My RadosGW logs do show the poller requests:
2019-02-28 10:29:54.767266 7f67a550e700 20 HTTP_ACCEPT=*/*
2019-02-28 10:29:54.767272 7f67a550e700 20 HTTP_ACCEPT_ENCODING=gzip, 
deflate
2019-02-28 10:29:54.767274 7f67a550e700 20 HTTP_AUTHORIZATION=AWS 
0APxxxxxxxxx:vxxxxxxxICq/qQXxxxxxxxxxuuc=
[...]
2019-02-28 10:29:54.767294 7f67a550e700 20 REMOTE_ADDR=10.xx.xxx.xxx
2019-02-28 10:29:54.767295 7f67a550e700 20 REQUEST_METHOD=GET
2019-02-28 10:29:54.767296 7f67a550e700 20 REQUEST_URI=/admin/usage
2019-02-28 10:29:54.767297 7f67a550e700 20 SCRIPT_URI=/admin/usage
[...]
in_hosted_domain_s3website=0 s->info.domain=rgw.xxxxxx 
s->info.request_uri=/admin/usage
2019-02-28 10:29:54.767479 7f67a550e700 10 handler=16RGWHandler_Usage
2019-02-28 10:29:54.767491 7f67a550e700  2 req 215268:0.000183::GET 
/admin/usage::getting op 0
2019-02-28 10:29:54.767495 7f67a550e700 10 op=15RGWOp_Usage_Get
2019-02-28 10:29:54.767496 7f67a550e700  2 req 215268:0.000196::GET 
/admin/usage:get_usage:verifying requester
2019-02-28 10:29:54.767528 7f67a550e700 20 
rgw::auth::StrategyRegistry::s3_main_strategy_t: trying 
rgw::auth::s3::AWSAuthStrategy
2019-02-28 10:29:54.767530 7f67a550e700 20 
rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::S3AnonymousEngine
2019-02-28 10:29:54.767558 7f67a550e700 20 
rgw::auth::s3::S3AnonymousEngine denied with reason=-1
2019-02-28 10:29:54.767561 7f67a550e700 20 
rgw::auth::s3::AWSAuthStrategy: trying 
rgw::auth::s3::AWSv2ExternalAuthStrategy
2019-02-28 10:29:54.767573 7f67a550e700 20 
rgw::auth::s3::AWSv2ExternalAuthStrategy: trying 
rgw::auth::keystone::EC2Engine
2019-02-28 10:29:54.767602 7f67a550e700 10 get_canon_resource(): 
dest=/admin/usage
2019-02-28 10:29:54.767605 7f67a550e700 10 string_to_sign:
2019-02-28 10:29:54.767630 7f67a550e700 20 sending request to 
https://keystone.xxxxxxxxx/v3/auth/tokens
2019-02-28 10:29:54.767678 7f67a550e700 20 ssl verification is set to off
2019-02-28 10:29:55.194088 7f67a550e700 20 sending request to 
https://keystone.xxxxxxxx/v3/s3tokens
2019-02-28 10:29:55.194115 7f67a550e700 20 ssl verification is set to off
2019-02-28 10:29:55.226841 7f67a550e700 20 
rgw::auth::keystone::EC2Engine denied with reason=-2028
2019-02-28 10:29:55.226852 7f67a550e700 20 
rgw::auth::s3::AWSv2ExternalAuthStrategy denied with reason=-2028
2019-02-28 10:29:55.226855 7f67a550e700 20 
rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::LocalEngine
2019-02-28 10:29:55.226878 7f67a550e700 10 get_canon_resource(): 
dest=/admin/usage
2019-02-28 10:29:55.226881 7f67a550e700 10 string_to_sign:
2019-02-28 10:29:55.226943 7f67a550e700 15 string_to_sign=GET
[...]
2019-02-28 10:29:55.226972 7f67a550e700 15 compare=0
2019-02-28 10:29:55.227099 7f67a550e700 20 rgw::auth::s3::LocalEngine 
granted access
2019-02-28 10:29:55.227103 7f67a550e700 20 
rgw::auth::s3::AWSAuthStrategy granted access
2019-02-28 10:29:55.227106 7f67a550e700  2 req 215268:0.459806::GET 
/admin/usage:get_usage:normalizing buckets and tenants
2019-02-28 10:29:55.227109 7f67a550e700  2 req 215268:0.459809::GET 
/admin/usage:get_usage:init permissions
2019-02-28 10:29:55.227136 7f67a550e700  2 req 215268:0.459826::GET 
/admin/usage:get_usage:recalculating target
2019-02-28 10:29:55.227140 7f67a550e700  2 req 215268:0.459841::GET 
/admin/usage:get_usage:reading permissions
2019-02-28 10:29:55.227142 7f67a550e700  2 req 215268:0.459842::GET 
/admin/usage:get_usage:init op
2019-02-28 10:29:55.227143 7f67a550e700  2 req 215268:0.459844::GET 
/admin/usage:get_usage:verifying op mask
2019-02-28 10:29:55.227145 7f67a550e700 20 required_mask= 0 user.op_mask=7
2019-02-28 10:29:55.227146 7f67a550e700  2 req 215268:0.459846::GET 
/admin/usage:get_usage:verifying op permissions
2019-02-28 10:29:55.227148 7f67a550e700  2 req 215268:0.459848::GET 
/admin/usage:get_usage:verifying op params
2019-02-28 10:29:55.227149 7f67a550e700  2 req 215268:0.459850::GET 
/admin/usage:get_usage:pre-executing
2019-02-28 10:29:55.227150 7f67a550e700  2 req 215268:0.459851::GET 
/admin/usage:get_usage:executing
2019-02-28 10:29:55.232830 7f67a550e700  2 req 215268:0.465530::GET 
/admin/usage:get_usage:completing
2019-02-28 10:29:55.232858 7f67a550e700  2 req 215268:0.465559::GET 
/admin/usage:get_usage:op status=0
2019-02-28 10:29:55.232863 7f67a550e700  2 req 215268:0.465564::GET 
/admin/usage:get_usage:http status=200
2019-02-28 10:29:55.232865 7f67a550e700  1 ====== req done 
req=0x7f67a55080a0 op status=0 http_status=200 ======
2019-02-28 10:29:55.232894 7f67a550e700  1 civetweb: 0x55ced7bd2000: 
10.0.81.59 - - [28/Feb/2019:10:29:54 +0100] "GET 
/admin/usage?uid=2534a3e876ee41f088098fxxxxxxxxx%242534a3e876ee41f088098f53xxxxxxx 
HTTP/1.1" 200 0 - python-requests/2.19.1


Using curl to do the same request gives me the correct usage results:
{"entries":[{"user":"a772e4abxxxxxxxx4559d$a772e4ab888exxxxxxxf4559d","buckets":[{"bucket":"","time":"2019-02-26 
12:00:00.000000Z","epoch":1551182400,"owner":"a772e4ab88xxxxxxx88e4f039b3430d688f4559d","categories":[{"category":"list_buckets","bytes_sent":36,"bytes_received":0,"ops":3,"successful_ops":0}]},{"bucket":"-","time":"2019-02-17 
15:00:00.000000Z","epoch":1550415600,"owner":"a772e4ab88xxxxxxxa772e4ab888e4f039b3430d688f4559d","categories":[{"category":"get_obj","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":0},{"category":"list_bucket","bytes_sent":132,"bytes_received":0,"ops":1,"successful_ops":0}]},{"bucket":"info","time":"2019-02-26 
12:00:00.000000Z","epoch":1551182400,"owner":"a772e4ab888e4f039xxxxxx72e4ab888e4f039b3430d688f4559d","categories":[{"category":"RGWMovedPermanently","bytes_sent":0,"bytes_received":0,"ops":3,"successful_ops":3}]},{"bucket":"test","time":"2019-02-17 
15:00:00.000000Z","epoch":1550415600,"owner":"a772e4ab8xxxxxxd$a772e4ab888e4f039b3430d688f4559d","categories":[{"category":"create_bucket","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":1},{"category":"list_bucket","bytes_sent":1368,"bytes_received":0,"ops":18,"successful_ops":18},{"category":"put_obj","bytes_sent":0,"bytes_received":10,"ops":1,"successful_ops":1}]}]}],"summary":[{"user":"a772e4abxxxx4f0xxx8f4559d$a772e4ab88xxxxxx30d688f4559d","categories":[{"category":"RGWMovedPermanently","bytes_sent":0,"bytes_received":0,"ops":3,"successful_ops":3},{"category":"create_bucket","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":1},{"category":"get_obj","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":0},{"category":"list_bucket","bytes_sent":1500,"bytes_received":0,"ops":19,"successful_ops":18},{"category":"list_buckets","bytes_sent":36,"bytes_received":0,"ops":3,"successful_ops":0},{"category":"put_obj","bytes_sent":0,"bytes_received":10,"ops":1,"successful_ops":1}],"total":{"bytes_sent":153* 
Connection #0 to host 10.xxx.xxx.xxx left intact
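
For reference, reducing that payload to per-category totals only needs the
stdlib (a sketch; "usage.json" is assumed to be the saved curl response body):

import json

with open("usage.json") as f:
    usage = json.load(f)

for user in usage["summary"]:
    for cat in user["categories"]:
        print(user["user"], cat["category"],
              "ops=%d" % cat["ops"], "bytes_sent=%d" % cat["bytes_sent"])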


My custom gnocchi resources definition (custom_gnocchi_resources.yaml) looks like:
[...]
   - resource_type: ceph_account
     metrics:
       radosgw.objects:
       radosgw.objects.size:
       radosgw.objects.containers:
       radosgw.api.request:
       radosgw.containers.objects:
       radosgw.containers.objects.size:
[...]
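
A quick sanity check of that file (a sketch, assuming it keeps the stock
top-level "resources:" key) is to dump which meters it maps to ceph_account
and compare them against the meters in the polling log above:

import yaml

with open("custom_gnocchi_resources.yaml") as f:
    defs = yaml.safe_load(f)

for res in defs["resources"]:
    if res["resource_type"] == "ceph_account":
        print(sorted(res["metrics"]))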

and the pipeline:
[...]
sources:
     - name: meter_source
       meters:
           - "*"
       sinks:
           - meter_sink
[...]
sinks:
     - name: meter_sink
       transformers:
       publishers:
           - gnocchi://?resources_definition_file=%2Fetc%2Fceilometer%2Fcustom_gnocchi_resources.yaml
[...]
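
The %2F escapes in the publisher URL just decode to a plain path, which can
be verified with Python's urllib:

from urllib.parse import unquote
print(unquote("%2Fetc%2Fceilometer%2Fcustom_gnocchi_resources.yaml"))
# prints: /etc/ceilometer/custom_gnocchi_resources.yaml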

Anything wrong with my resource definitions or the pipeline?

All the best,
Florian

On 2/27/19 at 8:38 PM, Engelmann Florian wrote:
> Hi Christian,
> 
> 
> looks like a hit:
> 
> 
> https://github.com/openstack/ceilometer/commit/c9eb2d44df7cafde1294123d66445ebef4cfb76d
> 
> 
> You made my day!
> 
> 
> I will test tomorrow and report back!
> 
> All the best,
> 
> Florian
> 
> 
> ------------------------------------------------------------------------
> *From:* Engelmann Florian <florian.engelmann at everyware.ch>
> *Sent:* Wednesday, February 27, 2019 8:33 PM
> *To:* Christian Zunker
> *Cc:* openstack-discuss at lists.openstack.org
> *Subject:* Re: [ceilometer] radosgw pollster
> 
> Hi Christian,
> 
> 
> thank you for your feedback and help! Permissions are fine, as I was able
> to poll the endpoint successfully with curl using the user (key + secret)
> we created (which is also configured in ceilometer.conf).
> 
> I saw that requests-aws is used in OSA, and it is indeed missing from the
> kolla container (we use "source", not binary).
> 
> 
> https://github.com/openstack/kolla/blob/master/docker/ceilometer/ceilometer-base/Dockerfile.j2
> 
> 
> I will build a new ceilometer container including requests-aws tomorrow 
> to see if this fixes the problem.
> 
> 
> All the best,
> 
> Florian
> 
> 
> ------------------------------------------------------------------------
> *From:* Christian Zunker <christian.zunker at codecentric.cloud>
> *Sent:* Wednesday, February 27, 2019 9:09 AM
> *To:* Engelmann Florian
> *Cc:* openstack-discuss at lists.openstack.org
> *Subject:* Re: [ceilometer] radosgw pollster
> Hi Florian,
> 
> have you tried different permissions for your ceilometer user in radosgw?
> According to the docs you need an admin user:
> https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage
> Our user has these caps:
> usage=read,write;metadata=read,write;users=read,write;buckets=read,write
> 
> We also had to add the requests-aws pip package to query radosgw from 
> ceilometer:
> https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html
> 
> Christian
> 
> 
> On Tue, 26 Feb 2019 at 13:15, Florian Engelmann
> <florian.engelmann at everyware.ch> wrote:
> 
>     Hi Christian,
> 
>     On 2/26/19 at 11:00 AM, Christian Zunker wrote:
>      > Hi Florian,
>      >
>      > which version of OpenStack are you using?
>      > The radosgw metric names were different in some versions:
>      > https://bugs.launchpad.net/ceilometer/+bug/1726458
> 
>     We use Rocky and Ceilometer 11.0.1. I am still lost with this error.
>     As far as I can tell from the Python code, the error is raised in
>     polling/manager.py line 222:
> 
>     https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222
> 
>     But I do not understand why. I tried to enable debug logging, but the
>     error does not log any additional information.
>     The poller is not even trying to reach or poll our RadosGWs. It looks
>     like the manager is blocking those polls.
> 
>     All the best,
>     Florian
> 
> 
>      >
>      > Christian
>      >
>      > On Fri, 22 Feb 2019 at 17:40, Florian Engelmann
>      > <florian.engelmann at everyware.ch> wrote:
>      >
>      >     Hi,
>      >
>      >     I failed to poll any usage data from our radosgw. I get
>      >
>      >     2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager
>     [-] Polling
>      >     pollster radosgw.containers.objects in the context of
>      >     radosgw_300s_pollsters
>      >     2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager
>     [-] Prevent
>      >     pollster radosgw.containers.objects from polling [<Project
>      >     description=,
>      >     domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True,
>      >
>      >     [...]
>      >
>      >     id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False,
>     links={u'self':
>      >     u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on
>     source
>      >     radosgw_300s_pollsters anymore!: PollsterPermanentError
>      >
>      >     Configurations like:
>      >     cat polling.yaml
>      >     ---
>      >     sources:
>      >           - name: radosgw_300s_pollsters
>      >             interval: 300
>      >             meters:
>      >               - radosgw.usage
>      >               - radosgw.objects
>      >               - radosgw.objects.size
>      >               - radosgw.objects.containers
>      >               - radosgw.containers.objects
>      >               - radosgw.containers.objects.size
>      >
>      >
>      >     Also tried radosgw.api.requests instead of radosgw.usage.
>      >
>      >     ceilometer.conf
>      >     [...]
>      >     [service_types]
>      >     radosgw = object-store
>      >
>      >     [rgw_admin_credentials]
>      >     access_key = xxxxx0Z0xxxxxxxxxxxx
>      >     secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx
>      >
>      >     [rgw_client]
>      >     implicit_tenants = true
>      >
>      >     Endpoints:
>      >     | xxxxxxx | region | swift | object-store | True | admin    | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s |
>      >     | xxxxxxx | region | swift | object-store | True | internal | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s |
>      >     | xxxxxxx | region | swift | object-store | True | public   | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s      |
>      >
>      >     Ceilometer user:
>      >     {
>      >           "user_id": "ceilometer",
>      >           "display_name": "ceilometer",
>      >           "email": "",
>      >           "suspended": 0,
>      >           "max_buckets": 1000,
>      >           "auid": 0,
>      >           "subusers": [],
>      >           "keys": [
>      >               {
>      >                   "user": "ceilometer",
>      >                   "access_key": "xxxxxxxxxxxxxxxxxx",
>      >                   "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx"
>      >               }
>      >           ],
>      >           "swift_keys": [],
>      >           "caps": [
>      >               {
>      >                   "type": "buckets",
>      >                   "perm": "read"
>      >               },
>      >               {
>      >                   "type": "metadata",
>      >                   "perm": "read"
>      >               },
>      >               {
>      >                   "type": "usage",
>      >                   "perm": "read"
>      >               },
>      >               {
>      >                   "type": "users",
>      >                   "perm": "read"
>      >               }
>      >           ],
>      >           "op_mask": "read, write, delete",
>      >           "default_placement": "",
>      >           "placement_tags": [],
>      >           "bucket_quota": {
>      >               "enabled": false,
>      >               "check_on_raw": false,
>      >               "max_size": -1,
>      >               "max_size_kb": 0,
>      >               "max_objects": -1
>      >           },
>      >           "user_quota": {
>      >               "enabled": false,
>      >               "check_on_raw": false,
>      >               "max_size": -1,
>      >               "max_size_kb": 0,
>      >               "max_objects": -1
>      >           },
>      >           "temp_url_keys": [],
>      >           "type": "rgw"
>      >     }
>      >
>      >
>      >     radosgw config:
>      >     [client.rgw.xxxxxxxxxxx]
>      >     host = somehost
>      >     rgw frontends = "civetweb port=7480 num_threads=512"
>      >     rgw num rados handles = 8
>      >     rgw thread pool size = 512
>      >     rgw cache enabled = true
>      >     rgw dns name = s3.xxxxxx.xxx
>      >     rgw enable usage log = true
>      >     rgw usage log tick interval = 30
>      >     rgw realm = public
>      >     rgw zonegroup = xxx
>      >     rgw zone = xxxxx
>      >     rgw resolve cname = False
>      >     rgw usage log flush threshold = 1024
>      >     rgw usage max user shards = 1
>      >     rgw usage max shards = 32
>      >     rgw_keystone_url = https://keystone.xxxxxxxxxxxxx
>      >     rgw_keystone_admin_domain = default
>      >     rgw_keystone_admin_project = service
>      >     rgw_keystone_admin_user = swift
>      >     rgw_keystone_admin_password =
>      >     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>      >     rgw_keystone_accepted_roles = member,_member_,admin
>      >     rgw_keystone_accepted_admin_roles = admin
>      >     rgw_keystone_api_version = 3
>      >     rgw_keystone_verify_ssl = false
>      >     rgw_keystone_implicit_tenants = true
>      >     rgw_keystone_admin_tenant = default
>      >     rgw_keystone_revocation_interval = 0
>      >     rgw_keystone_token_cache_size = 0
>      >     rgw_s3_auth_use_keystone = true
>      >     rgw_max_attr_size = 1024
>      >     rgw_max_attrs_num_in_req = 32
>      >     rgw_max_attr_name_len = 64
>      >     rgw_swift_account_in_url = true
>      >     rgw_swift_versioning_enabled = true
>      >     rgw_enable_apis = s3,swift,swift_auth,admin
>      >     rgw_swift_enforce_content_length = true
>      >
>      >
>      >
>      >
>      >     Any idea what's going on?
>      >
>      >     All the best,
>      >     Florian
>      >
>      >
>      >
> 
>     -- 
> 
>     EveryWare AG
>     Florian Engelmann
>     Senior UNIX Systems Engineer
>     Zurlindenstrasse 52a
>     CH-8003 Zürich
> 
>     tel: +41 44 466 60 00
>     fax: +41 44 466 60 10
>     mail: florian.engelmann at everyware.ch
>     web: http://www.everyware.ch
> 
> 
> 

-- 

EveryWare AG
Florian Engelmann
Senior UNIX Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: florian.engelmann at everyware.ch
web: http://www.everyware.ch