[Openstack] [Ceilometer/Heat/Havana]: Ceilometer statistics not available for custom metrics.
Juha Tynninen
juha.tynninen at tieto.com
Mon Apr 7 05:20:52 UTC 2014
Hi Eoghan,
Many thanks for the clarification.
Br,
-Juha
On 4 April 2014 18:26, Eoghan Glynn <eglynn at redhat.com> wrote:
>
>
> > Hi Juha,
> >
> > Smells like a bug in the sample POST API, in the sense that:
> >
> > "resource_metadata" : { ... "user_metadata": {"server_group": "Group_B"} }
> >
> > is flattened to:
> >
> > "metadata": { ... "user_metadata.server_group": "Group_B" }
> >
> > in the metering message generated from the sample.
> >
> > I'll dig some more and file a bug. BTW, what exact version are you using?
>
> Here's the promised bug with a detailed explanation of why this issue
> occurs:
>
> https://bugs.launchpad.net/ceilometer/+bug/1302664
>
> Cheers,
> Eoghan
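[Archive note: the flattening Eoghan describes can be sketched in plain Python. `flatten_metadata` below is a hypothetical stand-in for what the sample POST path effectively does before handing the document to the storage driver, not ceilometer's actual code; the dotted key it produces is exactly what MongoDB (pre-3.6) rejects with "not okForStorage".]

```python
# Hypothetical sketch of how a nested resource_metadata dict ends up
# with dotted keys in the metering message built from a POSTed sample.
def flatten_metadata(metadata, parent_key=""):
    """Recursively flatten nested dicts into dotted keys."""
    flat = {}
    for key, value in metadata.items():
        full_key = parent_key + "." + key if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_metadata(value, full_key))
        else:
            flat[full_key] = value
    return flat

sample = {
    "AutoScalingGroupName": "tykyauto-Group_B-ljhkoj244qzh",
    "user_metadata": {"server_group": "Group_B"},
}
flat = flatten_metadata(sample)
# The nested dict becomes a single key containing a '.', which MongoDB
# refuses to store -- hence the okForStorage failure in the collector.
```

The nested `{"user_metadata": {"server_group": "Group_B"}}` comes out as the literal key `"user_metadata.server_group"`, which is a forbidden field name in MongoDB documents.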
>
> > Cheers,
> > Eoghan
> >
> > ----- Original Message -----
> > > Hi Eoghan,
> > >
> > > Thank you. Tried this, but unfortunately it seems the user_metadata part
> > > causes a failure.
> > > A command having just AutoScalingGroupName as metadata goes through OK:
> > >
> > > curl -X POST -H 'X-Auth-Token: f66288ad283d4ee0a322d03c95db2a4b' \
> > >   -H 'Content-Type: application/json' \
> > >   -d '[ { "counter_name": "vm_cpu_load",
> > >          "resource_id": "8d326543-cde9-4c3c-9c7e-3973cfbcb057",
> > >          "resource_metadata" : { "AutoScalingGroupName": "tykyauto-Group_B-ljhkoj244qzh" },
> > >          "counter_unit": "%", "counter_volume": 11, "counter_type": "gauge" } ]' \
> > >   http://192.168.100.5:8777/v2/meters/vm_cpu_load
> > >
> > > ...but when I add the user_metadata part:
> > >
> > > curl -X POST -H 'X-Auth-Token: f66288ad283d4ee0a322d03c95db2a4b' \
> > >   -H 'Content-Type: application/json' \
> > >   -d '[ { "counter_name": "vm_cpu_load",
> > >          "resource_id": "8d326543-cde9-4c3c-9c7e-3973cfbcb057",
> > >          "resource_metadata" : { "AutoScalingGroupName": "tykyauto-Group_B-ljhkoj244qzh",
> > >                                  "user_metadata": {"server_group": "Group_B"} },
> > >          "counter_unit": "%", "counter_volume": 11, "counter_type": "gauge" } ]' \
> > >   http://192.168.100.5:8777/v2/meters/vm_cpu_load
> > >
> > > ...the following error occurs:
> > >
> > > <43>Apr 4 09:08:56 node-6 ceilometer-ceilometer.collector.dispatcher.database ERROR: Failed to record metering data: not okForStorage
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python2.7/dist-packages/ceilometer/collector/dispatcher/database.py", line 65, in record_metering_data
> > >     self.storage_conn.record_metering_data(meter)
> > >   File "/usr/lib/python2.7/dist-packages/ceilometer/storage/impl_mongodb.py", line 451, in record_metering_data
> > >     upsert=True,
> > >   File "/usr/lib/python2.7/dist-packages/pymongo/collection.py", line 487, in update
> > >     check_keys, self.__uuid_subtype), safe)
> > >   File "/usr/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 969, in _send_message
> > >     rv = self.__check_response_to_last_error(response)
> > >   File "/usr/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 911, in __check_response_to_last_error
> > >     raise OperationFailure(details["err"], details["code"])
> > > OperationFailure: not okForStorage
> > >
> > > I have mongo configured as the database for Ceilometer (otherwise the
> > > ceilometer-alarm-evaluator error ERROR: Server-side error: "metaquery not
> > > implemented" occurs).
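[Archive note: for reference, pointing Ceilometer at MongoDB on Havana is done via the `[database]` section of `ceilometer.conf`; the host, credentials, and database name below are placeholders, not values from this thread.]

```ini
# ceilometer.conf -- placeholder host/credentials
[database]
connection = mongodb://ceilometer:CEILOMETER_DBPASS@192.168.100.5:27017/ceilometer
```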
> > >
> > > In our environment the following versions of Ceilometer components have
> > > been installed:
> > >
> > > 2013.2.2:
> > > ceilometer-agent-central
> > > ceilometer-alarm-evaluator
> > > ceilometer-alarm-notifier
> > > ceilometer-api
> > > ceilometer-collector
> > > ceilometer-common
> > > python-ceilometer
> > >
> > > 1:1.0.5:
> > > python-ceilometerclient
> > >
> > > Br,
> > > -Juha
> > >
> > >
> > > On 3 April 2014 17:18, Eoghan Glynn <eglynn at redhat.com> wrote:
> > >
> > > >
> > > > Juha,
> > > >
> > > > Your problem is the embedded period in the metadata key:
> > > > "metering.server_group"
> > > >
> > > > If the metric were gathered by ceilometer itself in the usual way, then
> > > > the compute agent would transform that problematic payload as follows,
> > > > from:
> > > >
> > > > { ..., "resource_metadata" : { "AutoScalingGroupName":
> > > >   "tykyauto-Group_B-hmknsgn35efz", "metering.server_group": "Group_B" },
> > > >   ... }
> > > >
> > > > to:
> > > >
> > > > { ..., "resource_metadata" : { "AutoScalingGroupName":
> > > > "tykyauto-Group_B-hmknsgn35efz", "user_metadata": {"server_group":
> > > > "Group_B"} }, ... }
> > > >
> > > > You should follow the same pattern.
> > > >
> > > > Cheers,
> > > > Eoghan
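[Archive note: the transformation Eoghan describes, where the compute agent rewrites `metering.*` keys into a nested `user_metadata` dict, can be sketched as below. The function name is a hypothetical illustration, not ceilometer's actual helper.]

```python
# Hypothetical sketch of the compute agent's rewrite: "metering.<key>"
# entries in resource metadata move under a nested "user_metadata" dict.
METERING_PREFIX = "metering."

def extract_user_metadata(resource_metadata):
    """Move 'metering.*' keys under a nested 'user_metadata' dict."""
    result = {}
    user_metadata = {}
    for key, value in resource_metadata.items():
        if key.startswith(METERING_PREFIX):
            # Strip the prefix so no dot survives in any stored key.
            user_metadata[key[len(METERING_PREFIX):]] = value
        else:
            result[key] = value
    if user_metadata:
        result["user_metadata"] = user_metadata
    return result

transformed = extract_user_metadata({
    "AutoScalingGroupName": "tykyauto-Group_B-hmknsgn35efz",
    "metering.server_group": "Group_B",
})
```

This is the shape (`"user_metadata": {"server_group": "Group_B"}`) that the alarm evaluator's `matching_metadata` query expects to find on each sample.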
> > > >
> > > > ----- Original Message -----
> > > > > Maybe this is because I didn't fill in instance-related metadata
> > > > > (scaling group name and such) in the REST call I made when adding
> > > > > custom metric data to Ceilometer. I tried to create metric data
> > > > > again, now with the metadata filled in:
> > > > >
> > > > > $ curl -X POST -H 'X-Auth-Token: 0722fcd0f403425bb8564808c37e8dc8' \
> > > > >     -H 'Content-Type: application/json' \
> > > > >     -d '[ { "counter_name": "vm_cpu_load",
> > > > >            "resource_id": "e7eaf484-38b6-4689-8490-40aa8f0df8ae",
> > > > >            "resource_metadata" : { "AutoScalingGroupName": "tykyauto-Group_B-hmknsgn35efz",
> > > > >                                    "metering.server_group": "Group_B" },
> > > > >            "counter_unit": "%", "counter_volume": 11, "counter_type": "gauge" } ]' \
> > > > >     http://192.168.100.5:8777/v2/meters/vm_cpu_load
> > > > >
> > > > > ...but as a result I can see the following error in the ceilometer log:
> > > > >
> > > > > <43>Apr 3 14:24:01 node-6 ceilometer-ceilometer.collector.dispatcher.database ERROR: Failed to record metering data: not okForStorage
> > > > > Traceback (most recent call last):
> > > > >   File "/usr/lib/python2.7/dist-packages/ceilometer/collector/dispatcher/database.py", line 65, in record_metering_data
> > > > >     self.storage_conn.record_metering_data(meter)
> > > > >   File "/usr/lib/python2.7/dist-packages/ceilometer/storage/impl_mongodb.py", line 451, in record_metering_data
> > > > >     upsert=True,
> > > > >   File "/usr/lib/python2.7/dist-packages/pymongo/collection.py", line 487, in update
> > > > >     check_keys, self.__uuid_subtype), safe)
> > > > >   File "/usr/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 969, in _send_message
> > > > >     rv = self.__check_response_to_last_error(response)
> > > > >   File "/usr/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 911, in __check_response_to_last_error
> > > > >     raise OperationFailure(details["err"], details["code"])
> > > > > OperationFailure: not okForStorage
> > > > >
> > > > > Hmm. What am I doing wrong here?
> > > > >
> > > > > Thanks,
> > > > > -Juha
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On 2 April 2014 14:04, Juha Tynninen <juha.tynninen at tieto.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > > I'm sending custom Ceilometer metrics from inside a VM instance with a
> > > > > REST call to http://192.168.100.5:8777/v2/meters/vm_cpu_load .
> > > > >
> > > > > This is successful and I can see the entered metric data with
> > > > > Ceilometer:
> > > > >
> > > > > # ceilometer sample-list -m vm_cpu_load \
> > > > >     -q="resource_id=91951d0a-9a43-4894-99fb-ac67a1098771" \
> > > > >     | tail -n +4 | head -n -1 | sort -k 12
> > > > > ...
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 2.6  | % | 2014-03-30T19:20:38.080000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 3.3  | % | 2014-03-30T19:20:58.223000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 2.6  | % | 2014-03-30T19:21:18.078000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 28.6 | % | 2014-03-30T19:21:38.894000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 1.0  | % | 2014-03-30T19:21:59.370000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 2.3  | % | 2014-03-30T19:22:20.255000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 0.3  | % | 2014-03-30T19:22:40.351000 |
> > > > > | 91951d0a-9a43-4894-99fb-ac67a1098771 | vm_cpu_load | gauge | 1.9  | % | 2014-03-30T19:23:00.317000 |
> > > > >
> > > > > # ceilometer meter-list | grep vm_cpu_load | grep 91951d0a-9a43-4894-99fb-ac67a1098771
> > > > > | vm_cpu_load | gauge | % | 91951d0a-9a43-4894-99fb-ac67a1098771 | 2884e2f624224227bb63d77a040126e6 | a12aee6f0da04d8d976eb4b761a73e14 |
> > > > >
> > > > > I've started the instance with a Heat template that has AutoScaling
> > > > > defined, and I'm trying to base the scaling actions on this custom
> > > > > metric. The problem is that the autoscaling does not occur.
> > > > >
> > > > > "Resources" : {
> > > > >
> > > > > "Group_B" : {
> > > > > "Type" : "AWS::AutoScaling::AutoScalingGroup",
> > > > > "Properties" : {
> > > > > "AvailabilityZones" : { "Fn::GetAZs" : ""},
> > > > > "LaunchConfigurationName" : { "Ref" : "Group_B_Config" },
> > > > > "MinSize" : "1",
> > > > > "MaxSize" : "3",
> > > > > "Tags" : [
> > > > > { "Key" : "metering.server_group", "Value" : "Group_B" }
> > > > > ],
> > > > > "VPCZoneIdentifier" : [ { "Ref" : "Private Application Subnet ID" } ]
> > > > > }
> > > > > },
> > > > > ...
> > > > > "Group_B_Config" : {
> > > > > "Type" : "AWS::AutoScaling::LaunchConfiguration",
> > > > > "Properties": {
> > > > > "ImageId" : { "Ref" : "Image Id" },
> > > > > "InstanceType" : { "Ref" : "Instance Type" },
> > > > > "KeyName" : { "Ref" : "Key Name" }
> > > > > }
> > > > > },
> > > > > ...
> > > > > "CPUAlarmHigh": {
> > > > > "Type": "OS::Ceilometer::Alarm",
> > > > > "Properties": {
> > > > > "description": "Scale-up if CPU is greater than 80% for 60 seconds",
> > > > > "meter_name": "vm_cpu_load",
> > > > > "statistic": "avg",
> > > > > "period": "60",
> > > > > "evaluation_periods": "1",
> > > > > "threshold": "80",
> > > > > "alarm_actions":
> > > > > [ {"Fn::GetAtt": ["ScaleUpPolicy", "AlarmUrl"]} ],
> > > > > "matching_metadata":
> > > > > {"metadata.user_metadata.server_group": "Group_B" },
> > > > > "comparison_operator": "gt",
> > > > > "repeat_actions" : true
> > > > > }
> > > > > },
> > > > > ...
> > > > > nova show 91951d0a-9a43-4894-99fb-ac67a1098771
> > > > > ...
> > > > > | metadata | {u'AutoScalingGroupName': u'tykyauto-Group_B-76nubm24bnf6', u'metering.server_group': u'Group_B'} |
> > > > >
> > > > > For some reason the statistics query does not return anything when
> > > > > queried with the scaling group name, which probably explains why the
> > > > > autoscaling actions are not triggered...? Without the query parameter,
> > > > > data is returned. Data is also returned OK for some other counters,
> > > > > e.g. for cpu_util.
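[Archive note: a plausible reading of why the metaquery misses, consistent with the bug Eoghan later filed: the sample POST API stores the nested `user_metadata` as a literal dotted key, while the query resolves `metadata.user_metadata.server_group` as nested lookups. `nested_lookup` below is a hypothetical illustration of that resolution, not ceilometer's code.]

```python
def nested_lookup(doc, dotted_path):
    """Resolve 'a.b.c' as nested dict lookups, the way a metaquery would."""
    current = doc
    for part in dotted_path.split("."):
        if not isinstance(current, dict) or part not in current:
            return None
        current = current[part]
    return current

# What the POSTed sample's metadata looks like if the API flattened it
# into a literal dotted key (per the bug above):
stored = {"user_metadata.server_group": "Group_B"}
# What the nested lookup would need in order to match:
expected = {"user_metadata": {"server_group": "Group_B"}}

# The flattened document never matches, so statistics come back empty,
# while a properly nested document matches fine.
nested_lookup(stored, "user_metadata.server_group")    # None -> no statistics
nested_lookup(expected, "user_metadata.server_group")  # "Group_B" -> match
```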
> > > > >
> > > > > # ceilometer statistics -m vm_cpu_load -q metadata.user_metadata.server_group=Group_B -p 60
> > > > >
> > > > > # ceilometer statistics -m vm_cpu_load
> > > > >
> > > > > +--------+----------------------------+----------------------------+-------+-----+-------+--------+---------------+------------+----------------------------+----------------------------+
> > > > > | Period | Period Start               | Period End                 | Count | Min | Max   | Sum    | Avg           | Duration   | Duration Start             | Duration End               |
> > > > > +--------+----------------------------+----------------------------+-------+-----+-------+--------+---------------+------------+----------------------------+----------------------------+
> > > > > | 0      | 2014-03-28T21:14:34.370000 | 2014-03-28T21:14:34.370000 | 520   | 0.3 | 100.0 | 5865.5 | 11.2798076923 | 170135.609 | 2014-03-28T21:14:34.370000 | 2014-03-30T20:30:09.979000 |
> > > > > +--------+----------------------------+----------------------------+-------+-----+-------+--------+---------------+------------+----------------------------+----------------------------+
> > > > >
> > > > > Any ideas what might be the cause for this behaviour...?
> > > > >
> > > > > Many thanks,
> > > > > -Juha
> > > > >
> > > > >
> > > > > _______________________________________________
> > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > > > Post to     : openstack at lists.openstack.org
> > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > >
> > >
> >
>