[Openstack-operators] Scaling Ceilometer compute agent?
Bill Jones
bill.jones at sungardas.com
Tue Jun 14 21:26:27 UTC 2016
Thanks for all the pointers.
Vahric, we're running into this in our lab on a compute host with 135
instances and 12 meters per instance, 3 of which we developed. That works
out to roughly 1,620 polling operations per cycle.
/Bill
On Tue, Jun 14, 2016 at 2:54 PM, Vahric Muhtaryan <vahric at doruk.net.tr>
wrote:
> Hello Bill
>
> Is it possible to share how many instances, and how many meters per
> instance, you are collecting when you get this error?
>
> I guess for scaling purposes you are talking about this, right?
> http://docs.openstack.org/ha-guide/controller-ha-telemetry.html
>
> Regards
> VM
>
> From: Bill Jones <bill.jones at sungardas.com>
> Date: Tuesday 14 June 2016 at 18:03
> To: "openstack-oper." <openstack-operators at lists.openstack.org>
> Subject: [Openstack-operators] Scaling Ceilometer compute agent?
>
> Has anyone had any experience with scaling ceilometer compute agents?
>
> We're starting to see messages like this in logs for some of our compute
> agents:
>
> WARNING ceilometer.openstack.common.loopingcall [-] task <function
> interval_task at 0x2092cf8> run outlasted interval by 293.25 sec
>
> This indicates that the compute agent failed to complete its pipeline
> processing within the allotted interval (in our case 10 minutes; the run
> above took roughly 600 + 293 = ~893 seconds). As a result, fewer instance
> samples are generated per hour than expected, which causes billing issues
> for us because of the way we calculate usage.
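>
> For reference, the interval in question is the one set per source in the
> pipeline.yaml the compute agent loads. A minimal sketch of a 10-minute
> source (the meter name here is illustrative):
>
>     sources:
>         - name: cpu_source
>           interval: 600   # seconds; the interval the warning refers to
>           meters:
>               - "cpu"
>           sinks:
>               - cpu_sink
>     sinks:
>         - name: cpu_sink
>           transformers:
>           publishers:
>               - notifier://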
>
> It looks like we have three options for addressing this: make the pipeline
> run faster, increase the interval time, or scale out the compute agents.
> I'm investigating the last of these.
>
> I think I read in the ceilometer architecture docs that the agents are
> designed to scale, but I don't see anything in the docs on how to
> facilitate that. Any pointers would be appreciated.
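>
> The closest thing I've found so far is the compute agent's workload
> partitioning, which (as I read the Mitaka docs) lets several
> ceilometer-agent-compute processes coordinate through a tooz backend and
> split the instance list between them. A sketch of the ceilometer.conf
> settings as I understand them (the Redis URL is just an example):
>
>     [coordination]
>     # shared tooz backend the agents use to form a partitioning group
>     backend_url = redis://controller:6379
>
>     [compute]
>     # allow multiple compute agents to run and divide the instances
>     workload_partitioning = True
>
> If that's the intended mechanism, confirmation from anyone running it in
> production would be great.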
>
> Thanks,
> Bill