[Openstack-operators] Ceilometer and disk IO

Paras pradhan pradhanparas at gmail.com
Thu Apr 13 16:08:01 UTC 2017

Thanks for the reply. I have seen people recommend using Ceph as a
backend for gnocchi, but we don't use Ceph yet, and it looks like gnocchi
does not support cinder backends. We use a Dell EqualLogic SAN for block
devices.
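
One thing we may try in the meantime: gnocchi writes measures through its
own storage drivers (file, ceph, swift, s3) rather than through cinder, so
a SAN volume mounted on the gnocchi host can sit behind the file driver. A
rough gnocchi.conf sketch; option names follow gnocchi's sample config
(worth double-checking against your release), and the paths and pool names
below are placeholders:

    [storage]
    # file driver: measures land on a local filesystem; the path could be
    # an iSCSI volume mounted from the EqualLogic SAN
    driver = file
    file_basepath = /var/lib/gnocchi

    # or the ceph driver suggested below, once Ceph is in place:
    # driver = ceph
    # ceph_pool = gnocchi
    # ceph_username = gnocchi
    # ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring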


On Tue, Apr 11, 2017 at 9:15 PM, Alex Hubner <alex at hubner.net.br> wrote:

> Ceilometer can be a pain in the a* if not properly configured/designed,
> especially when things start to grow. I've already seen the exact same
> situation you described on two different installations. To make things more
> complicated, some OpenStack distributions use MongoDB as a storage backend
> and do not consider a dedicated infrastructure for Ceilometer, relegating
> this important service to live, by default, on the controller nodes...
> worse: not clearly agreeing on what should be done when the service starts
> to stall, rather than simply adding more controller nodes... (yes Red Hat,
> I'm looking at you). You might consider using gnocchi and Ceph storage
> for telemetry, as was already suggested.
> For my 2 cents, here's a nice talk on the matter:
> https://www.openstack.org/videos/video/capacity-planning-saving-money-and-maximizing-efficiency-in-openstack-using-gnocchi-and-ceilometer
> []'s
> Hubner
> On Sat, Apr 8, 2017 at 2:00 PM, Paras pradhan <pradhanparas at gmail.com>
> wrote:
>> Hello
>> What kind of storage backend do you guys use if you see disk I/O
>> bottlenecks when storing ceilometer events and metrics? In my current
>> configuration I am using 300 GB 10K SAS drives (in hardware RAID 1), and
>> the iostat report does not look good (up to 100% utilization), with
>> ceilometer consuming high CPU and memory. Would it help to add more
>> spindles and move to RAID 10?
>> Thanks!
>> Paras.
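
For anyone reproducing the numbers above: iostat's extended view is one
way to watch per-device utilization; a minimal invocation (iostat comes
from the sysstat package, and the 5-second interval is an arbitrary
choice):

    # extended per-device stats in MB/s, refreshed every 5 seconds
    iostat -xm 5

A mirror pinned at 100% %util under small random writes is the typical
telemetry-database pattern; more spindles in RAID 10 generally raise
random-write throughput, and moving the telemetry volume off the OS disks
tends to help as well.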