[openstack-dev] [gnocchi] Redis for storage backend

gordon chung gord at live.ca
Wed Oct 18 17:09:55 UTC 2017

On 2017-10-18 12:15 PM, Yaguang Tang wrote:
> We launched 300vms and each vm has about 10 metrics, OpenStack cluster 
> have 3 controllers and 2 compute nodes(ceph replication is set to 2).

that seems smaller than my test; i have 20K metrics in mine.

> what we want to achieve is to make all metric measures data get 
> processed as soon as possible; metric processing delay is set to 10s, 
> and ceilometer polling interval is 30s.

are you batching the data you push to gnocchi? in gnocchi 4.1, the redis 
driver will (attempt to) process measures immediately, rather than 
cyclically based on the metric processing delay.
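to illustrate batching: instead of one POST per measure, gnocchi exposes a batch endpoint (POST /v1/batch/metrics/measures) that accepts measures for many metrics at once. a minimal sketch of building that payload shape; the metric IDs, timestamps, and values below are made up for illustration:

```python
# Sketch: grouping individual samples into one batched payload for
# gnocchi's batch-measures endpoint. All IDs/values are hypothetical.

def build_batch(samples):
    """Group (metric_id, timestamp, value) samples into the shape the
    batch endpoint expects:
    {metric_id: [{"timestamp": ..., "value": ...}, ...]}"""
    batch = {}
    for metric_id, timestamp, value in samples:
        batch.setdefault(metric_id, []).append(
            {"timestamp": timestamp, "value": value})
    return batch

samples = [
    ("cpu-metric-id", "2017-10-18T17:00:00", 41.0),
    ("cpu-metric-id", "2017-10-18T17:00:30", 43.5),
    ("mem-metric-id", "2017-10-18T17:00:00", 512.0),
]
payload = build_batch(samples)
# three samples, one HTTP call; e.g. with gnocchiclient this payload
# could be pushed via client.metric.batch_metrics_measures(payload)
```

fewer, larger POSTs means fewer round trips and fewer wake-ups for the processing workers.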

> when the backend of incoming and storage is set to ceph, the average of 
> "gnocchi status"
> shows that there are around 7000 measures waiting to be processed, but 
> when changing the incoming and storage backend to Redis, the result of 
> gnocchi status shows unprocessed measures is around 200.

i should clarify: a high gnocchi status is not a bad thing in itself, ie, 
if you just sent a large spike of measures, it's expected to jump. it's 
only bad if it never shrinks.

that said, how many workers do you have? i have 18 workers for 20K 
metrics and it takes about 2 minutes, i believe. do you have debug 
enabled? how long does it take to process a metric?
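for reference, the metricd worker count is set in gnocchi.conf; a sketch using the worker count from my test above (your value will depend on your hardware and backlog):

```ini
[metricd]
# number of asynchronous processing workers; 18 matched
# my 20K-metric test above
workers = 18
```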

when i tested gnocchi+ceph vs gnocchi+redis, i didn't see a very large 
difference in performance (redis was definitely better, though). maybe 
it's your ceph environment?

> we tried to add more metricd processes on every controller node to 
> improve the calculation and write speed to the storage backend, but it 
> had little effect.

performance should scale (roughly) proportionally. ie. if you 2x 
metricd, you should process (almost) 2x quicker; if you add 5% more 
metricd, (almost) 5% quicker.
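a back-of-envelope sketch of that linear-scaling claim: backlog drain time versus worker count, assuming each metricd worker processes measures at a fixed rate (the backlog and per-worker rate below are made-up numbers, not measurements):

```python
# Hypothetical numbers: 7000 backlogged measures, 10 measures/s
# per worker. Drain time should halve when workers double.

def drain_time(backlog, workers, rate_per_worker):
    """Seconds to clear `backlog` measures with `workers` workers,
    each processing `rate_per_worker` measures/second."""
    return backlog / (workers * rate_per_worker)

base = drain_time(7000, 9, 10.0)      # 9 workers
doubled = drain_time(7000, 18, 10.0)  # 2x workers, ~2x quicker
```

in practice the speedup is "almost" rather than exactly linear, since workers contend for the shared incoming/storage backend.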


