[openstack-dev] [gnocchi] typical length of timeseries data
gordon chung
gord at live.ca
Thu Jul 28 22:05:20 UTC 2016
hi folks,
this is probably something to discuss on the ops list as well eventually,
but what do you think about shrinking the max size of timeseries chunks
from 14400 points to something smaller? i'm curious what the length of a
typical timeseries actually is. my main reason for bringing this up is
that even our default 'high' policy doesn't reach the 14400 limit, so at
most it will split into two partially filled objects. as we look to build
a more efficient storage format for v3(?), it seems like this may be an
opportunity to change the size as well (if necessary).
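to make the splitting concrete, here's a rough sketch (my own
illustration, not the actual storage driver code, which splits on
timestamp boundaries, but the shape of the problem is the same) of how
many objects a series touches:

def chunks_touched(start_offset, n_points, max_chunk=14400):
    """number of boundary-aligned, fixed-size objects a run of
    n_points touches when it starts start_offset points into a
    chunk. illustrative only."""
    first = start_offset // max_chunk
    last = (start_offset + n_points - 1) // max_chunk
    return last - first + 1

# biggest default series: 1 min granularity over a week = 10080 points
n = 7 * 24 * 60
print(chunks_touched(0, n))        # 1 object, ~70% filled
print(chunks_touched(5000, n))     # 2 partially filled objects
print(chunks_touched(0, n, 7200))  # 2 objects at a 7200-point max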
14400 points is roughly a 128KB object, which is cool, but maybe we
should target something smaller? 7200 points aka 64KB? 3600 points aka
32KB? just for reference, our biggest default series is 10080 points
(1 min granularity over a week).
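for reference, those sizes work out to roughly 9 bytes per point;
back-of-the-envelope (assuming size scales linearly with point count):

BYTES_PER_POINT = 128.0 * 1024 / 14400  # ~9.1 bytes/point, from the 128KB figure

for points in (14400, 10080, 7200, 3600):
    print(points, "points -> ~", round(points * BYTES_PER_POINT / 1024), "KB")
# 14400 -> 128 KB, 10080 -> 90 KB, 7200 -> 64 KB, 3600 -> 32 KB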
that said, 128KB (at most) might not be that bad from a read/write pov,
and maybe it's ok to keep it at 14400? i know from the test i did earlier
that the time required to read/write increases linearly (a 7200 point
object takes roughly half the time of a 14400 point object)[1]. i think
the main concern is that we don't want chunks so small that we end up
updating multiple objects at a time.
[1] http://www.slideshare.net/GordonChung/gnocchi-profiling-v2/25
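one way to frame the 'too small' worry is a toy cost model -- the
per-object/per-point numbers below are placeholders i made up, not
measurements:

import math

def write_cost_ms(n_points, max_chunk, per_object_ms=5.0, per_point_us=2.0):
    """toy model: flushing n_points that span ceil(n_points/max_chunk)
    objects costs a fixed per-object round trip plus a per-point cost.
    the 5ms/2us figures are made-up placeholders, not benchmarks."""
    objects = int(math.ceil(float(n_points) / max_chunk))
    return objects * per_object_ms + n_points * per_point_us / 1000.0

for chunk in (14400, 7200, 3600, 900):
    print(chunk, "->", write_cost_ms(10080, chunk), "ms")
# the per-point term stays constant; only the per-object overhead
# grows as the chunk size shrinks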
cheers,
--
gord