[openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs
Julien Danjou
julien at danjou.info
Fri Apr 28 13:19:58 UTC 2017
On Fri, Apr 28 2017, gordon chung wrote:
> so the tradeoff here is that now we're doing a lot more calls to
> indexer. additionally, we're pulling a lot more unused results from db.
> a single janitor currently just grabs all deleted metrics and starts
> attempting to clean them up one at a time. if we merge, we will have n
> calls to indexer, where n is the number of workers, each pulling in all
> the deleted metrics, and then checking to see if the metric is in its
> sack, and if not, moving on. that's a lot of extra, wasted work. we
> could reduce that work by adding sack information to indexer ;) but that
> will still add significantly more calls to indexer (which we could
> reduce by not triggering cleanup every job interval)
That's not what I meant. You can keep the same mechanism as today, but
compute the sack of each deleted metric and itertools.groupby() them per
sack before locking each sack and expunging its metrics.
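
Something along these lines, as a rough sketch only -- list_metrics(),
sack_for_metric(), lock_sack() and expunge() are illustrative stand-ins
here, not the actual Gnocchi interfaces:

    import itertools

    def expunge_deleted_metrics(indexer, incoming, storage):
        # Single indexer call, exactly as today: every metric marked
        # for deletion, regardless of sack.
        metrics = indexer.list_metrics(status='delete')

        # groupby() needs its input sorted by the grouping key.
        keyfunc = incoming.sack_for_metric
        metrics = sorted(metrics, key=keyfunc)

        for sack, sack_metrics in itertools.groupby(metrics, key=keyfunc):
            # Take the per-sack lock once, then clean up everything in it.
            with storage.lock_sack(sack):
                for metric in sack_metrics:
                    storage.expunge(metric)
                    indexer.delete_metric(metric.id)

That keeps one indexer round-trip per pass and one lock acquisition per
sack, instead of n workers each scanning the whole deleted-metric list.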
> refresh is currently disabled by default so i think we're ok.
Well, you mean it's disabled by default _in the CLI_, not in the API,
right?
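
To make the distinction concrete (a sketch; adjust the endpoint and add
auth as needed for a real deployment): the CLI merely does not send the
refresh parameter by default, while the API honors it for any client that
does send it.

    import requests

    GNOCCHI = "http://localhost:8041"  # assumed endpoint
    METRIC = "00000000-0000-0000-0000-000000000000"  # placeholder id

    # What the CLI does by default: no refresh, so returned measures may
    # lag behind unprocessed incoming data.
    requests.get("%s/v1/metric/%s/measures" % (GNOCCHI, METRIC))

    # What any client can still ask for: aggregate pending measures
    # on the fly before returning.
    requests.get("%s/v1/metric/%s/measures" % (GNOCCHI, METRIC),
                 params={"refresh": "true"})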
> what's the timeout for? a timeout on the api's attempt to aggregate the
> metric? i think it's a bad experience if we add any timeout since i
> assume it will still return what it can return, and then the results
> become somewhat ambiguous.
No, I meant a timeout for grabbing the sack's lock. You wouldn't return a
2xx but a 5xx stating that the API is unable to compute the aggregates
right now, so try again without refresh or something.
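
Roughly what I have in mind, as a sketch only -- get_sack_lock(),
sack_for_metric(), process_new_measures() and the tooz-style
acquire(blocking=<timeout>) call are assumptions, not the exact current
code:

    import webob.exc

    SACK_LOCK_TIMEOUT = 10.0  # seconds; arbitrary value for the example

    def refresh_before_read(indexer, incoming, storage, metric):
        sack = incoming.sack_for_metric(metric.id)
        lock = storage.get_sack_lock(sack)
        # Bound the wait for the per-sack lock instead of queueing
        # indefinitely behind the metricd workers.
        if not lock.acquire(blocking=SACK_LOCK_TIMEOUT):
            # Don't return ambiguous, partially-aggregated data: answer
            # with a 5xx and let the client retry, with or without refresh.
            raise webob.exc.HTTPServiceUnavailable(
                detail="sack %s is busy, retry without refresh" % sack)
        try:
            storage.process_new_measures(indexer, [str(metric.id)])
        finally:
            lock.release()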
--
Julien Danjou
// Free Software hacker
// https://julien.danjou.info