<div dir="ltr"><div><div>Hi Ilya,</div><div><br></div><div>Interresting, thanks for sharing.</div><div>So the quick conclusion to your numbers seems indicated that mongodb is more efficient for both reading and writing, </div>
except for two cases when retrieving data (meters and resources listing)...

However, for the read operations, it should be confirmed (or made more precise) where the time is really spent; it would be interesting to compute the distribution of the time spent in each layer (backend -> api -> cli), similarly to what you did for the collector with custom logging (or with instrumentation). Something like the sketch below.
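Just to illustrate what I mean by per-layer timing (a rough sketch, not tested against the current code base; the wrapped method names in the commented examples are only placeholders for wherever you want to hook):

import functools
import logging
import time

LOG = logging.getLogger(__name__)

def timed(layer):
    """Log how long the wrapped call took, tagged with the layer name."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                LOG.info('%s: %s took %.3fs',
                         layer, func.__name__, time.time() - start)
        return wrapper
    return decorator

# Applied for example to a storage-driver method and to an API controller
# method, the logs give the backend/api split; the remainder, up to what
# the cli reports, is client-side overhead.
# Connection.get_samples = timed('backend')(Connection.get_samples)
# MetersController.get_all = timed('api')(MetersController.get_all)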
To add more use cases (and to be more relevant), it would be good to run the kind of queries executed by billing systems or by the alarm evaluator, i.e. filtering a limited subset of samples (by resource and/or user and/or tenant), to see the numbers without retrieving tens of thousands of samples. For example, something like the query below.
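Something along these lines (a rough sketch; the endpoint, token and ids are placeholders, and the flattened q.field/q.op/q.value query syntax is my assumption about the v2 REST API):

import time
import requests

CEILOMETER_URL = 'http://localhost:8777'
TOKEN = '<keystone-token>'

# Filter on resource and tenant, the way a billing job or the alarm
# evaluator would, instead of fetching the whole meter.
params = {
    'q.field': ['resource_id', 'project_id'],
    'q.op': ['eq', 'eq'],
    'q.value': ['<some-resource-id>', '<some-tenant-id>'],
}

start = time.time()
resp = requests.get(CEILOMETER_URL + '/v2/meters/cpu_util',
                    params=params,
                    headers={'X-Auth-Token': TOKEN})
samples = resp.json()
print('%d samples returned in %.3fs' % (len(samples), time.time() - start))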
By the way, other indicators would help to give a better picture. I see for now: error rate, queue length (rabbit), number of samples/meters/resources returned by the API calls, missing samples (after populating), and some system metrics as well. For the rabbit queue length, a quick polling script like the one below could do.
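A minimal sketch, assuming it runs on the broker host with enough privileges to call rabbitmqctl:

import subprocess
import time

def rabbit_queue_depths():
    """Return {queue_name: message_count} from `rabbitmqctl list_queues`."""
    out = subprocess.check_output(
        ['rabbitmqctl', 'list_queues', 'name', 'messages'])
    depths = {}
    for line in out.decode().splitlines():
        parts = line.split()
        # skip the "Listing queues ..." header and any trailer lines
        if len(parts) == 2 and parts[1].isdigit():
            depths[parts[0]] = int(parts[1])
    return depths

if __name__ == '__main__':
    # Poll every 5 seconds while the load test is running.
    while True:
        print(time.strftime('%H:%M:%S'), rabbit_queue_depths())
        time.sleep(5)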
What were the characteristics of the servers used for these load tests?

My two cents.