<div dir="ltr">Kurt, <div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span style="font-family:arial,sans-serif;font-size:13px">Speaking generally, I’d like to see the project bake this in over time as<br>part of the CI process. It’s definitely useful information not just for<br>the developers but also for operators in terms of capacity planning. We’ve<br>talked as a team about doing this with Rally (and in fact, some work has<br>been started there), but it may be useful to also run a large-scale test<br>on a regular basis (at least per milestone).</span></blockquote><div><br></div><div>I believe we will be able to generate distributed load and reach at least</div><div>20k rps in the K cycle. 
We've done a lot of work in this direction during J,</div><div>but there is still a lot left to do.</div><div><br></div><div>So you'll be able to use the same tool for gates, local use, and large-scale tests.</div><div><br></div><div>Best regards,</div><div>Boris Pavlovic </div><div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 12, 2014 at 3:17 AM, Kurt Griffiths <span dir="ltr"><<a href="mailto:kurt.griffiths@rackspace.com" target="_blank">kurt.griffiths@rackspace.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 9/11/14, 2:11 PM, "Devananda van der Veen" <<a href="mailto:devananda.vdv@gmail.com">devananda.vdv@gmail.com</a>><br>
wrote:<br>
<span class=""><br>
>OK - those resource usages sound better. At least you generated enough<br>
>load to saturate the uWSGI process CPU, which is a good point to look<br>
>at performance of the system.<br>
><br>
>At that peak, what was the:<br>
>- average msgs/sec<br>
>- min/max/avg/stdev time to [post|get|delete] a message<br>
<br>
</span>To be honest, it was a quick test and I didn’t note the exact metrics<br>
other than eyeballing them to see that they were similar to the results<br>
that I published for the scenarios that used the same load options (e.g.,<br>
I just re-ran some of the same test scenarios).<br>
<br>
Some of the metrics you mention aren’t currently reported by zaqar-bench,<br>
but could be added easily enough. In any case, I think zaqar-bench is<br>
going to end up being mostly useful to track relative performance gains or<br>
losses on a patch-by-patch basis, and also as an easy way to smoke-test<br>
both python-marconiclient and the service. For large-scale testing and<br>
detailed metrics, other tools (e.g., Tsung, JMeter) are better for the<br>
job, so I’ve been considering using them in future rounds.<br>
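For what it's worth, the per-operation stats Devananda asks about (min/max/avg/stdev of request latency) are cheap to add on top of raw per-request timings. The helper below is a hypothetical sketch, not zaqar-bench's actual API:

```python
import statistics

def summarize_latencies(timings_s):
    """Min/max/avg/stdev over a list of per-request latencies (seconds)."""
    return {
        'min': min(timings_s),
        'max': max(timings_s),
        'avg': statistics.mean(timings_s),
        'stdev': statistics.stdev(timings_s) if len(timings_s) > 1 else 0.0,
    }

# Illustrative timings for one operation type (e.g., message POSTs):
post_times = [0.012, 0.015, 0.011, 0.020]
print(summarize_latencies(post_times))
```

A bench worker would keep one such list per operation (post/get/delete) and summarize each at the end of the run.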
<span class=""><br>
>Is that 2,181 msg/sec total, or per-producer?<br>
<br>
</span>That metric was a combined average rate for all producers.<br>
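To make "combined average rate" concrete, here's a small sketch with made-up per-producer numbers (not the actual test data): since producers run concurrently, the combined rate is total messages over wall-clock time, not the sum of per-producer runtimes.

```python
# Hypothetical per-producer results: (messages_sent, elapsed_seconds).
producers = [(30000, 55.0), (30000, 54.0), (30000, 56.0), (30000, 55.0)]

total_msgs = sum(sent for sent, _ in producers)
wall_time = max(elapsed for _, elapsed in producers)  # producers run concurrently
combined_rate = total_msgs / wall_time
print('%d producers, %.0f msg/sec combined' % (len(producers), combined_rate))
```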
<span class=""><br>
><br>
>I'd really like to see the total throughput and latency graphed as #<br>
>of clients increases. Or if graphing isn't your thing, even just post<br>
>a .csv of the raw numbers and I will be happy to graph it.<br>
><br>
>It would also be great to see how that scales as you add more Redis<br>
>instances until all the available CPU cores on your Redis host are in<br>
</span>>use.<br>
<br>
Yep, I’ve got a long list of things like this that I’d like to see in<br>
future rounds of performance testing (and I welcome anyone in the<br>
community with an interest to join in), but I have to balance that effort<br>
with a lot of other things that are on my plate right now.<br>
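As a starting point for the CSV Devananda offers to graph, per-run results could be dumped with a few lines like these (column names and numbers are purely illustrative):

```python
import csv
import io

# Illustrative per-run results: (num_clients, throughput_msgs_sec, avg_latency_ms).
runs = [(10, 800.0, 9.5), (50, 1900.0, 14.2), (100, 2100.0, 31.0)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['clients', 'throughput_msgs_sec', 'avg_latency_ms'])
writer.writerows(runs)
print(buf.getvalue())
```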
<br>
Speaking generally, I’d like to see the project bake this in over time as<br>
part of the CI process. It’s definitely useful information not just for<br>
the developers but also for operators in terms of capacity planning. We’ve<br>
talked as a team about doing this with Rally (and in fact, some work has<br>
been started there), but it may be useful to also run a large-scale test<br>
on a regular basis (at least per milestone). Regardless, I think it would<br>
be great for the Zaqar team to connect with other projects (at the<br>
summit?) who are working on perf testing to swap ideas, collaborate on<br>
code/tools, etc.<br>
<span class="HOEnZb"><font color="#888888"><br>
--KG<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div>