[openstack-dev] [zaqar] Juno Performance Testing (Round 2)

Boris Pavlovic bpavlovic at mirantis.com
Thu Sep 11 23:36:46 UTC 2014


Kurt,

> Speaking generally, I’d like to see the project bake this in over time as
> part of the CI process. It’s definitely useful information not just for
> the developers but also for operators in terms of capacity planning. We’ve
> talked as a team about doing this with Rally (and in fact, some work has
> been started there), but it may be useful to also run a large-scale test
> on a regular basis (at least per milestone).


I believe we will be able to generate distributed load of at least
20k rps in the Kilo cycle. We've done a lot of work in this direction
during Juno, but there is still a lot to do.

So you'll be able to use the same tool for gate jobs, local usage, and
large-scale tests.
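
For example, a task along these lines (the ZaqarBasic.create_queue
scenario and the "rps" runner already exist in Rally, but treat the
numbers here as placeholders rather than a tuned configuration):

    {
        "ZaqarBasic.create_queue": [
            {
                "runner": {
                    "type": "rps",
                    "times": 100000,
                    "rps": 500
                }
            }
        ]
    }

The same task file runs unchanged whether you point Rally at a gate
node or at a large deployment; only the runner numbers change.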

Best regards,
Boris Pavlovic



On Fri, Sep 12, 2014 at 3:17 AM, Kurt Griffiths <
kurt.griffiths at rackspace.com> wrote:

> On 9/11/14, 2:11 PM, "Devananda van der Veen" <devananda.vdv at gmail.com>
> wrote:
>
> >OK - those resource usage numbers sound better. At least you generated
> >enough load to saturate the uWSGI process CPU, which is a good point at
> >which to look at the performance of the system.
> >
> >At that peak, what was the:
> >- average msgs/sec
> >- min/max/avg/stdev time to [post|get|delete] a message
>
> To be honest, it was a quick test and I didn’t note the exact metrics
> other than eyeballing them to see that they were similar to the results
> that I published for the scenarios that used the same load options (e.g.,
> I just re-ran some of the same test scenarios).
>
> Some of the metrics you mention aren’t currently reported by zaqar-bench,
> but could be added easily enough. In any case, I think zaqar-bench is
> going to end up being mostly useful to track relative performance gains or
> losses on a patch-by-patch basis, and also as an easy way to smoke-test
> both python-marconiclient and the service. For large-scale testing and
> detailed metrics, other tools (e.g., Tsung, JMeter) are better for the
> job, so I’ve been considering using them in future rounds.
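
(As an aside: until zaqar-bench grows those stats, they can be collected
with a quick script against the v1 HTTP API. A minimal sketch, assuming
a local server with auth disabled; the endpoint, queue name, and sample
count are all placeholders:)

    import json
    import time
    import uuid

    import requests

    BASE = 'http://localhost:8888/v1'     # assumed Zaqar endpoint
    HEADERS = {
        'Client-ID': str(uuid.uuid4()),   # required by the v1 API
        'Content-Type': 'application/json',
    }

    # Create the queue (idempotent), then time each message post.
    requests.put(BASE + '/queues/bench', headers=HEADERS)

    samples = []
    for i in range(100):
        body = json.dumps([{'ttl': 300, 'body': {'seq': i}}])
        start = time.time()
        requests.post(BASE + '/queues/bench/messages',
                      headers=HEADERS, data=body)
        samples.append(time.time() - start)

    avg = sum(samples) / len(samples)
    stdev = (sum((s - avg) ** 2 for s in samples) / len(samples)) ** 0.5
    print('post: min=%.4f max=%.4f avg=%.4f stdev=%.4f'
          % (min(samples), max(samples), avg, stdev))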
>
> >Is that 2,181 msg/sec total, or per-producer?
>
> That metric was a combined average rate for all producers.
>
> >
> >I'd really like to see the total throughput and latency graphed as #
> >of clients increases. Or if graphing isn't your thing, even just post
> >a .csv of the raw numbers and I will be happy to graph it.
> >
> >It would also be great to see how that scales as you add more Redis
> >instances until all the available CPU cores on your Redis host are in
> >use.
>
> Yep, I’ve got a long list of things like this that I’d like to see in
> future rounds of performance testing (and I welcome anyone in the
> community with an interest to join in), but I have to balance that effort
> with a lot of other things that are on my plate right now.
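
(Another aside: spreading queues across additional Redis instances
would go through Zaqar's pooling feature. A rough sketch of registering
two pools via the v1.1 admin API; the endpoint, URIs, and weights are
placeholders, and the payload should be double-checked against the
current docs:)

    import json

    import requests

    BASE = 'http://localhost:8888/v1.1'   # assumed admin-enabled endpoint

    # Register two Redis instances as separate pools; Zaqar spreads
    # queues across registered pools according to their weights.
    for name, uri in (('redis-1', 'redis://192.0.2.10:6379'),
                      ('redis-2', 'redis://192.0.2.11:6379')):
        requests.put('%s/pools/%s' % (BASE, name),
                     headers={'Content-Type': 'application/json'},
                     data=json.dumps({'weight': 100, 'uri': uri}))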
>
> Speaking generally, I’d like to see the project bake this in over time as
> part of the CI process. It’s definitely useful information not just for
> the developers but also for operators in terms of capacity planning. We’ve
> talked as a team about doing this with Rally (and in fact, some work has
> been started there), but it may be useful to also run a large-scale test
> on a regular basis (at least per milestone). Regardless, I think it would
> be great for the Zaqar team to connect with other projects (at the
> summit?) who are working on perf testing to swap ideas, collaborate on
> code/tools, etc.
>
> --KG