[openstack-dev] [zaqar] Juno Performance Testing (Round 2)

Joe Gordon joe.gordon0 at gmail.com
Wed Sep 17 19:32:30 UTC 2014


On Tue, Sep 16, 2014 at 8:02 AM, Kurt Griffiths <
kurt.griffiths at rackspace.com> wrote:

>  Right, graphing those sorts of variables has always been part of our
> test plan. What I’ve done so far was just some pilot tests, and I realize
> now that I wasn’t very clear on that point. I wanted to get a rough idea of
> where the Redis driver sat in case there were any obvious bug fixes that
> needed to be taken care of before performing more extensive testing. As it
> turns out, I did find one bug that has since been fixed.
>
>  Regarding latency, saying that it "is not important" is an exaggeration;
> it is definitely important, just not the *only* thing that is important.
> I have spoken with a lot of prospective Zaqar users since the inception of
> the project, and one of the common threads was that latency needed to be
> reasonable. For the use cases where they see Zaqar delivering a lot of
> value, requests don't need to be as fast as, say, ZMQ, but they do need
> something that isn’t horribly *slow*, either. They also want HTTP,
> multi-tenant, auth, durability, etc. The goal is to find a reasonable
> amount of latency given our constraints and also, obviously, be able to
> deliver all that at scale.
>

Can you further quantify what you would consider too slow? Is 100 ms too
slow?
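Whatever budget the project settles on, the comparison is usually made against tail percentiles rather than the mean. A minimal sketch of that kind of summary (the helper names and the sample data are illustrative, not from this thread):

```python
import statistics

def latency_summary(samples_ms):
    """Summarize request latencies (in milliseconds) into the percentiles
    commonly used to judge a budget such as "under 100 ms"."""
    ordered = sorted(samples_ms)

    def pct(p):
        # Nearest-rank percentile: pick the sample at the p-th rank.
        idx = min(len(ordered) - 1, int(p / 100 * len(ordered)))
        return ordered[idx]

    return {
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
        "mean": statistics.fmean(ordered),
    }

# Example: 100 samples, mostly fast with a slow tail.
samples = [10.0] * 90 + [120.0] * 10
summary = latency_summary(samples)
```

A distribution like this would pass a 100 ms budget at the median but fail it at p99, which is exactly the distinction a single "average latency" number hides.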


>
>  In any case, I’ve continued working through the test plan and will be
> publishing further test results shortly.
>
>  > graph latency versus number of concurrent active tenants
>
>  By tenants do you mean in the sense of OpenStack Tenants/Project-ID's or
> in  the sense of “clients/workers”? For the latter case, the pilot tests
> I’ve done so far used multiple clients (though not graphed), but in the
> former case only one “project” was used.
>

Multiple Tenant/Project-IDs.
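One simple way to drive load from multiple Project-IDs is to synthesize one ID per simulated tenant and spread the benchmark workers across them round-robin. This is only a sketch; the helper names and the idea of tagging requests with an `X-Project-Id` header are assumptions, not part of any documented Zaqar benchmark tooling:

```python
import itertools
import uuid

def make_tenants(n):
    # Hypothetical helper: one synthetic Project-ID per simulated tenant.
    return [uuid.uuid4().hex for _ in range(n)]

def assign_workers(tenants, n_workers):
    # Round-robin the workers across tenants so each Project-ID sees
    # comparable load. In a real client each worker would tag its requests
    # with its tenant, e.g. via an X-Project-Id header (an assumption here).
    cycle = itertools.cycle(tenants)
    return [next(cycle) for _ in range(n_workers)]

tenants = make_tenants(4)
workers = assign_workers(tenants, 10)
```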


>
>   From: Joe Gordon <joe.gordon0 at gmail.com>
> Reply-To: OpenStack Dev <openstack-dev at lists.openstack.org>
> Date: Friday, September 12, 2014 at 1:45 PM
> To: OpenStack Dev <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)
>
>  If Zaqar is like Amazon SQS, then the latency for a single message and
> the throughput for a single tenant are not important. I wouldn't expect
> anyone who has latency-sensitive workloads or needs massive throughput to
> use Zaqar, as those people wouldn't use SQS either. The consistency of the
> latency (it shouldn't change under load) and Zaqar's ability to scale
> horizontally matter much more. What would be great to see is some other
> things benchmarked instead:
>
>  * graph latency versus number of concurrent active tenants
> * graph latency versus message size
> * How throughput scales as you scale up the number of assorted zaqar
> components. If one of the benefits of zaqar is its horizontal scalability,
> let's see it.
>  * How does this change with message batching?
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
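The latency-versus-concurrency graph suggested above can be produced with a simple sweep harness. In this sketch the request function is a stub standing in for a real HTTP POST to a queue endpoint; everything here is an assumed shape for such a harness, not Zaqar's actual benchmark code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_post_message(_request_id):
    # Stand-in for an HTTP POST to a queue endpoint; a real run would call
    # the Zaqar API here. Returns elapsed wall time in milliseconds.
    start = time.perf_counter()
    time.sleep(0.001)  # simulated service time
    return (time.perf_counter() - start) * 1000.0

def sweep_concurrency(levels, requests_per_level=50):
    # For each concurrency level, fire requests in parallel and record the
    # median latency -- yielding one (concurrency, ms) point per level,
    # ready to graph.
    points = []
    for n in levels:
        with ThreadPoolExecutor(max_workers=n) as pool:
            lat = sorted(pool.map(fake_post_message, range(requests_per_level)))
        points.append((n, lat[len(lat) // 2]))
    return points

points = sweep_concurrency([1, 2, 4])
```

The same loop structure extends to the other axes proposed above: sweep message size or batch size instead of worker count, keeping everything else fixed, and plot one curve per variable.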
