<br><div class="gmail_quote">On Thu, Jan 17, 2013 at 12:34 AM, Ray Pekowski <span dir="ltr"><<a href="mailto:pekowski@gmail.com" target="_blank">pekowski@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br><div class="gmail_quote"><div class="im">On Wed, Jan 16, 2013 at 11:45 PM, Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
2) How are you benchmarking your performance improvements?<br></blockquote></div><div><br>Funny you should ask. You wrote an article or blog post on how to create a dummy OpenStack service, so I imagine you would have done it the same way. I can't find the link right now, and I didn't actually use your article; I only came across it after I had done my work. I created a bare-minimum service that simply exposed a few RPC calls for test purposes, then wrote a load driver to drive those RPC calls. I replicated it to 9 services (a somewhat arbitrary number, chosen due to time constraints) across 10 VMs and added 3 RabbitMQ server VMs. I then ran a series of tests, maxing out the throughput of each service and adding more services over time until all load generators/services were running at their maximum rate. I tested with 1, 2 and 3 RabbitMQ servers, with and without mirroring.<br>
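<br>For anyone curious about the general shape of such a test, here is a minimal in-process sketch of the "dummy service plus load driver" idea. It is purely illustrative: Python's queue.Queue stands in for RabbitMQ, the function and variable names are my own invention, and the real harness would of course go through AMQP rather than in-process queues.<br><br>

```python
# Hypothetical sketch of the benchmark shape described above: a dummy
# "service" exposing one echo-style RPC, plus a load driver that maxes
# out blocking-call throughput. queue.Queue stands in for RabbitMQ;
# none of these names come from the actual Dell test harness.
import queue
import threading
import time

request_q = queue.Queue()

def dummy_service():
    """Bare-minimum service: answer echo RPCs forever (daemon thread)."""
    while True:
        payload, reply_q = request_q.get()
        reply_q.put(payload)          # trivial work, so transport dominates

threading.Thread(target=dummy_service, daemon=True).start()

def load_driver(n_calls):
    """Issue n_calls blocking RPCs and report the achieved calls/sec."""
    reply_q = queue.Queue()
    start = time.perf_counter()
    for i in range(n_calls):
        request_q.put((i, reply_q))
        reply_q.get()                 # block until the response arrives
    elapsed = time.perf_counter() - start
    return n_calls / elapsed

print(f"{load_driver(10_000):.0f} calls/sec")
```

<br>Scaling that pattern out means replicating the service/driver pair across VMs and pointing them at one, two or three brokers, which is essentially what the study did.<br>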
<br>It was a Dell internal study. I will check to see if I can share the results.</div><br></div></blockquote><div> </div><div>I got approval to share the results of my scalability study of RabbitMQ as used by OpenStack, done as part of my work at Dell. It shows the improvement that comes from the single-response-queue change proposed by the blueprint. Here is the link:<br><br><a href="https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit">https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit</a><br>
</div></div><br>Ray<br>