<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hi Jay, <br>
<br>
Thanks for your comments. Please find my replies inline.<br>
<br>
On 12/14/2013 12:58 AM, Jay Pipes wrote:<br>
</div>
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">I have
listed down the methodology I'll be following for this test:-
<br>
<blockquote type="cite"><a class="moz-txt-link-freetext" href="https://wiki.openstack.org/wiki/KeystonePerformance#Identify_CPU.2C_Disk.2C_Memory.2C_Database_bottlenecks">https://wiki.openstack.org/wiki/KeystonePerformance#Identify_CPU.2C_Disk.2C_Memory.2C_Database_bottlenecks</a>
<br>
</blockquote>
<br>
My first suggestion would be to rework the performance
benchmarking work items to have clearer indications regarding
*what metrics are being tested* in each work item.
<br>
</blockquote>
Performance characterization is an iterative process. I am open to
reworking the work items as we <br>
go along. <br>
<br>
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">
<br>
For example, the first work item is "Identify CPU, Disk, Memory,
and Database Bottlenecks".
<br>
<br>
The first test case listed is:
<br>
<br>
"Test #1, Create users in parallel and look for CPU, disk or
memory bottleneck."
<br>
<br>
I think that is a bit too big of an initial bite ;)
<br>
<br>
Instead, it may be more effective to break down the
performance analysis based on the metrics you wish to test and the
relative conclusions you wish your work to generate.
<br>
<br>
For example, consider this possible work item:
<br>
<br>
"Determine the maximum number of token authentication calls that
can be performed"
<br>
</blockquote>
Results from tests like these would be very sensitive to the hardware
and software resources we have, such as the<br>
number of CPUs, Memcached, etc. It is very important to see whether we
can find any obvious bottlenecks first.<br>
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">
<br>
Within that work item, you can then further expand a testing
matrix, like so:
<br>
<br>
* Measure the total number of token authentication calls performed
by a single client against a single-process, Python-only Keystone
server
<br>
* Measure the total number of token authentication calls performed
by a single client against a multi-process Keystone server running
inside an nginx or Apache container server -- with 2, 4, 8, 16,
and 32 pre-forked processes
<br>
</blockquote>
Any pointers on configuring multi-process Keystone would be helpful.
I see a method <br>
mentioned in the "Run N Keystone Processes" section of the following:<br>
<a
href="http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance">http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance</a><br>
<br>
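For reference, the approach I am considering for the multi-process runs
is to serve Keystone as a WSGI application under Apache with mod_wsgi,
and to vary the pre-forked process count via the WSGIDaemonProcess
directive. Below is only a rough sketch of the entry point; the file
path, the keystone-paste.ini location, and the "main" pipeline name are
assumptions that would need to be checked against the actual release:<br>
<br>
<pre>
# /var/www/keystone/main.wsgi -- hypothetical WSGI entry point for Keystone.
# Loads the public-API ("main") pipeline from Keystone's paste configuration.
from paste import deploy

# Paste pipeline definition; the location varies by release and distribution.
CONF = '/etc/keystone/keystone-paste.ini'

# mod_wsgi looks for a module-level callable named "application".
application = deploy.loadapp('config:%s' % CONF, name='main')
</pre>
<br>
The matching Apache configuration would then control the process count
with something along the lines of <tt>WSGIDaemonProcess keystone
processes=8 threads=1</tt>, varying <tt>processes</tt> over 2, 4, 8, 16,
and 32 for the test matrix.<br>
<br>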
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">*
Measure the above using increasing numbers of concurrent clients
-- 10, 50, 100, 500, 1000.
<br>
<br>
There's, of course, nothing wrong with measuring things like CPU,
disk, and I/O performance during tests; however, there should be a
clear metric that is being measured for each test.
<br>
</blockquote>
Agreed. Let me start collecting results from the tests you suggested
above and the ones I mentioned <br>
on the wiki. Once we have those, we can rework the work items.
Does that sound OK?<br>
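<br>
To make the metric concrete, here is a rough sketch (Python 2, to match
the test machines) of the kind of concurrency sweep I have in mind. The
endpoint URL, tenant, and credentials are placeholders, and for the
higher client counts a dedicated load generator (ab, Rally, etc.) would
likely be more reliable than Python threads:<br>
<br>
<pre>
# bench_tokens.py -- sketch of a token-authentication throughput benchmark.
# Placeholder endpoint and credentials; adjust for the environment under test.
import json
import threading
import time
import urllib2

KEYSTONE_URL = 'http://127.0.0.1:5000/v2.0/tokens'  # assumed endpoint
BODY = json.dumps({'auth': {'passwordCredentials':
                            {'username': 'demo', 'password': 'secret'},
                            'tenantName': 'demo'}})
DURATION = 60  # seconds per concurrency level

def worker(counts, idx):
    # Issue token-authentication calls until the measurement window closes.
    # Each call opens a fresh connection, which is part of what we measure.
    end = time.time() + DURATION
    done = 0
    while time.time() &lt; end:
        req = urllib2.Request(KEYSTONE_URL, BODY,
                              {'Content-Type': 'application/json'})
        urllib2.urlopen(req).read()
        done += 1
    counts[idx] = done

for concurrency in (10, 50, 100, 500, 1000):
    counts = [0] * concurrency
    threads = [threading.Thread(target=worker, args=(counts, i))
               for i in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = sum(counts)
    print('%4d clients: %d calls in %ds (%.1f calls/sec)'
          % (concurrency, total, DURATION, float(total) / DURATION))
</pre>
<br>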
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">
<br>
My second suggestion would be to drop the requirement of using RDO
-- or any version of OpenStack for that matter.
<br>
</blockquote>
My end goal would be to have scripts that one can run on any
OpenStack distribution. <br>
RDO is mentioned only as an example. <br>
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">
<br>
In these kinds of tests, where you are not measuring the
integrated performance of multiple endpoints, but are instead
measuring the performance of a single endpoint (Keystone), there's
no reason, IMHO, to install all of OpenStack. Installing and
serving the Keystone server (and its various drivers) is all that
is needed. The fewer "balls up in the air" during a benchmarking
session, the fewer side effects are around to affect the outcome
of the benchmark...
<br>
</blockquote>
Agreed. As mentioned in the following, I suggested installing just
Keystone on the instances where the tests would be performed:<br>
<a
href="https://wiki.openstack.org/wiki/KeystonePerformance#Test_.231.2C_Create_users_in_parallel_and_look_for_CPU.2C_disk_or_memory_bottleneck.">https://wiki.openstack.org/wiki/KeystonePerformance#Test_.231.2C_Create_users_in_parallel_and_look_for_CPU.2C_disk_or_memory_bottleneck.</a><br>
<br>
<br>
Thanks, <br>
Neependra<br>
<br>
<br>
<blockquote cite="mid:52AB5FCD.20900@gmail.com" type="cite">
<br>
Best,
<br>
-jay
<br>
</blockquote>
<br>
</body>
</html>