[Openstack] [Keystone] Keystone performance work

Adam Young ayoung at redhat.com
Fri Dec 13 20:34:30 UTC 2013


On 12/13/2013 02:28 PM, Jay Pipes wrote:
> On 12/13/2013 08:14 AM, Neependra Khare wrote:
>> On 12/12/2013 12:00 PM, Neependra Khare wrote:
>>> On 12/12/2013 01:11 AM, Adam Young wrote:
>>>> Can you indicate which is going to be your first effort?  We
>>>> (Keystone team) can provide some guidance on how to best hammer on it.
>>> Thanks. I am starting by identifying any CPU, Disk, Memory or
>>> Database bottlenecks.
>> I have listed the methodology I'll be following for this test:
>> https://wiki.openstack.org/wiki/KeystonePerformance#Identify_CPU.2C_Disk.2C_Memory.2C_Database_bottlenecks 
>>
>
> Hi Neependra,
>
> My first suggestion would be to rework the performance benchmarking 
> work items so that each one clearly indicates *which metrics are being 
> tested*.
>
> For example, the first work item is "Identify CPU, Disk, Memory, and 
> Database Bottlenecks".
>
> The first test case listed is:
>
> "Test #1, Create users in parallel and look for CPU, disk or memory 
> bottleneck."

Here is a script you can modify.

adam.younglogic.com/2013/12/load-keystone-user/
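
A rough sketch of the same idea (not the exact script from that post),
using python-keystoneclient's v2.0 admin-token interface; the endpoint,
token, tenant, and user names here are placeholders you would replace:

    # Sketch only: bulk-create throwaway users with python-keystoneclient.
    # OS_SERVICE_TOKEN / OS_SERVICE_ENDPOINT are assumed to hold the admin
    # token and admin endpoint (e.g. http://localhost:35357/v2.0).
    import os
    from keystoneclient.v2_0 import client

    keystone = client.Client(token=os.environ['OS_SERVICE_TOKEN'],
                             endpoint=os.environ['OS_SERVICE_ENDPOINT'])

    tenant = keystone.tenants.create(tenant_name='perftest',
                                     description='load-test tenant')

    # Create N users; run several copies of this loop in parallel (e.g.
    # with multiprocessing) to put concurrent load on the backend.
    for i in range(1000):
        keystone.users.create(name='loaduser%05d' % i,
                              password='changeme',
                              tenant_id=tenant.id)

Timing the loop (or watching top/iostat on the server while it runs)
gives a first read on whether CPU, disk, or the database is the
bottleneck.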


>
> I think that is a bit too big of an initial bite ;)
>
> Instead, it may be more effective to break down the performance 
> analysis based on the metrics you wish to test and the conclusions 
> you wish your work to generate.
>
> For example, consider this possible work item:
>
> "Determine the maximum number of token authentication calls that can 
> be performed"
>
> Within that work item, you can then further expand a testing matrix, 
> like so:
>
> * Measure the total number of token authentication calls performed by 
> a single client against a single-process, Python-only Keystone server
> * Measure the total number of token authentication calls performed by 
> a single client against a multi-process Keystone server running inside 
> an nginx or Apache container server -- with 2, 4, 8, 16, and 32 
> pre-forked processes
> * Measure the above using increasing numbers of concurrent clients -- 
> 10, 50, 100, 500, 1000.
>
> There's, of course, nothing wrong with measuring things like CPU, disk 
> and I/O performance during tests; however, each test should have a 
> clear metric that is being measured.
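
For what it's worth, that matrix can be driven by something as simple as
the sketch below.  The auth URL, credentials, run length, and client
counts are all placeholder assumptions, not a finished harness:

    # Sketch: count token-issue calls completed in a fixed window by a
    # configurable number of concurrent clients hitting /v2.0/tokens.
    import json
    import time
    import requests
    from multiprocessing import Pool

    AUTH_URL = 'http://localhost:5000/v2.0/tokens'   # placeholder endpoint
    BODY = {'auth': {'passwordCredentials': {'username': 'demo',
                                             'password': 'secret'},
                     'tenantName': 'demo'}}
    DURATION = 60  # seconds per run

    def worker(_):
        """Issue token requests until the deadline; return the call count."""
        count = 0
        deadline = time.time() + DURATION
        while time.time() < deadline:
            resp = requests.post(AUTH_URL, data=json.dumps(BODY),
                                 headers={'Content-Type': 'application/json'})
            resp.raise_for_status()
            count += 1
        return count

    if __name__ == '__main__':
        for clients in (10, 50, 100):   # grow toward 500/1000 carefully
            pool = Pool(clients)
            total = sum(pool.map(worker, range(clients)))
            pool.close()
            pool.join()
            print('%4d clients: %d tokens in %ds (%.1f/s)'
                  % (clients, total, DURATION, float(total) / DURATION))

The same harness works unchanged against the single-process eventlet
server and against Keystone running under Apache or nginx with different
process counts, so the only variable between runs is the deployment
being measured.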
>
> My second suggestion would be to drop the requirement of using RDO -- 
> or any version of OpenStack for that matter.
>
> In these kinds of tests, where you are not measuring the integrated 
> performance of multiple endpoints, but are instead measuring the 
> performance of a single endpoint (Keystone), there's no reason, IMHO, 
> to install all of OpenStack. Installing and serving the Keystone 
> server (and its various drivers) is all that is needed. The fewer 
> "balls up in the air" during a benchmarking session, the fewer 
> side effects are around to affect the outcome of the benchmark...
>
> Best,
> -jay
>
>
>




