[Openstack] [Keystone] Keystone performance work

Neependra Khare nkhare at redhat.com
Mon Dec 16 07:25:39 UTC 2013


Hi Jay,

Thanks for your comments.  Please find my reply in-line.

On 12/14/2013 12:58 AM, Jay Pipes wrote:
>> I have listed down the methodology I'll be following for this test:-
>> https://wiki.openstack.org/wiki/KeystonePerformance#Identify_CPU.2C_Disk.2C_Memory.2C_Database_bottlenecks 
>>
>
> My first suggestion would be to rework the performance benchmarking 
> work items to have clearer indications regarding *what are the metrics 
> being tested* in each work item.
Performance characterization is an iterative process. I am open to 
reworking the work items as we go along.

>
> For example, the first work item is "Identify CPU, Disk, Memory, and 
> Database Bottlenecks".
>
> The first test case listed is:
>
> "Test #1, Create users in parallel and look for CPU, disk or memory 
> bottleneck."
>
> I think that is a bit too big of an initial bite ;)
>
> Instead, it may be more effective to instead break down the 
> performance analysis based on the metrics you wish to test and the 
> relative conclusions you wish your work to generate.
>
> For example, consider this possible work item:
>
> "Determine the maximum number of token authentication calls that can 
> be performed"
Results from tests like these would be very sensitive to the hardware and 
software resources we have, such as the number of CPUs, Memcached, etc. 
It is very important to first see whether we can find any obvious 
bottlenecks.
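To spot obvious CPU or memory saturation while a test runs, something 
along these lines could be sampled periodically. This is a Linux-only 
sketch that reads /proc directly (field layout per proc(5)); standard 
tools such as sar, vmstat, or top serve the same purpose:

```python
# Sketch: sample aggregate CPU busy-% and memory usage on Linux.
# Reads /proc/stat and /proc/meminfo; intended to run alongside a
# benchmark to flag CPU or memory saturation.
import time

def cpu_snapshot():
    # First line of /proc/stat: cumulative jiffies per CPU state.
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]
    idle = values[3]  # the "idle" column
    return idle, sum(values)

def cpu_busy_percent(interval=0.5):
    # Busy percentage over a short sampling window.
    idle1, total1 = cpu_snapshot()
    time.sleep(interval)
    idle2, total2 = cpu_snapshot()
    total = total2 - total1
    if total == 0:
        return 0.0
    return 100.0 * (1.0 - float(idle2 - idle1) / total)

def mem_used_percent():
    # Percentage of memory not available for new allocations.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # value in kB
    return 100.0 * (1.0 - float(info["MemAvailable"]) / info["MemTotal"])
```

Sampling these in a loop during each test run makes it easy to see 
whether a throughput plateau coincides with CPU or memory pressure.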
>
> Within that work item, you can then further expand a testing matrix, 
> like so:
>
> * Measure the total number of token authentication calls performed by 
> a single client against a single-process, Python-only Keystone server
> * Measure the total number of token authentication calls performed by 
> a single client against a multi-process Keystone server running inside 
> an nginx or Apache container server -- with 2, 4, 8, 16, and 32 
> pre-forked processes
Any pointers on configuring multi-process Keystone would be helpful. I 
see a method mentioned in the "Run N keystone Processes" section of the 
following:-
http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance
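For reference, multi-process Keystone has typically meant fronting it 
with Apache and mod_wsgi. A rough sketch of the relevant directives 
(the port, paths, user, and process count are illustrative and depend on 
the distribution's packaging):

```apache
# Illustrative only: serve the Keystone public API as 4 pre-forked
# WSGI processes under Apache. Vary "processes=" per test run
# (2, 4, 8, 16, 32) and point WSGIScriptAlias at the WSGI script
# your distribution ships.
Listen 5000
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=4 threads=1 \
        user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    WSGIApplicationGroup %{GLOBAL}
</VirtualHost>
```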

> * Measure the above using increasing numbers of concurrent clients -- 
> 10, 50, 100, 500, 1000.
>
> There's, of course, nothing wrong with measuring things like CPU, disk 
> and I/O performance during tests, however there should be a clear 
> metric that is being measured for each test.
Agreed. Let me start collecting results from the tests you suggested 
above and the ones I mentioned on the wiki. Once we have those, we can 
rework the work items. Does that sound OK?
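As a starting point for the concurrency matrix above, the client side 
can be a small harness that fires N concurrent workers at the token 
endpoint and reports calls per second. A minimal sketch, where the 
authenticate() stub is a placeholder for a real token-authentication 
request against the server under test:

```python
# Sketch of a concurrency benchmark harness. authenticate() is a
# stand-in for a real token-authentication call (e.g. an HTTP POST
# to the Keystone endpoint); swap it out before measuring for real.
import time
from concurrent.futures import ThreadPoolExecutor

def authenticate():
    # Placeholder for the real request; simulate a small latency.
    time.sleep(0.001)
    return True

def run_benchmark(concurrency, calls_per_client):
    # Issue concurrency * calls_per_client calls through a pool of
    # `concurrency` workers and report totals and throughput.
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(authenticate)
                   for _ in range(concurrency * calls_per_client)]
        results = [f.result() for f in futures]
    elapsed = time.time() - start
    return len(results), len(results) / elapsed  # total calls, calls/sec

if __name__ == "__main__":
    # Sweep the client counts suggested in the thread.
    for clients in (10, 50, 100):
        total, rate = run_benchmark(clients, 20)
        print("%4d clients: %d calls, %.0f calls/sec" % (clients, total, rate))
```

Running the same sweep against each server configuration (single-process 
vs. 2/4/8/16/32 pre-forked processes) gives one number per cell of the 
matrix.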
>
> My second suggestion would be to drop the requirement of using RDO -- 
> or any version of OpenStack for that matter.
My end goal is to have scripts that one can run on any OpenStack 
distribution. RDO is mentioned only as an example.
>
> In these kinds of tests, where you are not measuring the integrated 
> performance of multiple endpoints, but are instead measuring the 
> performance of a single endpoint (Keystone), there's no reason, IMHO, 
> to install all of OpenStack. Installing and serving the Keystone 
> server (and its various drivers) is all that is needed. The fewer 
> "balls up in the air" during a benchmarking session, the fewer 
> side-effects are around to affect the outcome of the benchmark...
Agreed. As mentioned in the following, I suggested installing just 
Keystone on the instances where the tests would be performed:-
https://wiki.openstack.org/wiki/KeystonePerformance#Test_.231.2C_Create_users_in_parallel_and_look_for_CPU.2C_disk_or_memory_bottleneck.


Thanks,
Neependra


>
> Best,
> -jay
