[Openstack] [Keystone] Keystone performance work

Adam Young ayoung at redhat.com
Tue Dec 17 16:34:47 UTC 2013


On 12/16/2013 02:56 AM, Neependra Khare wrote:
> On 12/14/2013 02:04 AM, Adam Young wrote:
>> On 12/13/2013 02:28 PM, Jay Pipes wrote:
>>> On 12/13/2013 08:14 AM, Neependra Khare wrote:
>>>> On 12/12/2013 12:00 PM, Neependra Khare wrote:
>>>>> On 12/12/2013 01:11 AM, Adam Young wrote:
>>>>>> Can you indicate which is going to be your first effort?  We
>>>>>> (Keystone team) can provide some guidance on how to best hammer 
>>>>>> on it.
>>>>> Thanks. I am starting by identifying any CPU, disk, memory, or
>>>>> database bottlenecks.
>>>> I have documented the methodology I'll be following for this test:
>>>> https://wiki.openstack.org/wiki/KeystonePerformance#Identify_CPU.2C_Disk.2C_Memory.2C_Database_bottlenecks 
>>>>
>>>
>>> Hi Neependra,
>>>
>>> My first suggestion would be to rework the performance benchmarking 
>>> work items to have clearer indications regarding *what are the 
>>> metrics being tested* in each work item.
>>>
>>> For example, the first work item is "Identify CPU, Disk, Memory, and 
>>> Database Bottlenecks".
>>>
>>> The first test case listed is:
>>>
>>> "Test #1, Create users in parallel and look for CPU, disk or memory 
>>> bottleneck."
>>
>> Here is a script you can modify.
>>
>> adam.younglogic.com/2013/12/load-keystone-user/
> Would it make a difference if I used the keystoneclient.v2_0 Python 
> module to create users, as mentioned on the wiki:
> https://wiki.openstack.org/wiki/KeystonePerformance#Test_.231.2C_Create_users_in_parallel_and_look_for_CPU.2C_disk_or_memory_bottleneck.
While it shouldn't make a difference, it would be good to make the tests 
future-proof. But that approach is good, too.
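
For reference, here is a minimal sketch of that keystoneclient.v2_0 
approach, fanned out over a pool of worker processes. The admin 
credentials, auth_url, user count, and naming scheme are placeholders; 
adapt them to whatever the test calls for.

# Sketch: create users in parallel via the keystoneclient.v2_0 API.
# AUTH, NUM_USERS, and WORKERS below are placeholder values.
from multiprocessing import Pool

from keystoneclient.v2_0 import client

AUTH = dict(username='admin',
            password='secret',
            tenant_name='admin',
            auth_url='http://127.0.0.1:5000/v2.0')
NUM_USERS = 1000
WORKERS = 8

def create_user(index):
    # Each worker builds its own client so no connection is shared
    # across processes.
    keystone = client.Client(**AUTH)
    keystone.users.create(name='perf_user_%06d' % index,
                          password='password%d' % index,
                          email='perf_user_%06d@example.com' % index)
    return index

if __name__ == '__main__':
    pool = Pool(WORKERS)
    pool.map(create_user, range(NUM_USERS))
    pool.close()
    pool.join()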


>
> Thanks,
> Neependra
>
>>
>>
>>>
>>> I think that is a bit too big of an initial bite ;)
>>>
>>> Instead, it may be more effective to break down the performance 
>>> analysis based on the metrics you wish to test and the relative 
>>> conclusions you wish your work to generate.
>>>
>>> For example, consider this possible work item:
>>>
>>> "Determine the maximum number of token authentication calls that can 
>>> be performed"
>>>
>>> Within that work item, you can then further expand a testing matrix, 
>>> like so:
>>>
>>> * Measure the total number of token authentication calls performed 
>>> by a single client against a single-process, Python-only Keystone 
>>> server
>>> * Measure the total number of token authentication calls performed 
>>> by a single client against a multi-process Keystone server running 
>>> inside an nginx or Apache container server -- with 2, 4, 8, 16, and 
>>> 32 pre-forked processes
>>> * Measure the above using increasing numbers of concurrent clients 
>>> -- 10, 50, 100, 500, and 1000 (see the sketch below).
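
A minimal driver for one cell of that matrix might look like the sketch 
below. It assumes the v2.0 password-credentials token call and the 
requests library; the endpoint, credentials, client count, and 
per-client call count are placeholders to vary across the matrix.

# Sketch: measure token-authentication throughput against Keystone's
# v2.0 tokens endpoint with N concurrent client processes.
# TOKEN_URL, BODY, CLIENTS, and CALLS_PER_CLIENT are placeholders.
import json
import time
from multiprocessing import Pool

import requests

TOKEN_URL = 'http://127.0.0.1:5000/v2.0/tokens'
BODY = json.dumps({'auth': {'tenantName': 'demo',
                            'passwordCredentials': {'username': 'demo',
                                                    'password': 'secret'}}})
HEADERS = {'Content-Type': 'application/json'}
CLIENTS = 10             # vary: 10, 50, 100, 500, 1000
CALLS_PER_CLIENT = 100

def run_client(_):
    ok = 0
    for _ in range(CALLS_PER_CLIENT):
        resp = requests.post(TOKEN_URL, data=BODY, headers=HEADERS)
        if resp.status_code == 200:
            ok += 1
    return ok

if __name__ == '__main__':
    start = time.time()
    results = Pool(CLIENTS).map(run_client, range(CLIENTS))
    elapsed = time.time() - start
    total = sum(results)
    print('%d token authentications in %.1fs (%.1f calls/sec)'
          % (total, elapsed, total / elapsed))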
>>>
>>> There's, of course, nothing wrong with measuring things like CPU, 
>>> disk, and I/O performance during tests; however, there should be a 
>>> clear metric being measured for each test.
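
For that side measurement, something like the psutil-based sampler 
below (psutil is just one option; sar or dstat would do the same job) 
can run on the Keystone host alongside whichever primary metric the 
test defines. The sampling interval and output format are placeholders.

# Sketch: sample CPU, memory, and disk I/O while a benchmark runs.
# Run on the Keystone host and stop with Ctrl-C when the test ends.
import time

import psutil

INTERVAL = 1.0  # seconds between samples

print('time cpu_pct mem_pct disk_read_bytes disk_write_bytes')
while True:
    cpu = psutil.cpu_percent(interval=INTERVAL)
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_io_counters()
    print('%.0f %.1f %.1f %d %d' % (time.time(), cpu, mem,
                                    disk.read_bytes, disk.write_bytes))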
>>>
>>> My second suggestion would be to drop the requirement of using RDO 
>>> -- or any version of OpenStack for that matter.
>>>
>>> In these kinds of tests, where you are not measuring the integrated 
>>> performance of multiple endpoints but are instead measuring the 
>>> performance of a single endpoint (Keystone), there's no reason, 
>>> IMHO, to install all of OpenStack. Installing and serving the 
>>> Keystone server (and its various drivers) is all that is needed. 
>>> The fewer "balls up in the air" during a benchmarking session, the 
>>> fewer side effects there are to affect the outcome of the benchmark...
>>>
>>> Best,
>>> -jay