[Openstack] Help - swift with low IOPS
yuan.zhou at intel.com
Thu Sep 5 03:43:21 UTC 2013
Could you check the iostat data on your storage nodes? There may be another bottleneck in the disks.
With 3k tps, assuming there's no bottleneck in the network and CPU part, the IOPS for each disk would be:
3000 * 3(3-replicas) / 50(# of disks) = 180
That's a really heavy load given that the average object size in your application is 500KB. I'd suggest adding more disks to the cluster.
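The per-disk estimate above can be sketched as a quick back-of-the-envelope calculation (the function name and structure here are just for illustration; the formula ignores network and CPU limits, exactly as in the thread):

```python
def per_disk_iops(tps, replicas, num_disks):
    """Each client PUT is written to every replica, so the disk-level
    write rate is tps * replicas, spread across all object disks."""
    return tps * replicas / num_disks

# 5 object nodes x 10 disks = 50 disks, 3 replicas, 3000 TPS target
print(per_disk_iops(3000, 3, 50))  # -> 180.0 IOPS per disk
```

180 sustained IOPS is near or beyond what a single 7200rpm spindle can deliver, which is why adding disks (spreading the same TPS over more spindles) is the suggested fix.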
From: Jonathan Lu [mailto:jojokururu at gmail.com]
Sent: Wednesday, September 04, 2013 5:04 PM
Cc: openstack at lists.openstack.org
Subject: Re: [Openstack] Help - swift with low IOPS
If all your requests are PUT, then the expected network bandwidth should be:
3000*500/1024*8 = 11718.75Mbps , which means more than 10Gbps
In your environment there are 2 proxies (gigabit network?), which is far from that requirement.
For example, in our production environment there are also 2 proxies with gigabit network bandwidth, and 5 nodes each with 3T*12. Performance tops out at about 600Mbps; we believe that hits the network bottleneck.
Uploading large objects consumes more network bandwidth and less CPU, so as a first step you'd better upgrade your network to meet your requirement.
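The bandwidth estimate quoted above can be reproduced with a small sketch (function name is illustrative; the unit conversion follows the thread's arithmetic: KB to MB via /1024, bytes to bits via *8):

```python
def required_mbps(tps, object_kb):
    """Aggregate client-facing bandwidth for `tps` PUTs per second
    of objects averaging `object_kb` kilobytes each."""
    return tps * object_kb / 1024 * 8

print(required_mbps(3000, 500))  # -> 11718.75 Mbps, i.e. more than 10Gbps
```

Note this counts only client-to-proxy traffic; proxy-to-storage traffic is multiplied again by the replica count, so the internal network needs even more headroom.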
On 2013/9/4 16:18, pangj wrote:
> What kind of setup would give swift a capability of 3000 TPS for
> files averaging 500KB in size? What architecture, hardware, network,
> etc. would be expected? Thanks for any suggestion.
> On 2013-9-3 17:11, pangj wrote:
>> Thanks for the info. I checked this a few days ago.
>> Given the case, in the production environment, we have 3 nodes with
>> SSD for account/container servers, 5 nodes each with 2T * 10 for
>> object storage servers. Currently having two proxies. All servers are
>> 16 core CPU powered, storage server (account/container/object) has
>> 48GB memory, proxy has 16GB memory. The average size for an object is
>> about 500KB; there are many containers (more than 100 million), but
>> objects under each container are few (fewer than 20). There is only
>> one account for TempAuth.
>> Is there any better way to optimize the architecture?
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack