[Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift
Leander Bessa Beernaert
leanderbb at gmail.com
Mon Jan 14 13:35:01 UTC 2013
According to the info below, I think the current inode size is 256, right? If I
reformat the storage partition, will that automatically clear all the
contents from the storage, or do I need to clean something else up as well?
Output from xfs_info:
meta-data=/dev/sda3              isize=256    agcount=4, agsize=13309312 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=53237248, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=25994, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
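
For reference, recreating the filesystem with the larger inode size would
presumably look something like this (the /srv/node/sda3 mount point is just an
assumption for illustration; mkfs wipes everything on the partition, so the
replicas on this node would have to be rebuilt afterwards):
  umount /dev/sda3
  mkfs.xfs -f -i size=1024 /dev/sda3
  mount /dev/sda3 /srv/node/sda3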
On Mon, Jan 14, 2013 at 1:29 PM, Leander Bessa Beernaert <
leanderbb at gmail.com> wrote:
> By stopping, do you mean halt the service (kill the process) or is it a
> change in the configuration file?
>
>
> On Mon, Jan 14, 2013 at 1:20 PM, Robert van Leeuwen <
> Robert.vanLeeuwen at spilgames.com> wrote:
>
>> On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert <
>> leanderbb at gmail.com> wrote:
>>
>>> Hello all,
>>>
>>>
>>> I'm trying to upload 200GB of 200KB files to Swift. I'm using 4
>>> clients (each hosted on a different machine) with 10 threads each uploading
>>> files using the official python-swiftclient. Each thread is uploading to a
>>> separate container.
>>>
>>> I have 5 storage nodes and 1 proxy node. The nodes are all running
>>> with a replication factor of 3. Each node has a quad-core i3 processor, 4GB
>>> of RAM and a gigabit network interface.
>>>
>>> Is there any way I can speed up this process? At the moment it takes
>>> about 20 seconds per file or more.
>>>
>>>
>> It is very likely the system is starved for I/O.
>> As a temporary workaround you can stop the object-replicator and
>> object-auditor during the import so that fewer daemons compete for I/O.
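>> For example, something along these lines on each storage node (assuming
>> the standard swift-init tooling is available):
>>   swift-init object-replicator stop
>>   swift-init object-auditor stop
>> and start them again with "swift-init object-replicator start" and
>> "swift-init object-auditor start" once the import is done.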
>>
>> Some general troubleshooting tips:
>> Use iotop to look for the processes consuming I/O.
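>> For example, something like:
>>   iotop -o -P
>> only shows the processes that are actually doing I/O.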
>>
>> Assuming you use XFS:
>> Make sure the filesystem is created with the appropriate inode size as
>> described in the docs.
>> (e.g. mkfs.xfs -i size=1024)
>>
>> Also, with lots of files you need quite a bit of memory to keep the
>> inodes cached.
>> Use the xfs runtime stats to get some indication about the cache:
>> http://xfs.org/index.php/Runtime_Stats
>> xs_dir_lookup and xs_ig_missed will give some indication of how much I/O
>> is spent on inode lookups.
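>> For example (assuming the stats file is in its usual location):
>>   grep -E '^(dir|ig) ' /proc/fs/xfs/stat
>> Compare the counters between two samples to see how quickly they grow.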
>>
>> You can look at slabtop to see how much memory is used by the inode cache.
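>> For example:
>>   slabtop -o -s c
>> sorts the slab caches by size; the xfs_inode entry shows how much memory
>> is being used for cached inodes.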
>>
>> Cheers,
>> Robert
>>
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack at lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help : https://help.launchpad.net/ListHelp
>>
>>
>