[Openstack] [Swift] Cache pressure tuning
Jonathan Lu
jojokururu at gmail.com
Tue Jun 18 03:21:25 UTC 2013
Hi Hugo,
I'm aware of the tombstone mechanism. My understanding is that after the
reclaim time, the tombstone file for a deleted object is removed entirely.
Is that right? Maybe I misunderstand the doc :( ...
We tried to let the Swift system cool down (just waiting for the
reclaiming) and then tested again, but the result is not satisfying.
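
For reference, a rough sketch of how the reclaim setting and the remaining
tombstones could be checked (the config path and the /srv/node mount point
are just typical defaults, so adjust to your deployment):

  # reclaim_age (in seconds) is the reclaim time; if unset, the stock
  # default of 604800 (one week) applies
  $ grep reclaim_age /etc/swift/object-server.conf
  # count tombstone (.ts) files still present on the object devices
  $ sudo find /srv/node -name '*.ts' | wc -l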
Thanks,
Jonathan Lu
On 2013/6/18 11:04, Kuo Hugo wrote:
> Hi Jonathan ,
>
> How did you perform "delete all the objects in the storage"? Those
> deleted objects still consume inodes in tombstone status until the
> reclaim time has passed.
> Would you mind comparing the output of $> sudo cat /proc/slabinfo |
> grep xfs before and after setting vfs_cache_pressure?
>
> spongebob at patrick1:~$ sudo cat /proc/slabinfo | grep xfs
> xfs_ili            70153  70182    216   18    1 : tunables    0    0    0 : slabdata   3899   3899      0
> xfs_inode         169738 170208   1024   16    4 : tunables    0    0    0 : slabdata  10638  10638      0
> xfs_efd_item          60     60    400   20    2 : tunables    0    0    0 : slabdata      3      3      0
> xfs_buf_item         234    234    224   18    1 : tunables    0    0    0 : slabdata     13     13      0
> xfs_trans             28     28    280   14    1 : tunables    0    0    0 : slabdata      2      2      0
> xfs_da_state          32     32    488   16    2 : tunables    0    0    0 : slabdata      2      2      0
> xfs_btree_cur         38     38    208   19    1 : tunables    0    0    0 : slabdata      2      2      0
> xfs_log_ticket        40     40    200   20    1 : tunables    0    0    0 : slabdata      2      2      0
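>
> (A sketch of one way to run that comparison; the file names are
> arbitrary:)
>
> $ cat /proc/sys/vm/vfs_cache_pressure            # current value, default 100
> $ sudo cat /proc/slabinfo | grep xfs > slab_before.txt
> # ... change vfs_cache_pressure and re-run the workload ...
> $ sudo cat /proc/slabinfo | grep xfs > slab_after.txt
> $ diff slab_before.txt slab_after.txt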
>
>
> Hi Robert,
> As far as I know, the performance degradation is still there even
> when only the main Swift workers are running on the storage node
> (with the replicator/updater/auditor stopped).
> I'll check xs_dir_lookup and xs_ig_missed here. Thanks
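>
> (For the record, a sketch of stopping those background daemons on a
> storage node with swift-init; service names assume a standard install:)
>
> $ sudo swift-init object-replicator stop
> $ sudo swift-init object-updater stop
> $ sudo swift-init object-auditor stop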
>
> +Hugo Kuo+
> hugo at swiftstack.com
> tonytkdk at gmail.com
> +886 935004793
>
>
> 2013/6/18 Jonathan Lu <jojokururu at gmail.com>
>
> On 2013/6/17 18:59, Robert van Leeuwen wrote:
>
> I'm facing a performance degradation issue, and I came across a
> suggestion that changing the value of /proc/sys/vm/vfs_cache_pressure
> might help.
> Can anyone explain to me whether and why it is useful?
>
> Hi,
>
> When this is set to a lower value, the kernel will try to keep
> the inode/dentry cache in memory longer.
> Since the Swift replicator is continuously scanning the
> filesystem, it will eat up a lot of IOPS if those entries are
> not in memory.
>
> To see if a lot of cache misses are happening, for xfs, you
> can look at xs_dir_lookup and xs_ig_missed.
> ( look at http://xfs.org/index.php/Runtime_Stats )
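>
> ( A quick way to sample those counters; the field layout follows the
> xfs.org page above, so double-check it against your kernel: )
>
> $ grep -E '^(dir|ig) ' /proc/fs/xfs/stat
> # "dir" line: the first number is xs_dir_lookup (directory lookups)
> # "ig" line: xs_ig_missed is one of the inode-get counters (cache misses)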
>
> We benefited greatly from setting this to a low value, but we
> have quite a lot of files on a node (30 million).
> Note that setting this to zero will result in the OOM killer
> killing the machine sooner or later
> (especially if files are moved around due to a cluster change ;)
>
> Cheers,
> Robert van Leeuwen
>
>
> Hi,
> We set this to a low value (20) and the performance is better
> than before. It seems quite useful.
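>
> ( In case it helps, a sketch of applying and persisting the same value;
> 20 is simply what we used, not a general recommendation: )
>
> $ sudo sysctl -w vm.vfs_cache_pressure=20
> $ echo 'vm.vfs_cache_pressure = 20' | sudo tee -a /etc/sysctl.conf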
>
> According to your description, this issue is related to the
> number of objects in the storage. We deleted all the objects in
> the storage, but it didn't help at all. The only way to recover
> is to reformat and re-mount the storage devices. We have tried
> installing Swift in different environments, but this degradation
> problem seems to be inevitable.
>
> Cheers,
> Jonathan Lu
>
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>