<div dir="ltr">Hi Jonathan,<div><br></div><div style>How did you perform "<span style="font-family:arial,sans-serif;font-size:14px"><font color="#ff9900">delete all the objects in the storage</font>"? Deleted objects still consume inodes as tombstones until the reclaim time passes. </span></div>
<div style><font face="arial, sans-serif"><span style="font-size:14px">Would you mind comparing the output of $> </span></font><span style="font-family:arial,sans-serif;font-size:14px">sudo cat /proc/slabinfo | grep xfs before and after setting vfs_cache_pressure? </span></div>
<div style><span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div style><font face="arial, sans-serif"><div style="font-size:14px"><font color="#f1c232">spongebob</font>@<font color="#0000ff">patrick1:~$</font> sudo cat /proc/slabinfo | grep xfs</div>
<div style="font-size:14px"><span style="font-family:arial;font-size:small"><font color="#999999">xfs_ili </font><font color="#ff0000">70153 70182</font><font color="#999999"> 216 18 1 : tunables 0 0 0 : slabdata 3899 3899 0</font></span><br>
</div></font><font color="#999999">xfs_inode </font><font color="#ff0000">169738 170208</font><font color="#999999"> 1024 16 4 : tunables 0 0 0 : slabdata 10638 10638 0<br>xfs_efd_item 60 60 400 20 2 : tunables 0 0 0 : slabdata 3 3 0<br>
xfs_buf_item 234 234 224 18 1 : tunables 0 0 0 : slabdata 13 13 0<br>xfs_trans 28 28 280 14 1 : tunables 0 0 0 : slabdata 2 2 0<br>xfs_da_state 32 32 488 16 2 : tunables 0 0 0 : slabdata 2 2 0<br>
xfs_btree_cur 38 38 208 19 1 : tunables 0 0 0 : slabdata 2 2 0<br>xfs_log_ticket 40 40 200 20 1 : tunables 0 0 0 : slabdata 2 2 0</font></div>
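For reference, a minimal sketch of that before/after comparison (assumes root access and a standard sysctl setup; 20 is just an example value, not a recommendation):

```shell
# Snapshot the XFS slab counts, lower vfs_cache_pressure, then compare.
sudo cat /proc/slabinfo | grep xfs       # "before" snapshot
sudo sysctl -w vm.vfs_cache_pressure=20  # default is 100; lower values keep
                                         # inode/dentry caches in memory longer
sudo cat /proc/slabinfo | grep xfs       # "after" snapshot
# To persist across reboots (optional):
# echo 'vm.vfs_cache_pressure = 20' | sudo tee -a /etc/sysctl.conf
```

The second column pair (active / total objects) for xfs_inode and xfs_ili is the interesting part: if active objects stay high after the change, the inodes are being kept cached.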
<div style><span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div style><br></div><div style><font face="arial, sans-serif"><span style="font-size:14px">Hi Robert, </span></font></div><div style><font face="arial, sans-serif"><span style="font-size:14px">As far as I know, the performance degradation is still there even when only the main Swift workers are running on the storage node (with the replicator/updater/auditor stopped). </span></font></div>
<div style><font face="arial, sans-serif"><span style="font-size:14px">I'll check xs_dir_lookup and xs_ig_missed here. Thanks.</span></font></div><div style><font face="arial, sans-serif"><span style="font-size:14px"><br>
</span></font></div><div style><font face="arial, sans-serif"><span style="font-size:14px"><br></span></font></div><div style><font face="arial, sans-serif"><span style="font-size:14px"><br></span></font></div><div style>
<span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div style><span style="font-family:arial,sans-serif;font-size:14px"><br></span></div><div style><span style="font-family:arial,sans-serif;font-size:14px"> </span></div>
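A rough sketch of that check (field positions follow the xfs.org Runtime Stats layout; the 10-second interval is arbitrary):

```shell
# Sample xs_dir_lookup and xs_ig_missed twice and compare the delta.
# In /proc/fs/xfs/stat, the "dir" line's first value is xs_dir_lookup
# and the "ig" line's fourth value (awk field $5) is xs_ig_missed.
read_xfs_stats() {
  awk '$1 == "dir" { lookup = $2 }
       $1 == "ig"  { missed = $5 }
       END { print lookup, missed }' /proc/fs/xfs/stat
}
before=$(read_xfs_stats)
sleep 10
after=$(read_xfs_stats)
echo "xs_dir_lookup xs_ig_missed before: $before"
echo "xs_dir_lookup xs_ig_missed after:  $after"
```

If xs_ig_missed grows quickly relative to lookups, inodes are falling out of cache and being re-read from disk, which is exactly what a lower vfs_cache_pressure is meant to avoid.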
</div><div class="gmail_extra"><br clear="all"><div><div dir="ltr"><div>+Hugo Kuo+</div><div><a href="mailto:hugo@swiftstack.com" target="_blank">hugo@swiftstack.com</a><br></div><div><a href="mailto:tonytkdk@gmail.com" target="_blank">tonytkdk@gmail.com<br>
</a></div><div>+886 935004793<br></div></div></div>
<br><br><div class="gmail_quote">2013/6/18 Jonathan Lu <span dir="ltr"><<a href="mailto:jojokururu@gmail.com" target="_blank">jojokururu@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">On 2013/6/17 18:59, Robert van Leeuwen wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I'm facing an issue of performance degradation, and I came across a suggestion that changing the value in /proc/sys/vm/vfs_cache_pressure might help.<br>
Can anyone explain to me whether and why it is useful?<br>
</blockquote>
Hi,<br>
<br>
When this is set to a lower value the kernel will try to keep the inode/dentry cache longer in memory.<br>
Since the swift replicator is scanning the filesystem continuously it will eat up a lot of iops if those are not in memory.<br>
<br>
To see if a lot of cache misses are happening, for xfs, you can look at xs_dir_lookup and xs_ig_missed.<br>
( look at <a href="http://xfs.org/index.php/Runtime_Stats" target="_blank">http://xfs.org/index.php/<u></u>Runtime_Stats</a> )<br>
<br>
We greatly benefited from setting this to a low value, but we have quite a lot of files on a node (30 million).<br>
Note that setting this to zero will result in the OOM killer killing the machine sooner or later.<br>
(especially if files are moved around due to a cluster change ;)<br>
<br>
Cheers,<br>
Robert van Leeuwen<br>
</blockquote>
<br></div>
Hi,<br>
We set this to a low value (20) and the performance is better than before. It seems quite useful.<br>
<br>
According to your description, this issue is related to the number of objects in the storage. We deleted all the objects in the storage, but it didn't help at all. The only way to recover is to format and re-mount the storage node. We tried installing Swift in different environments, but this degradation problem seems to be inevitable.<br>
<br>
Cheers,<br>
Jonathan Lu<div class="HOEnZb"><div class="h5"><br>
<br>
______________________________<u></u>_________________<br>
Mailing list: <a href="https://launchpad.net/~openstack" target="_blank">https://launchpad.net/~<u></u>openstack</a><br>
Post to : <a href="mailto:openstack@lists.launchpad.net" target="_blank">openstack@lists.launchpad.net</a><br>
Unsubscribe : <a href="https://launchpad.net/~openstack" target="_blank">https://launchpad.net/~<u></u>openstack</a><br>
More help : <a href="https://help.launchpad.net/ListHelp" target="_blank">https://help.launchpad.net/<u></u>ListHelp</a><br>
</div></div></blockquote></div><br></div>