[Openstack] [SWIFT] Bad replication performance after adding new drives

Klaus Schürmann klaus.schuermann at mediabeam.com
Wed Feb 11 13:43:33 UTC 2015


My XFS inode size is 256 bytes.
If I increase the memory and set vm.vfs_cache_pressure to 1, is it possible to keep the whole inode tree in memory?
Maybe the random disk seeks are the problem.
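
Something like this is what I have in mind (just a sketch; the slab name and sysctl are from a standard Linux setup, the value is simply what I would try first):

# see how much memory the XFS inode cache is using right now
grep xfs_inode /proc/slabinfo

# make the kernel much more reluctant to drop inode/dentry caches
sysctl -w vm.vfs_cache_pressure=1

# persist the setting across reboots
echo 'vm.vfs_cache_pressure = 1' >> /etc/sysctl.conf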

My disks are all Hitachi SATA 3 TB or 4 TB drives at 7,200 rpm.
Moving 1250 partitions should go faster than 2-3 weeks, even if there are more than 20,000 objects stored in each partition. In our case that is only about 2.5 GB per partition, so roughly 3 TB in total.

Here is an iostat snapshot:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          16.78    0.00   19.75    8.28    0.00   55.19

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     1.40   10.20    2.80   220.00   115.20    51.57     0.10    7.51    9.57    0.00   7.26   9.44
sdb               0.00     0.00   16.40    3.60   300.00   187.20    48.72     0.18    8.84   10.78    0.00   7.08  14.16
sdc               0.00     0.00   16.00    4.80   303.20   273.40    55.44     0.30   14.19   18.45    0.00   8.88  18.48
sdd               0.00     0.20  494.80  137.60  3693.60  1074.30    15.08     0.12    0.19    0.23    0.05   0.10   6.32
sde               0.00     0.00  143.80    2.60  1028.80   123.90    15.75     1.15    7.84    7.97    0.62   6.49  95.04
sdf               0.00     0.00   11.40    1.00   270.40    42.40    50.45     0.14   11.29   12.28    0.00   8.06  10.00
sdg               0.00     0.00   16.60    5.40   226.40   316.10    49.32     0.23   10.51   13.78    0.44   8.25  18.16
sdh               0.00     0.00   94.80    3.80   681.60   156.70    17.00     1.27   12.51   13.00    0.21   9.89  97.52
sdi               0.00     0.00   43.40    4.60  1741.60   210.30    81.33     1.15   24.70   27.32    0.00  11.18  53.68
sdj               0.00     1.20   17.00  135.40   194.40  1482.30    22.00     0.67    4.39   18.26    2.65   1.42  21.68
sdk               0.00     0.00   26.20    4.20  1382.40   183.30   103.01     0.24    7.84    9.04    0.38   7.34  22.32
sdl               0.00     1.20   14.40  139.80   223.20   779.10    13.00     0.55    3.58   12.67    2.64   0.98  15.04



-----Original Message-----
From: Robert van Leeuwen [mailto:Robert.vanLeeuwen at spilgames.com]
Sent: Tuesday, 10 February 2015 12:23
To: Klaus Schürmann; 'openstack at lists.openstack.org'
Subject: RE: [Openstack] [SWIFT] Bad replication performance after adding new drives

> I set vfs_cache_pressure to 10 and moved the container- and account-servers to SSD drives.
> The normal performance for object writes and reads is quite OK.

> But why does moving some partitions to only two new hard disks take so much time?
> Will it be faster if I add more memory?

My guess: Probably the source disks/servers are slow.
When the inode tree is not in memory, it will do a lot of random reads on the disks (for both the inode tree and the actual files).
An rsync of any directory will become slow on the source side (IIRC you can see this in the replicator log). You should be able to see in e.g. atop or iostat whether the source or the destination disks are the limiting factor; a rough sketch of the commands I mean follows below.
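
Roughly (just a sketch; the log location depends on how your syslog is configured):

# per-disk utilization in 5-second intervals, on both source and destination nodes
iostat -x 5
atop 5

# replication progress and timings show up in the object-replicator log
# (assuming Swift logs to syslog here; adjust the path to your setup)
grep object-replicator /var/log/syslog | tail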

If the source is the issue it might help to increase the maximum number of simultaneous rsync processes, so you have more parallel slow processes ;) Note that this can have an impact on the general speed of the Swift cluster (see the config sketch below).
More memory will probably help a bit.
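
Concretely, the two knobs I mean are along these lines (paths and section names as in a default Swift install; the numbers are only placeholders to tune against what your disks can handle):

# /etc/rsyncd.conf on the nodes receiving partitions
[object]
max connections = 8
path = /srv/node
read only = false
lock file = /var/lock/object.lock

# /etc/swift/object-server.conf on the nodes pushing partitions
[object-replicator]
concurrency = 4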

Cheers,
Robert



