[swift] Adding disks - one by one or all lightly weighted?

Tim Burke tim at swiftstack.com
Fri Jan 24 21:23:56 UTC 2020


On Thu, 2020-01-23 at 12:34 +1300, Mark Kirkwood wrote:
> Hi,
> 
> We want to increase the number of disks in each of our storage
> nodes - from 4 to 12.
> 
> I'm wondering whether it is better to:
> 
> 1/ Add the 1st new disk (with a reduced weight), increase its weight
> until full, then repeat for the next disk, etc.
> 
> 2/ Add them all with a much-reduced weight (i.e. 1/8 of that in
> option 1/), then increase the weights until done.
> 
> Thoughts?
> 
> regards
> 
> Mark
> 
> 

Hi Mark,

I'd go with option 2 -- the quicker you can get all of the new disks
helping with load, the better. Gradual weight adjustments seem like a
good idea; they should help keep your replication traffic reasonable.
Note, though, that as long as you wait a full replication cycle between
rebalances, swift should only move a single replica at a time, even if
you'd added the new devices at full weight.
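
For concreteness, here's a rough sketch of what option 2 might look
like with swift-ring-builder -- the IPs, ports, device names, and
weights below are made up for illustration (weights are conventionally
the disk size in GB, so a 4TB drive would end up at 4000):

  # Add the new disks at roughly 1/8 of their eventual weight
  swift-ring-builder object.builder add r1z1-192.168.1.10:6200/sde 500
  swift-ring-builder object.builder add r1z1-192.168.1.10:6200/sdf 500
  # ...repeat for the remaining new devices, then
  swift-ring-builder object.builder rebalance

  # After a full replication cycle, step the weights up and go again,
  # repeating until the devices reach their final weight
  swift-ring-builder object.builder set_weight r1z1-192.168.1.10:6200/sde 1000
  swift-ring-builder object.builder set_weight r1z1-192.168.1.10:6200/sdf 1000
  swift-ring-builder object.builder rebalance

Remember to push the rebalanced ring out to all nodes after each step.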

Of course, tripling capacity like this (assuming that the new disks are
the same size as the existing ones) tends to take a while. You should
probably familiarize yourself with the emergency replication options
and consider enabling some of them until your rings reflect the new
topology; see

 * https://github.com/openstack/swift/blob/2.23.0/etc/object-server.conf-sample#L290-L298
 * https://github.com/openstack/swift/blob/2.23.0/etc/object-server.conf-sample#L300-L307
 * https://github.com/openstack/swift/blob/2.23.0/etc/object-server.conf-sample#L353-L364

These can be really useful for speeding up rebalances, though swift's
durability guarantees take a bit of a hit -- so turn them back off once
you've had a cycle or two with the drives at full weight! If the
existing drives are full or nearly so (which, in my experience, tends
to be the case when there's a large capacity increase), these options
may be necessary to get the system back to a state where it can make
good progress.
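
To make that concrete: these knobs live in object-server.conf on the
storage nodes. A hypothetical sketch (handoff_delete = 2 assumes a
3-replica policy, and handoffs_only only matters if you have EC
policies):

  [object-replicator]
  # Replicate partitions that don't belong on this node before anything
  # else, so data reaches the new disks (and frees the old ones) sooner.
  handoffs_first = True
  # Allow local handoff partitions to be deleted once 2 replicas are
  # confirmed elsewhere, rather than waiting for all of them (the
  # default, "auto").
  handoff_delete = 2

  [object-reconstructor]
  # EC analogue of handoffs_first: process only handoff partitions.
  handoffs_only = True

Then restart the replicator/reconstructor so they pick up the changes.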

Good luck!

Tim



