[Openstack] Swift Ring Maintenance
clay.gerrard at gmail.com
Mon Aug 25 20:00:43 UTC 2014
If you're truly not concerned about the replication overhead, the approach
that requires the minimal amount of data movement is to just remove and
re-add the devices in their new zone with a single rebalance and ring-push.
That may or may not be a good idea to recommend depending on how much data
you have and the speed of your replication network and object servers.
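To make that concrete, the bare-bones version of that flow looks roughly
like this (the builder name, device id, IP, port, zone and weight below are
all placeholder examples, not values from your cluster):

    # see the current layout and note the id of the device being moved
    swift-ring-builder object.builder
    # drop the device from its old zone (d42 is a made-up device id)
    swift-ring-builder object.builder remove d42
    # re-add the same disk in its new zone (region/zone/weight are examples)
    swift-ring-builder object.builder add r1z3-192.168.1.10:6000/sdb1 100
    # one rebalance, then push the resulting object.ring.gz to every node
    swift-ring-builder object.builder rebalance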
As a general rule I might suggest doing a single device first and seeing how
it goes. But if you currently have more replicas than zones you're really
going to want the latest ring-builder code that landed this week, or you'll
be in for a no-good time - definitely recommend you copy your builder out
to a temporary directory and validate the new weights before pushing.
Keeping good backups of your builders could really save you if things go
wrong.
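Something like this is what I mean by trying it out on a copy first (the
paths here are just an example):

    # never experiment on the live builder; work on a throwaway copy
    mkdir -p /tmp/ring-test
    cp /etc/swift/object.builder /tmp/ring-test/
    cd /tmp/ring-test
    # make the zone changes here, then rebalance and sanity-check the result
    swift-ring-builder object.builder rebalance
    swift-ring-builder object.builder validate
    # review per-device weights and the reported balance before pushing anything
    swift-ring-builder object.builder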
On Mon, Aug 25, 2014 at 8:12 AM, Lillie Ross-CDSR11 <
Ross.Lillie at motorolasolutions.com> wrote:
> Thanks John,
> Yes, I’ve read this article, but thanks for pointing me at it again.
> On Aug 22, 2014, at 2:29 PM, John Dickinson <me at not.mn> wrote:
> > You've actually identified the issues involved. Here's a writeup on how
> you can do it, and the general best-practice for capacity management in Swift:
> > https://swiftstack.com/blog/2012/04/09/swift-capacity-management/
> > --John
> > On Aug 22, 2014, at 11:50 AM, Lillie Ross-CDSR11 <
> Ross.Lillie at motorolasolutions.com> wrote:
> >> All,
> >> I want to reconfigure a number of disks in my Swift storage cluster to
> reside in different zones, and I’m unsure of the best way to accomplish this.
> >> One way would be to set the drive weights to 0 and wait for data to
> migrate off the drives, then remove the drives from their current zone and
> re-add them to the new zone, rebalance and push the new ring files out
> to the cluster.
> >> Or I could simply remove the drives, re-add the drives to their new
> zones, rebalance and push out the updated ring files.
> >> Is one approach better than the other, or is there a better way than
> I’ve outlined above? Since any approach would be performed over a weekend,
> I’m not concerned about the effects on cluster performance as partitions
> are shuffled around.
> >> Thoughts and inputs are welcome.
> >> Thanks,
> >> Ross