[openstack-dev] [swift] Problem balancing a ring file with multiple regions

Adrian Smith adrian_f_smith at dell.com
Thu Mar 21 09:40:28 UTC 2013


Of course. A second rebalance did the trick.

Thanks John.
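
For anyone hitting the same thing, the fix really is just a second pass over
the same builder. A minimal sketch (min_part_hours was set to 1 in the create
command quoted below, so either wait an hour or, if your version of
swift-ring-builder supports it, skip the wait in a throwaway test cluster):

  # Test clusters only: pretend the min_part_hours cool-down has passed
  # (check that your swift-ring-builder version has this command).
  swift-ring-builder account.builder pretend_min_part_hours_passed

  # Second rebalance: moves partitions off the overloaded devices now
  # that the cool-down is out of the way.
  swift-ring-builder account.builder rebalance

  # Running the builder with no sub-command prints the device table and
  # the balance, which should now be at or close to 0.
  swift-ring-builder account.builder

The arithmetic behind the 33% figure is sketched after the quoted thread.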

On 20 March 2013 16:46, John Dickinson <me at not.mn> wrote:
> It's the "unique-as-possible" placement that swift uses. Since regions are "more unique" than zones, it is placing 3 replicas into 2 regions and needs a couple of rebalances to settle down. What you are seeing is exactly the same as if you had 1 region and only 2 zones.
>
> --John
>
>
> On Mar 20, 2013, at 9:40 AM, Adrian Smith <adrian_f_smith at dell.com> wrote:
>
>> I'm putting together a simple test cluster across two DCs using the
>> new region level. Each DC has two storage nodes, each with two
>> devices. The logical setup looks like this,
>>
>> region 1
>>   zone 1
>>      10.21.146.76
>>         sdb1
>>         sdc1
>>   zone 2
>>      10.21.146.77
>>         sdb1
>>         sdc1
>> region 2
>>   zone 1
>>      10.49.123.77
>>         sdb1
>>         sdc1
>>   zone 2
>>      10.49.123.78
>>         sdb1
>>         sdc1
>>
>> Obviously this isn't an ideal setup since I only have four zones
>> rather than the preferred five, but for my basic needs it's fine.
>>
>> I create a ring,
>>
>> swift-ring-builder account.builder create 10 3 1
>> swift-ring-builder account.builder add r1z1-10.21.146.76:6002/sdb1 100
>> swift-ring-builder account.builder add r1z1-10.21.146.76:6002/sdc1 100
>> swift-ring-builder account.builder add r1z2-10.21.146.77:6002/sdb1 100
>> swift-ring-builder account.builder add r1z2-10.21.146.77:6002/sdc1 100
>> swift-ring-builder account.builder add r2z1-10.49.123.77:6002/sdb1 100
>> swift-ring-builder account.builder add r2z1-10.49.123.77:6002/sdc1 100
>> swift-ring-builder account.builder add r2z2-10.49.123.78:6002/sdb1 100
>> swift-ring-builder account.builder add r2z2-10.49.123.78:6002/sdc1 100
>>
>> and then attempt to rebalance it with,
>>
>> swift-ring-builder account.builder rebalance
>>
>> The output is...
>>
>> Device r1z1-10.21.146.76:6002/sdb1_"" with 100.0 weight got id 0
>> Device r1z1-10.21.146.76:6002/sdc1_"" with 100.0 weight got id 1
>> Device r1z2-10.21.146.77:6002/sdb1_"" with 100.0 weight got id 2
>> Device r1z2-10.21.146.77:6002/sdc1_"" with 100.0 weight got id 3
>> Device r2z1-10.49.123.77:6002/sdb1_"" with 100.0 weight got id 4
>> Device r2z1-10.49.123.77:6002/sdc1_"" with 100.0 weight got id 5
>> Device r2z2-10.49.123.78:6002/sdb1_"" with 100.0 weight got id 6
>> Device r2z2-10.49.123.78:6002/sdc1_"" with 100.0 weight got id 7
>> Reassigned 1024 (100.00%) partitions. Balance is now 33.07.
>> -------------------------------------------------------------------------------
>> NOTE: Balance of 33.07 indicates you should push this
>>      ring, wait at least 1 hours, and rebalance/repush.
>> -------------------------------------------------------------------------------
>>
>> Any idea why the balance is 33.07 rather than 0?
>>
>> I've tried using a partition power of 18 but get the same result
>> (balance of 33.33 with 262144 partitions).
>>
>> I'm using Swift 1.8.0-rc1.
>>
>> Thanks
>> Adrian
>>
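
For reference, a rough sketch of the arithmetic behind that balance figure,
using only numbers from the thread above: a partition power of 10 gives
2^10 = 1024 partitions, and 3 replicas means 3072 partition-replicas spread
over 8 equal-weight devices, i.e. an ideal load of 384 per device. Since 3
replicas cannot be split evenly across 2 regions, every partition ends up
with 2 replicas in one region and 1 in the other. A balance of ~33 is
consistent with the first rebalance putting the doubled replica in the same
region for (almost) every partition: that region's 4 devices then carry
roughly 512 partition-replicas each, about 33% above the ideal 384, while
the other region's devices sit about 33% below. The second rebalance
presumably alternates, per partition, which region holds the doubled
replica, bringing every device back toward 384 and the balance toward 0.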


