[Openstack] Rebalancing issue

Clay Gerrard clay.gerrard at gmail.com
Tue Sep 17 18:29:25 UTC 2013


I'm not sure what version ships with the "official ubuntu packages" - but
sometime back around last year / Folsom / 1.5, Sam landed a change in
partition placement [1] and you stopped needing to contort your zones into
made-up things [2].  You can have a cluster with a single failure domain, if
that's what you've got, and partition placement will still do the right
thing.  There are some advantages to using fewer zones, because the builder
has more flexibility moving things around; but if you *actually* have
fault-tolerant partitions in your deployment (beyond a single server), you
want to bake that into your data placement so you can take advantage of it
should you *actually* lose a whole zone.

Zone-per-server is not needed and offers no advantage on newer versions of
Swift; it may not be a problem, but it can cause balancing issues.
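
For example, here's a minimal sketch of a single-zone builder on a current
Swift, where unique-as-possible should still spread replicas across servers
and devices - the builder name, IPs, ports, weights and device names are
made up for illustration, not taken from your paste:

$ swift-ring-builder object.builder create 18 3 1
$ swift-ring-builder object.builder add z1-10.44.1.104:6000/sda1 150
$ swift-ring-builder object.builder add z1-10.44.1.104:6000/sdb1 150
$ swift-ring-builder object.builder add z1-10.44.1.105:6000/sda1 150
$ swift-ring-builder object.builder add z1-10.44.1.105:6000/sdb1 150
$ swift-ring-builder object.builder rebalance

Even with everything in zone 1, placement should still try to put each
partition's three replicas on different servers (and different devices)
before it ever doubles up on one box.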

-Clay

1. generally referred to as "unique-as-possible"
2. http://swiftstack.com/blog/2013/02/25/data-placement-in-swift/


On Mon, Sep 16, 2013 at 10:36 PM, Morten Møller Riis <mmr at gigahost.dk> wrote:

> Thank you Clay! That makes sense!
>
> I wasn't aware that zones needed to be about the same size. The first 3
> devices are going away. As the names might reveal, they are running on
> software RAID partitions, which is a bad idea. So I guess some of the
> problem will correct itself when they are removed completely.
>
> I'm using the official ubuntu packages so I'll have to wait until the
> commit makes it into those.
>
> At the moment the zones represent different servers (but same rack, DC).
> Is that a bad idea?
>
> Mvh / Best regards
> Morten Møller Riis
> Gigahost ApS
> mmr at gigahost.dk
>
>
>
>
> On Sep 17, 2013, at 4:11 AM, Clay Gerrard <clay.gerrard at gmail.com> wrote:
>
> So zones 6 & 7 have drastically more weight than 1, 2, 3, 4, 5 - but since
> you have three replicas, unique-as-possible is trying hard to fill up 3, 4,
> 5 and leaving 6 & 7 with room to spare and nothing to move.
>
> You want to try to keep your zones roughly the same size overall, and
> before this change:
>
> https://review.openstack.org/#/c/41802/
>
> ... it was almost a requirement.  If possible, can you check out that
> change and try to rebalance this .builder with that new code?
>
> If not, the easiest thing may be to set device 2's weight to zero - I'm
> pretty sure that will force those partitions to move. You may also want to
> combine zones 4 and 5 by setting the weight of the devices in one zone to 0
> and re-adding them in the other zone.  Good luck!
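>
> Roughly, something like this - the device ids here follow the paste below,
> but double-check them against your actual builder before running anything:
>
> $ swift-ring-builder account.builder set_weight d2 0
> $ swift-ring-builder account.builder rebalance
>
> and to fold zone 5 into zone 4, drain its devices first, then remove and
> re-add them in zone 4:
>
> $ swift-ring-builder account.builder set_weight d7 0
> $ swift-ring-builder account.builder set_weight d8 0
> $ swift-ring-builder account.builder rebalance
> $ swift-ring-builder account.builder remove d7
> $ swift-ring-builder account.builder remove d8
> $ swift-ring-builder account.builder add z4-10.44.1.105:6002/sda1 150
> $ swift-ring-builder account.builder add z4-10.44.1.105:6002/sdb1 150
> $ swift-ring-builder account.builder rebalance
>
> (keep min_part_hours in mind - the builder won't reassign a partition
> replica again within that window, so space the rebalances out and push the
> rings in between)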
>
> -Clay
>
>
>
> On Sun, Sep 15, 2013 at 7:41 PM, Morten Møller Riis <mmr at gigahost.dk> wrote:
>
>> This is an example. It's the same for objects/containers/accounts (I
>> recently lowered the weight of the first two to 0 in preparation for
>> replacing them).
>>
>> $ swift-ring-builder account.builder
>> account.builder, build version 56
>> 262144 partitions, 3 replicas, 7 zones, 15 devices, 306.67 balance
>> The minimum number of hours before a partition can be reassigned is 1
>> Devices:    id  zone      ip address  port      name weight partitions balance meta
>>              0     1     10.44.1.101  6002       md2   0.00          0    0.00
>>              1     2     10.44.1.102  6002       md2   0.00          0    0.00
>>              2     3     10.44.1.103  6002       md2  25.00      26431  306.67
>>              5     4     10.44.1.104  6002      sda1 150.00      58928   51.11
>>              6     4     10.44.1.104  6002      sdb1 150.00      58928   51.11
>>              7     5     10.44.1.105  6002      sda1 150.00      58928   51.11
>>              8     5     10.44.1.105  6002      sdb1 150.00      58929   51.11
>>              9     6     10.44.1.106  6002      sdc1 300.00      65536  -15.97
>>             10     6     10.44.1.106  6002      sdd1 300.00      65536  -15.97
>>             11     6     10.44.1.106  6002      sde1 300.00      65536  -15.97
>>             12     6     10.44.1.106  6002      sdf1 300.00      65536  -15.97
>>             13     7     10.44.1.107  6002      sda1 300.00      65536  -15.97
>>             14     7     10.44.1.107  6002      sdb1 300.00      65536  -15.97
>>             15     7     10.44.1.107  6002      sdc1 300.00      65536  -15.97
>>             16     7     10.44.1.107  6002      sdd1 300.00      65536  -15.97
>> $ swift-ring-builder account.builder rebalance
>> Cowardly refusing to save rebalance as it did not change at least 1%.
>> $
>>
>>
>>
>>
>>  Mvh / Best regards
>> Morten Møller Riis
>> Gigahost ApS
>> mmr at gigahost.dk
>>
>>
>>
>>
>> On Sep 14, 2013, at 2:03 AM, Clay Gerrard <clay.gerrard at gmail.com> wrote:
>>
>> Those two statements do seem in contrast - run `swift-ring-builder
>> account.builder` and check what the current balance is.  Can you paste
>> the output?  Maybe you have an unbalanced region/zone/server and it just
>> can't do any better than it is?
>>
>> -Clay
>>
>>
>> On Thu, Sep 12, 2013 at 11:53 PM, Morten Møller Riis <mmr at gigahost.dk> wrote:
>>
>>> When adjusting weights and rebalancing I get the message:
>>>
>>> NOTE: Balance of 306.68 indicates you should push this
>>>       ring, wait at least 1 hours, and rebalance/repush.
>>>
>>> However, after waiting a couple of hours and running swift-ring-builder
>>> account.builder rebalance, it says it has nothing to rebalance:
>>>
>>> Cowardly refusing to save rebalance as it did not change at least 1%.
>>>
>>>
>>> What am I getting wrong here?
>>>
>>>
>>>  Mvh / Best regards
>>> Morten Møller Riis
>>> Gigahost ApS
>>> mmr at gigahost.dk
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>