[Openstack] (no subject)

Roman Kravets softded at gmail.com
Fri Mar 28 15:20:00 UTC 2014


Adam,

I understand that, but I see that Swift constantly runs object replication
over all the data in the cluster.
That puts a heavy load on the server's hard drives.
Is this normal?
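
(For reference: the replicator's pass frequency and worker count are controlled
in the [object-replicator] section of object-server.conf. A minimal sketch with
illustrative values, not recommendations:

    # /etc/swift/object-server.conf
    [object-replicator]
    concurrency = 1     # replication workers (default 1)
    run_pause = 30      # seconds to sleep between replication passes (default 30)

Raising run_pause spaces the passes out and should lower the steady-state disk
load.)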

--
Best regards,
Roman Kravets


On Fri, Mar 28, 2014 at 5:29 AM, Adam Lawson <alawson at aqorn.com> wrote:

> Swift is said to be eventually consistent because the data is stored then
> eventually distributed in a balanced way. You don't need to manually
> re-balance the rings constantly. Swift will do that for you. Re-balancing
> rings is usually initiated after you *change the ring structure* (add/remove
> regions, add/remove zones, change device weights, etc.).
>
> In your case, since you only have one node, Swift will distribute the
> replicas across all 3 zones, assuming you've configured 3x replication. When
> you add a node and update the rings, yes, you'll want to re-balance. That
> will tell Swift to put a replica on the new node, since Swift's default
> behavior is to keep replica placements "as unique as possible". That's the
> actual Swift vernacular everyone uses. ; )
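>
> For example, adding a new node and re-balancing might look roughly like this
> (the IP address, port, device name and weight below are placeholders):
>
>     # on the node holding the builder files
>     swift-ring-builder object.builder add r1z4-10.0.0.2:6000/sdb1 100
>     swift-ring-builder object.builder rebalance
>     # then copy the regenerated object.ring.gz to every node's /etc/swift/
>
> The same add/rebalance steps apply to the container and account builders.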
>
> Unique replica placement strategy is as follows:
>
> Region (if defined) > Zone > Node > Device > Device with fewest replicas
>
>
> Good luck.
>
> Adam
>
>
> *Adam Lawson*
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (888) 406-7620
>
>
>
> On Thu, Mar 27, 2014 at 12:48 PM, Roman Kravets <softded at gmail.com> wrote:
>
>> Dear Adam,
>>
>> I have one storage server with 12 hard drives in it.
>> For testing, I split the disks into 4 zones. If I understood correctly, Swift
>> only moves data around during a ring "re-balance" and otherwise places
>> uploaded data directly on the correct node.
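>>
>> For reference, a single-node layout like that would be built roughly as
>> follows; the part power, IP, port and device names here are placeholders:
>>
>>     swift-ring-builder object.builder create 18 3 1
>>     # repeat the add for each of the 12 drives, spread over zones z1..z4
>>     swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
>>     swift-ring-builder object.builder add r1z2-10.0.0.1:6000/sdc1 100
>>     ...
>>     swift-ring-builder object.builder rebalance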
>>
>> --
>> Best regards,
>> Roman Kravets
>>
>>
>> On Thu, Mar 27, 2014 at 10:05 PM, Adam Lawson <alawson at aqorn.com> wrote:
>>
>>> That probably has to do with the fact that you (I'm guessing) don't have very
>>> many drives on that server. Is that correct? I know that even with
>>> 50 drives across a cluster (still very small), ring balance sits at 100%
>>> until the rings are adequately balanced. Look at your ring stats, drive
>>> count, and 5 zones for more consistent reports.
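>>>
>>> To check the stats, running the builder with no sub-command prints a summary
>>> that includes the partition count, replicas, zones, devices and the current
>>> balance (the path below is the usual default and may differ on your setup):
>>>
>>>     swift-ring-builder /etc/swift/object.builder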
>>>
>>>
>>> *Adam Lawson*
>>> AQORN, Inc.
>>> 427 North Tatnall Street
>>> Ste. 58461
>>> Wilmington, Delaware 19801-2230
>>> Toll-free: (888) 406-7620
>>>
>>>
>>>
>>> On Thu, Mar 27, 2014 at 10:20 AM, Кравец Роман <softded at gmail.com> wrote:
>>>
>>>> Hello.
>>>>
>>>> I installed OpenStack Swift on a test server and uploaded 50 GB of data.
>>>> Now I see this in the log:
>>>> root at storage1:/var/staff/softded# tail -n 1000 -f /var/log/syslog  |
>>>> grep  replicated
>>>> Mar 27 19:44:24 storage1 object-replicator 112746/187053 (60.27%)
>>>> partitions replicated in 300.01s (375.81/sec, 3m remaining)
>>>> Mar 27 19:47:44 storage1 object-replicator 187053/187053 (100.00%)
>>>> partitions replicated in 499.71s (374.32/sec, 0s remaining)
>>>> Mar 27 19:53:14 storage1 object-replicator 112863/187068 (60.33%)
>>>> partitions replicated in 300.01s (376.20/sec, 3m remaining)
>>>> Mar 27 19:56:29 storage1 object-replicator 187068/187068 (100.00%)
>>>> partitions replicated in 494.53s (378.27/sec, 0s remaining)
>>>> Mar 27 20:01:59 storage1 object-replicator 112343/187080 (60.05%)
>>>> partitions replicated in 300.01s (374.47/sec, 3m remaining)
>>>> Mar 27 20:05:18 storage1 object-replicator 187080/187080 (100.00%)
>>>> partitions replicated in 498.55s (375.25/sec, 0s remaining)
>>>> Mar 27 20:10:48 storage1 object-replicator 112417/187092 (60.09%)
>>>> partitions replicated in 300.01s (374.71/sec, 3m remaining)
>>>>
>>>> Why does the object-replicator show a different percentage every time?
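>>>>
>>>> (My guess from the timestamps: the ~60% lines at 300s look like the periodic
>>>> progress reports the replicator logs while a pass is still running, and the
>>>> 100% lines mark the end of a full pass; the next pass then counts up from
>>>> zero again. The relevant knobs, with what I believe are the defaults:
>>>>
>>>>     # /etc/swift/object-server.conf
>>>>     [object-replicator]
>>>>     stats_interval = 300   # seconds between in-progress progress log lines
>>>>     run_pause = 30         # seconds to sleep before the next full pass
>>>> )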
>>>>
>>>> Thank you!
>>>>
>>>> --
>>>> Best regards,
>>>> Roman Kravets
>>>>
>>>> _______________________________________________
>>>> Mailing list:
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to     : openstack at lists.openstack.org
>>>> Unsubscribe :
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>
>>>
>>
>