[Openstack] Swift - set preferred proxy/datanodes (cross datacenter schema)

Leandro Reox leandro.reox at gmail.com
Tue Dec 6 20:52:08 UTC 2011


Thank you all! I've been reading about the container sync functionality; we
will walk that path for sure. I was just hoping that a magic conf flag like
"preferred_datanodes =" existed hahaha :)

Best regards
Lean

On Tue, Dec 6, 2011 at 5:42 PM, andi abes <andi.abes at gmail.com> wrote:

> sorry, should have included the link:
> http://swift.openstack.org/overview_container_sync.html
>
>
> On Tue, Dec 6, 2011 at 2:49 PM, andi abes <andi.abes at gmail.com> wrote:
>
>> You could try to use the container sync added in 1.4.4.
>>
>> The scheme would be to set up two separate clusters, one in each data
>> center, so that requests are satisfied locally. You would also set up
>> your containers identically and configure them to sync, to make sure
>> data is available in both DCs.
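
For readers finding this thread later: a minimal sketch of the per-container
headers that drive container sync. The endpoint URL, container names, and key
below are placeholders, and the peer cluster must also be listed in each
container server's allowed_sync_hosts for syncing to actually run.

```python
# Sketch: the two headers that enable container sync between clusters.
# All URLs and keys here are hypothetical examples.

def sync_headers(dest_container_url, sync_key):
    """Build the headers that tell Swift to sync this container to a
    peer container in the other datacenter. The key must be set to the
    same value on both containers."""
    return {
        "X-Container-Sync-To": dest_container_url,
        "X-Container-Sync-Key": sync_key,
    }

headers = sync_headers(
    "http://dc2-proxy.example.com:8080/v1/AUTH_acct/mycontainer",
    "sharedsecret",
)
# With python-swiftclient these would be applied roughly like:
#   swiftclient.client.put_container(url, token, "mycontainer",
#                                    headers=headers)
print(headers["X-Container-Sync-To"])
```

The same call, with the URLs reversed, is made against the container in the
other datacenter so the two sync toward each other.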
>>
>> You might want to consider how many replicas you want in each data
>> center, and how you'd recover from failures, rather than just setting up 2
>> DC x 3-5 replicas for each object.
>>
>> a.
>>
>>
>> On Tue, Dec 6, 2011 at 1:49 PM, Caitlin Bestler <
>> Caitlin.Bestler at nexenta.com> wrote:
>>
>>> Leandro Reox asked:
>>>
>>>
>>>
>>> > We're replicating our datacenter in another location (something like
>>> Amazon's east and west coasts). Thinking about our applications and our
>>> use of Swift, is there any way we can set up weights for our datanodes
>>> so that if a request enters via, for example, DATACENTER 1, the main
>>> copy of the data is written to a datanode in the SAME datacenter, or
>>> read from the same datacenter, so that when we read it through a proxy
>>> node in the same datacenter we don't add the latency between the two
>>> datacenters? The motto is "if a request to write or read enters via
>>> DATACENTER 1, then it is served via proxy nodes/datanodes located in
>>> DATACENTER 1", and the replicas then get copied across zones spanning
>>> both datacenters.
>>>
>>> > Routing the request to specific proxy nodes is easy, but I don't know
>>> if Swift has a way to manage this internally for the datanodes too.
>>>
>>> ** **
>>>
>>> I don’t see how you would accomplish that with the current Swift
>>> infrastructure.****
>>>
>>> ** **
>>>
>>> An object is hashed to a partition, and the ring determines where
>>> replicas of that partition are stored.****
>>>
>>> ** **
>>>
>>> What you seem to be suggesting is that when an object is created in
>>> region X it should be assigned to a partition that is primarily stored
>>> in region X, while if the same object had been created in region Y it
>>> would be assigned to a partition that is primarily stored in region Y.
>>>
>>>
>>> The problem is that "where this object was first created" is not a
>>> contributor to the hash algorithm, nor could it be, since there is no
>>> way for someone trying to get that object to know where it was first
>>> created.
>>>
>>>
>>> What I think you are looking for is a solution where you have two
>>> rings, DATACENTER-WEST and DATACENTER-EAST. Both of these rings would
>>> have an adequate number of replicas to function independently, but
>>> would asynchronously update each other to provide eventual consistency.
>>>
>>> That would use more disk space, but avoids making all updates wait for
>>> the data to be updated at each site.
>>>
>>>
>>> _______________________________________________
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to     : openstack at lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>