[openstack-dev] [swift] zones and partition

Matthew Oliver matt at oliver.net.au
Mon Feb 22 03:32:11 UTC 2016


Kiru,

That just means you have put even weight on all your drives, so you're
telling Swift to store it that way.

So the short answer is there is more to it than that. Sure, evenly balanced
makes life easier, but it doesn't have to be the case. You can set drive
weights and an overload factor to tune/balance data placement throughout the
cluster. Further, you have more than just regions and zones; Swift knows
about servers and disks, and will always attempt to keep the objects as
dispersed and durable as possible.
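As a rough sketch of the kind of tuning I mean (the builder file name, IPs,
ports and weights below are just placeholders, not taken from your cluster),
you weight each device by its capacity and allow a small overload so the
builder can trade a little weight-balance for better dispersion:

  swift-ring-builder object.builder add r1z1-192.168.0.1:6000/sdb 1000
  swift-ring-builder object.builder add r1z2-192.168.0.2:6000/sdb 1000
  swift-ring-builder object.builder add r1z3-192.168.0.3:6000/sdb 2000
  swift-ring-builder object.builder set_overload 0.1
  swift-ring-builder object.builder rebalance

With overload at 0 the builder places strictly by weight; a small overload
lets the smaller zones take slightly more than their weight-based share so
fewer partitions end up with multiple replicas in the big zone.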

If there is ever a case where some partitions have 2 replicas in the one
zone, you'd find they live on different servers, or if there is only 1
server, on different disks. So the more failure domains you add, the more
durably your data is stored.

Regards,
Matt

On Mon, Feb 22, 2016 at 2:00 PM, Kirubakaran Kaliannan <
kirubak at zadarastorage.com> wrote:

>
>
> Hi,
>
> I have 3 zones, with different capacity in each. Say I have 4 x 1TB disks
>  (r0z1 - 1TB, r0z2 - 1TB, r0z3 - 2TB).
>
> The ring builder (rebalance code) keeps 1/4 of the partitions with all 3
> replicas in Zone-3. This is the current default behaviour of the rebalance code.
>
> This puts pressure on the storage user to increase the storage capacity
> evenly across the zones. Is this the correct understanding?
>
> If so, why have we chosen this approach? Can't we instead enforce zone-based
> partition placement (even though the partition share on Z1 and Z2 may be smaller than on Z3)?
>
> This would make sure we have 100% zone-level protection and no loss of data
> on a single zone failure.
>
> Thanks,
>
> -kiru