[Openstack] [swift] Storage node failure modes

John Dickinson me at not.mn
Tue Jun 28 04:13:49 UTC 2016


A few years ago, I gave this talk at LCA, which covers a lot of these details.

https://www.youtube.com/watch?v=_sUvfGKhaMo&list=PLIr7I80Leee5NpoYTd9ffNvWq0pG18CN3&index=9

--John




On 27 Jun 2016, at 17:36, Mark Kirkwood wrote:

> Hi,
>
> I'm in the process of documenting failure modes (for ops documentation etc.). As I understand it, the intent is:
>
> - swift tries to ensure you always have the configured number of replicas (see the ring sketch below)
>
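> (For reference: the replica count is fixed when the ring is built, not per request. A minimal sketch of building a 3-replica object ring, assuming the standard swift-ring-builder tool and made-up IPs/devices:
>
>     # 2^10 partitions, 3 replicas, 1 hour minimum between partition moves
>     swift-ring-builder object.builder create 10 3 1
>     # add one device per node: r<region>z<zone>-<ip>:<port>/<device> <weight>
>     swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
>     swift-ring-builder object.builder add r2z1-10.0.0.2:6000/sdb1 100
>     swift-ring-builder object.builder rebalance
>
> The "3" in the create line is the replica count swift tries to maintain.)
>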
> In the case of missing or unmounted devices I'm seeing the expected behaviour, i.e.:
>
> - new object creation results in the configured number of replicas (some stored on handoff nodes)
> - existing objects are replicated to handoff nodes to restore the correct replica count (see the sketch after this list)
>
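> (To check where copies should land, swift-get-nodes prints the primary nodes for a given object plus the handoff nodes that stand in when a primary is unavailable. A sketch, with a hypothetical account/container/object:
>
>     swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject
>
> Primary devices are listed first, followed by entries marked [Handoff].)
>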
> In the case of a whole node (or a region) going down I'm *not* seeing analogous behaviour for *existing* objects, i.e. I am a replica down after shutting down one of my nodes and waiting a while.
>
> I am testing using swift 2.7 on a small cluster of VMs (4 nodes, 4 devices, 2 regions) - now it may be that my setup is just too trivial (or maybe I haven't waited long enough for swift to decide my storage node is really down). Any thoughts? I'd like to understand precisely what is supposed to happen when a node (and also an entire region) is unavailable.
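>
> (For anyone reproducing this: the object replicator runs in cycles, so missing copies are only acted on once a pass completes. A quick way to see whether a pass has actually run, assuming swift-recon is available on a node with access to the storage network:
>
>     # per-node replication stats: time of last pass, duration, success/failure
>     swift-recon object --replication
>
> The cycle frequency is governed by the replicator's run interval (run_pause, or interval in newer releases) in the [object-replicator] section of object-server.conf, so it's worth waiting at least one full cycle before drawing conclusions.)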
>
> Cheers
>
> Mark
>