[openstack-dev] Re: [all] Outcome of distributed lock manager discussion @ the summit

Clint Byrum clint at fewbar.com
Thu Nov 5 19:34:25 UTC 2015


Excerpts from Chris Dent's message of 2015-11-05 00:08:16 -0800:
> On Thu, 5 Nov 2015, Robert Collins wrote:
> 
> > In the session we were told that zookeeper is already used in CI jobs
> > for ceilometer (was this wrong?) and that's why we figured it made a
> > sane default for devstack.
> 
> For clarity: What ceilometer (actually gnocchi) is doing is using tooz
> in CI (gate-ceilometer-dsvm-integration). And for now it is using
> redis as that was "simple".
> 
> Outside of CI it is possible to deploy ceilometer, aodh and gnocchi to
> use tooz for coordinating group partitioning and shared locks in
> active-active HA setups. Again, the standard deploy for that has been
> to use redis because of its availability. It's fairly well understood
> that zookeeper would be more correct, but there are packaging concerns.
> 
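
(Aside, for anyone who hasn't used tooz: the backend is selected purely
by connection URL, which is part of why redis is such an easy default to
reach for. A minimal sketch, assuming the redis backend on localhost and
a made-up member id, not code lifted from ceilometer itself:)

    from tooz import coordination

    # Swapping redis:// for zookeeper:// is, at this level, the whole
    # migration; the coordination API stays the same.
    coordinator = coordination.get_coordinator(
        'redis://localhost:6379', b'worker-1')
    coordinator.start()

    # A shared lock around a critical section, as used for the
    # active-active coordination described above.
    lock = coordinator.get_lock(b'example-resource')
    with lock:
        pass  # the protected work would happen here

    coordinator.stop()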

Redis jettisons all consistency during a partition... It's really ugly:

https://aphyr.com/posts/307-call-me-maybe-redis-redux

    These results are catastrophic. In a partition which lasted for
    roughly 45% of the test, 45% of acknowledged writes were thrown
    away. To add insult to injury, Redis preserved all the failed writes
    in place of the successful ones.

So... yeah. I actually think it is dangerous to have Redis in tooz at
all. One partition and you have split brain, locks granted to multiple
holders, and exactly the chaos you were trying to prevent by using a
lock in the first place. If you're using Redis, the only sane thing to
do is to shut everything down when there's a partition (which is not
easy to detect!).
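
To make the failure mode concrete: a single-instance Redis lock is just
SET with NX and a TTL, roughly the sketch below (redis-py, made-up key
names). Nothing in it can fence out a second holder once a failover or
partition loses the key, which is exactly the multiple-grant scenario
above.

    import uuid

    import redis

    client = redis.StrictRedis(host='localhost', port=6379)

    def acquire(name, ttl_ms=10000):
        # SET key token NX PX ttl: succeeds only if the key is absent.
        token = uuid.uuid4().hex.encode()
        if client.set(name, token, nx=True, px=ttl_ms):
            return token
        return None

    def release(name, token):
        # Check-then-delete is already racy, and after a partition a
        # promoted replica may never have seen the key at all, so some
        # other node can "acquire" the lock while we still hold it.
        if client.get(name) == token:
            client.delete(name)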

Contrast this with Zookeeper, etcd, and Consul:

https://aphyr.com/posts/291-call-me-maybe-zookeeper
https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

Even though etcd and Consul were shown to suffer from stale reads, both
have since added options to their APIs that allow fully consistent reads
(presumably at a performance penalty).
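
For reference, both of those knobs are plain request options; a sketch
against the default ports (etcd v2 on 2379, Consul on 8500, key name
made up):

    import requests

    # etcd v2: quorum=true routes the read through Raft instead of
    # serving it from the local member's possibly-stale store.
    requests.get('http://127.0.0.1:2379/v2/keys/locks/example',
                 params={'quorum': 'true'})

    # Consul KV: ?consistent selects the strongly consistent mode
    # (leader plus quorum check) instead of the default, which can
    # return stale data around a partition.
    requests.get('http://127.0.0.1:8500/v1/kv/locks/example?consistent')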


