[openstack-dev] [Keystone] CockroachDB for Keystone Multi-master

Clint Byrum clint at fewbar.com
Wed May 31 06:14:42 UTC 2017


Excerpts from Jay Pipes's message of 2017-05-30 21:06:59 -0400:
> On 05/30/2017 05:07 PM, Clint Byrum wrote:
> > Excerpts from Jay Pipes's message of 2017-05-30 14:52:01 -0400:
> >> Sorry for the delay in getting back on this... comments inline.
> >>
> >> On 05/18/2017 06:13 PM, Adrian Turjak wrote:
> >>> Hello fellow OpenStackers,
> >>>
> >>> For the last while I've been looking at options for multi-region,
> >>> multi-master Keystone, as well as multi-master for other services I've
> >>> been developing, and one thing that always came up was that there
> >>> aren't many good options for a true multi-master backend.
> >>
> >> Not sure whether you've looked into Galera? We had a geo-distributed,
> >> 12-site Galera cluster WAN-replicating our Keystone assignment/identity
> >> information. Worked a charm for us at AT&T. Much easier to administer
> >> than master-slave replication topologies, and the performance (yes,
> >> even over WAN links) of the wsrep replication was excellent. And yes,
> >> I'm aware Galera doesn't have complete snapshot isolation support, but
> >> for Keystone's workloads (heavy, heavy read, very little write) it is
> >> indeed ideal.
> >>
> > 
> > This has not been my experience.
> > 
> > We had a 3-site, 9-node global cluster, and it was _extremely_ sensitive
> > to latency. We'd lose even the ability to read whenever we had a latency
> > storm, due to quorum problems.
> > 
> > Our sites were London, Dallas, and Sydney, so it was pretty common for
> > there to be latency between any of them.
> > 
> > I lost track of it after some reorgs, but I believe the solution was
> > to just have a single-site, 3-node Galera cluster for writes, and then
> > use async replication for reads. We even helped land patches in Keystone
> > to allow a split read/write host configuration.
> 
> Interesting, thanks for the info. Can I ask, were you using the Galera
> cluster for read-heavy data like Keystone identity/assignment storage?
> Or did you have write-heavy data mixed in (like Keystone's old UUID
> token storage...)?
> 
> It should be noted that CockroachDB's documentation specifically calls
> out that it is extremely sensitive to latency due to the way it measures
> clock skew... so it might not be suitable for WAN-separated clusters?
> 

That particular Galera cluster was for Keystone only, and we were using
Fernet tokens.

Revocation events were a constant but manageable source of writes. I
believe some optimizations were made to reduce the frequency of the
events, but that was after we had worked around the problems they
created. Using async replication simply meant that we were accepting the
replication lag window as a period during which a revocation event might
not yet apply. I don't know that we ever got hard numbers, but with the
write data we had, we speculated the worst case would be that you'd
revoke a token in Dallas, and Sydney or London might keep accepting that
token for however long a latency storm lasted plus the recovery time for
applying the backlog. At worst, a few minutes.
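
(If you wanted to put a number on that window, something like the sketch
below against a read replica would do it. This is just an illustration,
not something we actually ran; the host, credentials, and threshold are
made up.)

    # Rough sketch: poll a replica's lag to bound the window in which a
    # freshly revoked token could still validate at a remote site.
    import pymysql

    LAG_ALERT_SECONDS = 120  # placeholder threshold

    conn = pymysql.connect(host="keystone-replica.syd.example.com",
                           user="monitor", password="secret",
                           cursorclass=pymysql.cursors.DictCursor)
    with conn.cursor() as cur:
        # On MySQL >= 8.0.22 this statement is "SHOW REPLICA STATUS".
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
    conn.close()

    lag = status["Seconds_Behind_Master"] if status else None
    if lag is None:
        print("replication not running (or this host is not a replica)")
    elif lag > LAG_ALERT_SECONDS:
        print("replica is %s seconds behind; recent revocations may not "
              "have applied here yet" % lag)
    else:
        print("replica lag: %s seconds" % lag)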

Either way, it should be much simpler to manage slave lag than to deal
with a Galera cluster that won't accept any writes at all because it
can't get quorum.
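
For the curious, the split read/write idea boils down to something like
the toy sketch below. This is purely illustrative Python/SQLAlchemy, not
Keystone's actual code path (in Keystone, if memory serves, it's driven
by the [database] connection and slave_connection options rather than
hand-built engines), and the URLs and schema here are made up.

    # Illustrative only: writes cross the WAN to the single-site Galera
    # cluster, while reads stay on the local async replica and accept the
    # lag window. URLs and the revocation table are simplified stand-ins.
    from sqlalchemy import create_engine, text

    writer = create_engine("mysql+pymysql://keystone:pw@galera-dallas/keystone")
    reader = create_engine("mysql+pymysql://keystone:pw@replica-sydney/keystone")

    def revoke_token(audit_id):
        # Revocations are writes, so they go to the primary site.
        with writer.begin() as conn:
            conn.execute(
                text("INSERT INTO revocation_event (audit_id) VALUES (:a)"),
                {"a": audit_id})

    def is_revoked(audit_id):
        # Token validation is read-heavy and stays local; it may miss
        # events newer than the current replication lag.
        with reader.connect() as conn:
            row = conn.execute(
                text("SELECT 1 FROM revocation_event WHERE audit_id = :a"),
                {"a": audit_id}).first()
        return row is not None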


