[openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master
lbragstad at gmail.com
Thu Jun 1 21:44:39 UTC 2017
On Thu, Jun 1, 2017 at 3:46 PM, Andrey Grebennikov <
agrebennikov at mirantis.com> wrote:
> We had a very similar conversation multiple times with Keystone cores
> (multi-site Keystone).
> Geo-replicated Galera was suggested first and it was immediately declined
> (one of the reasons being the risk of complete corruption of the Keystone
> DB everywhere if a table is accidentally corrupted in one site) by me as
> well as current cores.
> Right after that I was told many times that federation is the only right
> way to go nowadays.
After doing some digging, I found the original specification and the
meeting agenda where we talked about the alternative.
If I recall correctly, the proposal (being able to specify project IDs at
creation time) was driven by not wanting to replicate all of keystone's
backends in multi-region deployments, but still
wanting to validate tokens across regions. Today, if you have a region in
Seattle and a region in Sydney, a token obtained from a keystone in Seattle
and validated in Sydney would require both regions to share identity,
resource, and assignment backends (among others depending on what kind of
token it is). The request in the specification would allow only the
identity and role backends to be replicated but the project backend in each
region wouldn't need to be synced or replicated. Instead, operators could
create projects with matching IDs in each region in order for tokens
generated from one to be validated in the other. Most folks involved in the
meeting considered this behavior for project IDs to be a slippery-slope.
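To make the matching-ID idea concrete, here is a toy sketch (illustrative
only, not Keystone's implementation; the region names, project ID, and
function names are all made up for the example): a token minted in one
region validates in another only if the local project backend holds a
project with the same ID.

```python
# Toy model of the matching-ID proposal. Each region has its own,
# unsynchronized project backend; operators create projects with the
# same ID in both regions by hand.
regions = {
    "seattle": {"projects": {"d3adb33f"}},
    "sydney":  {"projects": {"d3adb33f"}},  # same ID created here on purpose
}

def issue_token(region, project_id):
    # A real token would also carry user, roles, expiry, audit IDs, etc.
    return {"issued_in": region, "project_id": project_id}

def validate_token(region, token):
    # Validation succeeds only if this region's project backend
    # knows the project ID carried by the token.
    return token["project_id"] in regions[region]["projects"]

token = issue_token("seattle", "d3adb33f")
print(validate_token("sydney", token))  # True, because the IDs match
```

If the operator forgets to create the matching project in one region,
validation there simply fails, which is part of why this was seen as a
slippery slope.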
Federation was brought up because sharing identity information globally,
but not project or role information globally sounded like federation (e.g.
having all your user information in an IdP somewhere and setting up each
region's keystone to federate to the IdP). The group seemed eager to expose
gaps in the federation implementation that prevented that case and address
them.
Hopefully that helps capture some of the context (feel free to fill in gaps
if I missed any).
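For concreteness, the federated setup described above hinges on keystone's
mapping rules, which translate IdP assertions into local users and groups.
A minimal sketch (the group and domain names here are placeholders, and the
remote attribute depends on the IdP/protocol in use):

```json
{
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"name": "federated_users",
                           "domain": {"name": "Default"}}}
            ],
            "remote": [
                {"type": "REMOTE_USER"}
            ]
        }
    ]
}
```

Each region's keystone would carry a mapping like this pointing at the same
IdP, so identity lives in one place while projects and assignments stay
local.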
> Is this statement still valid?
> On Thu, Jun 1, 2017 at 12:51 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> On 05/31/2017 11:06 PM, Mike Bayer wrote:
>>> I'd also throw in, there's lots of versions of Galera with different
>>> bugfixes / improvements as we go along, not to mention configuration
>>> settings.... if Jay observes it working great on a distributed cluster and
>>> Clint observes it working terribly, it could be that these were not the
>>> same Galera versions being used.
>> Agreed. The version of Galera we were using IIRC was Percona XtraDB
>> Cluster 5.6. And, remember that the wsrep_provider_options do make a big
>> difference, especially in WAN-replicated setups.
>> We also increased the tolerance settings for network disruption so that
>> the cluster operated without hiccups over the WAN. I think the
>> wsrep_provider_options setting was evs.inactive_timeout=PT30S,
>> evs.suspect_timeout=PT15S, and evs.join_retrans_period=PT1S.
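In my.cnf those options would look roughly like this (illustrative only;
the values are Jay's and should be tuned to your WAN links, and
wsrep_provider_options takes a single semicolon-separated string):

```ini
# Galera / Percona XtraDB Cluster node -- WAN-tolerant EVS timeouts
[mysqld]
wsrep_provider_options="evs.inactive_timeout=PT30S;evs.suspect_timeout=PT15S;evs.join_retrans_period=PT1S"
```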
>> Also, regardless of settings, if your network sucks, none of these
>> distributed databases are going to be fun to operate :)
>> At AT&T, we jumped through a lot of hoops to ensure multiple levels of
>> redundancy and high performance for the network links inside and between
>> datacenters. It really makes a huge difference when your network rocks.
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX