[Openstack-operators] Multi-site Keystone & Galera
Tim Bell
Tim.Bell at cern.ch
Mon Sep 28 17:17:13 UTC 2015
CERN do the same. The memcache functions in Keystone are very useful for scaling it up.
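If it helps, the relevant keystone.conf bits look roughly like this (a sketch only; the hostnames are placeholders and the exact backend name varies by release):

  [cache]
  enabled = true
  backend = dogpile.cache.memcached
  memcache_servers = memcache01:11211,memcache02:11211

  [token]
  caching = true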
Tim
From: Matt Fischer [mailto:matt at mattfischer.com]
Sent: 28 September 2015 18:51
To: Curtis <serverascode at gmail.com>
Cc: openstack-operators at lists.openstack.org; Jonathan Proulx <jon at jonproulx.com>
Subject: Re: [Openstack-operators] Multi-site Keystone & Galera
Yes. We have a separate DB cluster for global stuff like Keystone & Designate, and a regional cluster for things like nova/neutron etc.
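Concretely, that just means each service's [database] connection string points at the appropriate cluster's VIP, e.g. (VIP names and driver prefix are illustrative only):

  # keystone.conf, same value in every region
  [database]
  connection = mysql+pymysql://keystone:SECRET@global-db-vip/keystone

  # nova.conf, per region
  [database]
  connection = mysql+pymysql://nova:SECRET@region1-db-vip/nova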
On Mon, Sep 28, 2015 at 10:43 AM, Curtis <serverascode at gmail.com> wrote:
Hi,
For organizations with the keystone database shared across regions via
galera, do you just have keystone (and perhaps glance as was
suggested) in its own cluster that is multi-region, and the other
databases in a cluster that is only in one region (i.e. just local
to their region)? Or are you giving other services their own
database in the single multi-region cluster and thus replicating all
the databases? Or is there another solution?
Thanks,
Curtis.
On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx <jon at jonproulx.com> wrote:
> Thanks Jay & Matt,
>
> That's basically what I thought, so I'll keep thinking it :)
>
> We're not replicating glance DB because images will be stored in
> different local Ceph storage on each side so the images won't be
> directly available. We thought about moving back to a file back end
> and rsync'ing, but RBD gets us lots of fun things we want to keep
> (quick start, copy-on-write thin-cloned ephemeral storage, etc.), so we
> decided to live with making our users copy images around.
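>
> (For context, the per-site image backend is just the standard RBD store
> in glance-api.conf, roughly like this; the pool and user names are ours
> and purely illustrative:
>
>   [glance_store]
>   stores = rbd
>   default_store = rbd
>   rbd_store_pool = images
>   rbd_store_user = glance
>   rbd_store_ceph_conf = /etc/ceph/ceph.conf
>
> plus show_image_direct_url = True so the clones can come straight from
> the pool.)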
>
> -Jon
>
>
>
> On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
>>>
>>> Hi All,
>>>
>>> I'm pretty close to opening a second region in my cloud at a second
>>> physical location.
>>>
>>> The plan so far had been to share only Keystone between the regions
>>> (nova, glance, cinder, etc. would be distinct) and to implement this
>>> using MariaDB with Galera replication between sites, with each site
>>> having its own gmcast_segment to minimize the long-distance chatter,
>>> plus a 3rd site with a Galera arbitrator for the obvious reason.
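>>>
>>> Concretely, I'm picturing something like this in each site's MariaDB
>>> config (illustrative only, not tested yet; the other site would get
>>> segment=2):
>>>
>>>   [mysqld]
>>>   wsrep_provider_options = "gmcast.segment=1"
>>>
>>> with garbd at the third site joined to the same gcomm:// cluster
>>> address as a quorum-only (data-less) member.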
>>
>>
>> I would also strongly consider adding the Glance registry database to the
>> same cross-WAN Galera cluster. At AT&T, we had such a setup for Keystone and
>> Glance registry databases at 10+ deployment zones across 6+ datacenters
>> across the nation. Besides adjusting the latency timeout for the Galera
>> settings, we made no other modifications to our
>> internal-to-an-availability-zone Nova database Galera cluster settings.
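>>
>> For reference, those WAN-facing knobs live in wsrep_provider_options and
>> look something like this (example values only; tune them to your actual
>> round-trip times):
>>
>>   wsrep_provider_options = "evs.keepalive_period=PT3S; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.install_timeout=PT1M"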
>>
>> The Keystone and Glance registry databases have a virtually identical read
>> and write data access pattern: small record/row size, small number of
>> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations
>> on a small data set. This data access pattern is an ideal fit for a
>> WAN-replicated Galera cluster.
>>
>>> Today I was warned against using this in a multi-writer setup. I'd planned
>>> on one writer per physical location.
>>
>>
>> I don't know who warned you about this, but it's not an issue in the real
>> world. We ran in full multi-writer mode, with each deployment zone writing
>> to and reading from its nearest Galera cluster nodes. No issues.
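>>
>> One common way to express "nearest nodes" is a per-zone haproxy listener
>> along these lines (names and addresses invented; any TCP load balancer
>> works just as well):
>>
>>   listen galera-local
>>     bind 0.0.0.0:3306
>>     mode tcp
>>     # requires a 'haproxy_check' user on the database nodes
>>     option mysql-check user haproxy_check
>>     server local-db1  10.0.1.11:3306 check
>>     server remote-db1 10.1.1.11:3306 check backup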
>>
>> Best,
>> -jay
>>
>>> I had been under the impression this was the 'done thing' for
>>> geographically separate regions; was I wrong? Should I replicate just
>>> for DR and always pick a single possible remote write site?
>>>
>>> The site-to-site link is 2x10G (different physical paths); the short
>>> link averages 2.2ms latency (2.1ms low, 2.5ms high over 250 packets);
>>> the long link shouldn't be much longer, but isn't yet complete to test.
>>>
>>> -Jon
>>>
--
Twitter: @serverascode
Blog: serverascode.com
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators