[Openstack-operators] Multi-site Keystone & Galera
Federico Michele Facca
federico.facca at create-net.org
Mon Sep 28 17:35:21 UTC 2015
Considering that latency increases across datacenters, and that high latency
can easily lead to split-brain under heavy read/write load, data belonging to
services that are not global should never be replicated across sites using a
synchronous approach (an asynchronous one, for disaster recovery, may be good
enough!).
Indeed, as Tim said, one of the most important things when distributing
Keystone is leveraging memcached.
You are really dealing with two quite different data types within Keystone:
- users/projects/domains (quite static, i.e. you do not change tons of these
  every second, so mostly reads) -> a good fit for database persistence
- tokens (quite dynamic, lots of writes and reads) -> better handled by
  memcached.
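For anyone who wants to see that split concretely, here is a rough
keystone.conf sketch; the option and driver names move around between
releases and the memcached addresses are placeholders, so treat it as
something to check against your version's documentation rather than copy
verbatim:

    [cache]
    # cache layer for the static identity data (users/projects/domains)
    enabled = true
    backend = dogpile.cache.memcached
    backend_argument = url:192.0.2.10:11211

    [token]
    # cache token validation results
    caching = true
    # optionally keep the tokens themselves out of SQL entirely
    # (newer releases accept the short name "memcache")
    driver = keystone.token.persistence.backends.memcache.Token

    [memcache]
    # memcached servers used by the memcache token backend above
    servers = 192.0.2.10:11211,192.0.2.11:11211

With tokens kept out of SQL (or at least with validations cached), what is
left to replicate across sites is mostly the slow-changing identity data.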
An OPNFV group did an interesting analysis of multi-site identity management:
https://etherpad.opnfv.org/p/multisite_identity_management
I think most of the possible architectures are discussed there, with pros and cons.
Br,
Federico
--
Future Internet is closer than you think!
http://www.fiware.org
Official Mirantis partner for OpenStack Training
https://www.create-net.org/community/openstack-training
--
Dr. Federico M. Facca
CREATE-NET
Via alla Cascata 56/D
38123 Povo Trento (Italy)
P +39 0461 312471
M +39 334 6049758
E federico.facca at create-net.org
T @chicco785
W www.create-net.org
On Mon, Sep 28, 2015 at 7:17 PM, Tim Bell <Tim.Bell at cern.ch> wrote:
> CERN do the same…. The memcache functions on keystone are very useful for
> scaling it up.
>
>
>
> Tim
>
>
>
> From: Matt Fischer [mailto:matt at mattfischer.com]
> Sent: 28 September 2015 18:51
> To: Curtis <serverascode at gmail.com>
> Cc: openstack-operators at lists.openstack.org; Jonathan Proulx <jon at jonproulx.com>
> Subject: Re: [Openstack-operators] Multi-site Keystone & Galera
>
>
>
> Yes. We have a separate DB cluster for global stuff like Keystone &
> Designate, and a regional cluster for things like nova/neutron etc.
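
For anyone picturing how that is wired up: it boils down to each service's
[database] connection pointing at a different cluster. A sketch, with
made-up VIP hostnames and passwords:

    # keystone.conf -- points at the global, WAN-replicated Galera cluster
    [database]
    connection = mysql://keystone:KEYSTONE_DBPASS@global-db-vip/keystone

    # nova.conf -- points at the region-local Galera cluster
    [database]
    connection = mysql://nova:NOVA_DBPASS@region1-db-vip/nova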
>
>
>
> On Mon, Sep 28, 2015 at 10:43 AM, Curtis <serverascode at gmail.com> wrote:
>
> Hi,
>
> For organizations with the keystone database shared across regions via
> galera, do you just have keystone (and perhaps glance as was
> suggested) in its own cluster that is multi-region, and the other
> databases in a cluster that is only in one region (i.e. just local
> to their region)? Or are you giving other services their own
> database in the single multi-region cluster and thus replicating all
> the databases? Or is there another solution?
>
> Thanks,
> Curtis.
>
>
> On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx <jon at jonproulx.com> wrote:
> > Thanks Jay & Matt,
> >
> > That's basically what I thought, so I'll keep thinking it :)
> >
> > We're not replicating glance DB because images will be stored in
> > different local Ceph storage on each side so the images won't be
> > directly available. We thought about moving back to a file back end
> > and rsync'ing but RBD gets us lots of fun things we want to keep
> > (quick start, copy on write thin cloned ephemeral storage etc...) so
> > decided to live with making our users copy images around.
> >
> > -Jon
> >
> >
> >
> > On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> >> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I'm pretty close to opening a second region in my cloud at a second
> >>> physical location.
> >>>
> >>> The plan so far had been to only share keystone between the regions
> >>> (nova, glance, cinder etc would be distinct) and implement this by
> >>> using MariaDB with Galera replication between sites, with each site
> >>> having its own gmcast_segment to minimize the long-distance chatter,
> >>> plus a 3rd site with a Galera arbitrator for the obvious reason.
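
For illustration, that plan might look roughly like the following on a
site-A data node; the hostnames are placeholders, site B would set
gmcast.segment=2, and the arbitrator host carries no data:

    # /etc/mysql/my.cnf (MariaDB + Galera), site A data node
    [mysqld]
    wsrep_provider         = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name     = keystone_global
    wsrep_cluster_address  = gcomm://a-db1,a-db2,b-db1,b-db2
    # segment tag keeps replication traffic local where possible
    wsrep_provider_options = "gmcast.segment=1"

    # third site: quorum-only arbitrator (garbd), no data stored there
    garbd --group keystone_global \
          --address gcomm://a-db1:4567,b-db1:4567 \
          --options "gmcast.segment=3" --daemon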
> >>
> >>
> >> I would also strongly consider adding the Glance registry database to the
> >> same cross-WAN Galera cluster. At AT&T, we had such a setup for Keystone and
> >> Glance registry databases at 10+ deployment zones across 6+ datacenters
> >> across the nation. Besides adjusting the latency timeout for the Galera
> >> settings, we made no other modifications to our
> >> internal-to-an-availability-zone Nova database Galera cluster settings.
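
The latency adjustments Jay mentions live in wsrep_provider_options;
something along these lines, where the values are purely illustrative and
need to be sized against the measured WAN round-trip time rather than copied:

    # raise the evs.* group-communication timeouts for WAN latencies
    wsrep_provider_options = "gmcast.segment=1; evs.keepalive_period=PT3S; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.install_timeout=PT1M"

The defaults are tuned for LAN latencies, so WAN deployments generally raise
the evs.* timeouts to avoid nodes being falsely declared dead during brief
latency spikes.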
> >>
> >> The Keystone and Glance registry databases have a virtually identical read
> >> and write data access pattern: small record/row size, small number of
> >> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations
> >> on a small data set. This data access pattern is an ideal fit for a
> >> WAN-replicated Galera cluster.
> >>
> >>> Today I was warned against using this in a multi-writer setup. I'd planned
> >>> on one writer per physical location.
> >>
> >>
> >> I don't know who warned you about this, but it's not an issue in the real
> >> world. We ran in full multi-writer mode, with each deployment zone writing
> >> to and reading from its nearest Galera cluster nodes. No issues.
> >>
> >> Best,
> >> -jay
> >>
> >>> I had been under the impression this was the 'done thing' for
> >>> geographically separate regions; was I wrong? Should I replicate just
> >>> for DR and always pick a single possible remote write site?
> >>>
> >>> The site-to-site link is 2x10G (different physical paths); the short link
> >>> averages 2.2ms latency (2.1ms low, 2.5ms high over 250 packets). The long
> >>> link shouldn't be much longer, but it isn't complete yet so we can't test it.
> >>>
> >>> -Jon
> >>>
> >>>
> >>
> >
>
>
> --
> Twitter: @serverascode
> Blog: serverascode.com
>
>
>
>
>
>
>