<div dir="ltr"><div>Considering that latency increases across data centers, and that such latency can easily cause split-brain situations under heavy read/write load, data of services that are not global should never be replicated across sites using a synchronous approach (an asynchronous one, for disaster recovery, may be good enough!)<br></div><div><br></div><div>Indeed, as Tim said, one of the most important things regarding Keystone distribution is leveraging memcached.<br></div><div><br></div><div><div>Essentially, you deal with two totally different data types within Keystone:</div><div>- users/projects/domains (quite static, i.e. you do not change tons of these every second, so mostly reads) -> a perfect fit for database persistence</div><div>- tokens (quite dynamic, with lots of writes and reads) -> better managed by memcached.</div></div><div><br></div>An OPNFV group did an interesting analysis of multisite identity management:<div><br><div><a href="https://etherpad.opnfv.org/p/multisite_identity_management" target="_blank">https://etherpad.opnfv.org/p/multisite_identity_management</a><br></div><div><br></div><div>I think most of the possible architectures are discussed there, with pros and cons.</div></div><div><br></div><div class="gmail_extra"><div><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">Br,</span></div><div><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">Federico</span></div><div dir="ltr"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><br></span></div><div dir="ltr"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">--</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">Future Internet is closer than you think!</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><a href="http://www.fiware.org" 
target="_blank">http://www.fiware.org</a></span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">Official Mirantis partner for OpenStack Training</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><a href="https://www.create-net.org/community/openstack-training" target="_blank">https://www.create-net.org/community/openstack-training</a></span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">-- </span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">Dr. Federico M. Facca</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">CREATE-NET</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">Via alla Cascata 56/D</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">38123 Povo Trento (Italy)</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">P </span><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><a href="tel:%2B39%200461%20312471" value="+390461312471" target="_blank">+39 0461 312471</a></span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span 
style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">M <a href="tel:%2B39%20334%206049758" value="+393346049758" target="_blank">+39 334 6049758</a></span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">E <a href="mailto:federico.facca@create-net.org" target="_blank">federico.facca@create-net.org</a></span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">T @chicco785</span><br style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px"><span style="color:rgb(0,0,0);font-family:Helvetica;font-size:12px">W <a href="http://www.create-net.org" target="_blank">www.create-net.org</a></span></div></div></div></div></div></div></div></div></div>
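<div><br></div><div>The static-vs-dynamic split described above maps directly onto Keystone's caching configuration. As a minimal sketch, a keystone.conf fragment along these lines keeps the high-churn token data in memcached while the mostly-read identity data stays in SQL; option names here correspond to Kilo/Liberty-era Keystone and should be verified against your release:</div><div><br></div>

```ini
# keystone.conf fragment -- illustrative; check option names for your release
[cache]
# dogpile.cache layer for the mostly-read identity data (users/projects/domains)
enabled = true
backend = dogpile.cache.memcached
backend_argument = url:127.0.0.1:11211

[token]
# keep the write-heavy token data out of the SQL database entirely
driver = keystone.token.persistence.backends.memcache.Token
caching = true
```

<div><br></div><div>With a setup like this, only the slow-changing data ever needs to cross the WAN through Galera; token traffic stays local to each site's memcached pool.</div>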
<br><div class="gmail_quote">On Mon, Sep 28, 2015 at 7:17 PM, Tim Bell <span dir="ltr"><<a href="mailto:Tim.Bell@cern.ch" target="_blank">Tim.Bell@cern.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div lang="EN-GB" link="blue" vlink="purple"><div><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">CERN do the same…. The memcache functions on keystone are very useful for scaling it up.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)">Tim<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(31,73,125)"><u></u> <u></u></span></p><div style="border-style:none none none solid;border-left-color:blue;border-left-width:1.5pt;padding:0cm 0cm 0cm 4pt"><div><div style="border-style:solid none none;border-top-color:rgb(225,225,225);border-top-width:1pt;padding:3pt 0cm 0cm"><p class="MsoNormal"><b><span lang="EN-US" style="font-size:11pt;font-family:Calibri,sans-serif">From:</span></b><span lang="EN-US" style="font-size:11pt;font-family:Calibri,sans-serif"> Matt Fischer [mailto:<a href="mailto:matt@mattfischer.com" target="_blank">matt@mattfischer.com</a>] <br><b>Sent:</b> 28 September 2015 18:51<br><b>To:</b> Curtis <<a href="mailto:serverascode@gmail.com" target="_blank">serverascode@gmail.com</a>><br><b>Cc:</b> <a href="mailto:openstack-operators@lists.openstack.org" target="_blank">openstack-operators@lists.openstack.org</a>; Jonathan Proulx <<a href="mailto:jon@jonproulx.com" target="_blank">jon@jonproulx.com</a>><br><b>Subject:</b> Re: [Openstack-operators] Multi-site Keystone & 
Galera<u></u><u></u></span></p></div></div><div><div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">Yes. We have a separate DB cluster for global stuff like Keystone & Designate, and a regional cluster for things like nova/neutron etc.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">On Mon, Sep 28, 2015 at 10:43 AM, Curtis <<a href="mailto:serverascode@gmail.com" target="_blank">serverascode@gmail.com</a>> wrote:<u></u><u></u></p><blockquote style="border-style:none none none solid;border-left-color:rgb(204,204,204);border-left-width:1pt;padding:0cm 0cm 0cm 6pt;margin-left:4.8pt;margin-right:0cm"><p class="MsoNormal">Hi,<br><br>For organizations with the keystone database shared across regions via<br>galera, do you just have keystone (and perhaps glance as was<br>suggested) in its own cluster that is multi-region, and the other<br>databases in a cluster that is only in one region (ie. just local<br>to their region)? Or are you giving other services their own<br>database in the single multi-region cluster and thus replicating all<br>the databases? Or is there another solution?<br><br>Thanks,<br>Curtis.<u></u><u></u></p><div><div><p class="MsoNormal" style="margin-bottom:12pt"><br>On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx <<a href="mailto:jon@jonproulx.com" target="_blank">jon@jonproulx.com</a>> wrote:<br>> Thanks Jay & Matt,<br>><br>> That's basically what I thought, so I'll keep thinking it :)<br>><br>> We're not replicating glance DB because images will be stored in<br>> different local Ceph storage on each side so the images won't be<br>> directly available. We thought about moving back to a file back end<br>> and rsync'ing but RBD gets us lots of fun things we want to keep<br>> (quick start, copy on write thin cloned ephemeral storage etc...) 
so<br>> decided to live with making our users copy images around.<br>><br>> -Jon<br>><br>><br>><br>> On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes <<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>> wrote:<br>>> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:<br>>>><br>>>> Hi All,<br>>>><br>>>> I'm pretty close to opening a second region in my cloud at a second<br>>>> physical location.<br>>>><br>>>> The plan so far had been to only share keystone between the regions<br>>>> (nova, glance, cinder etc would be distinct) and implement this by<br>>>> using MariaDB with galera replication between sites with each site<br>>>> having its own gmcast_segment to minimize the long distance chatter<br>>>> plus a 3rd site with a galera arbitrator for the obvious reason.<br>>><br>>><br>>> I would also strongly consider adding the Glance registry database to the<br>>> same cross-WAN Galera cluster. At AT&T, we had such a setup for Keystone and<br>>> Glance registry databases at 10+ deployment zones across 6+ datacenters<br>>> across the nation. Besides adjusting the latency timeout for the Galera<br>>> settings, we made no other modifications to our<br>>> internal-to-an-availability-zone Nova database Galera cluster settings.<br>>><br>>> The Keystone and Glance registry databases have a virtually identical read<br>>> and write data access pattern: small record/row size, small number of<br>>> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations<br>>> on a small data set. This data access pattern is an ideal fit for a<br>>> WAN-replicated Galera cluster.<br>>><br>>>> Today I was warned against using this in a multi writer setup. I'd planned<br>>>> on one writer per physical location.<br>>><br>>><br>>> I don't know who warned you about this, but it's not an issue in the real<br>>> world. We ran in full multi-writer mode, with each deployment zone writing<br>>> to and reading from its nearest Galera cluster nodes. 
No issues.<br>>><br>>> Best,<br>>> -jay<br>>><br>>>> I had been under the impression this was the 'done thing' for<br>>>> geographically separate regions, was I wrong? Should I replicate just<br>>>> for DR and always pick a single possible remote write site?<br>>>><br>>>> site to site link is 2x10G (different physical paths), short link is<br>>>> 2.2ms average latency (2.1ms low, 2.5ms high over 250 packets) long<br>>>> link shouldn't be much longer but isn't yet complete to test.<br>>>><br>>>> -Jon<br>>>><br>>>> _______________________________________________<br>>>> OpenStack-operators mailing list<br>>>> <a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>>>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>>>><br>>><br>>> _______________________________________________<br>>> OpenStack-operators mailing list<br>>> <a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>><br>> _______________________________________________<br>> OpenStack-operators mailing list<br>> <a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br><br><br><u></u><u></u></p></div></div><p class="MsoNormal"><span><span style="color:rgb(136,136,136)">--</span></span><span style="color:rgb(136,136,136)"><br><span>Twitter: @serverascode</span><br><span>Blog: <a href="http://serverascode.com" 
target="_blank">serverascode.com</a></span></span><u></u><u></u></p><div><div><p class="MsoNormal"><br>_______________________________________________<br>OpenStack-operators mailing list<br><a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><u></u><u></u></p></div></div></blockquote></div><p class="MsoNormal"><u></u> <u></u></p></div></div></div></div></div></div><br>_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br></blockquote></div><br></div></div>
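<div><br></div><div>For reference, the per-site gmcast segment layout Jonathan describes (plus Jay's note about raising the latency-related timeouts for the WAN) might look roughly like the following on one Galera node. This is a sketch with illustrative host names and timeout values, not a tested recommendation; check the wsrep option names against your Galera version:</div><div><br></div>

```ini
# galera.cnf fragment on a node in site B -- illustrative values only
[mysqld]
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = global_keystone
# all database nodes from both sites (hypothetical host names)
wsrep_cluster_address = gcomm://db1.siteA,db2.siteA,db1.siteB,db2.siteB
# one gmcast.segment id per physical location: intra-site replication traffic
# stays local and a single node per segment relays writesets over the WAN;
# evs.* timeouts raised above their defaults to tolerate cross-site latency
wsrep_provider_options = "gmcast.segment=2; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.keepalive_period=PT3S"
```

<div><br></div><div>The third site would run garbd (the Galera arbitrator) joined to the same cluster name, providing quorum without storing any data, which is the "obvious reason" mentioned in the thread.</div>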