<div dir="ltr">Just to add my $0.02: we run in multiple sites as well, using regions to do it. Cells have a lot going for them at this point, but we felt they weren't there yet, and we don't have the resources to make our own changes to them the way a few other places do. <div><br></div><div>Given that, the one hard requirement we set was that items such as tenant and user IDs remain the same across sites. That allows us to do show-back reporting, and it makes things easier on the user base when they want to deploy from one region to another. To meet that requirement, we used Galera in the same manner many others have mentioned, and deployed Keystone pointing at that Galera DB. It is the only database replicated across sites; everything else, such as Nova and Neutron, stays within its own location. </div><div><br></div><div>The only really confusing piece for our users is the dashboard. When you first open it, there is a dropdown to select a region. Many users think that choice sends them to a particular location, so that location's information will show up. In reality it selects which region you authenticate against; once you are in the dashboard, you pick which project you want to see. That has been a major point of confusion, and our likely fix is simply to rename that text. 
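For context, the dropdown in question is driven by Horizon's AVAILABLE_REGIONS setting. A minimal sketch of a two-site setup sharing one replicated Keystone — host names and labels here are invented for illustration, and relabeling the entries is one way to make clear it is an authentication-endpoint choice:

```python
# Hypothetical excerpt from Horizon's local_settings.py. Host names and
# labels are invented; each URL is a per-site Keystone endpoint backed by
# the same Galera-replicated database, so credentials work against either.
AVAILABLE_REGIONS = [
    ('http://keystone.site-a.example.net:5000/v2.0', 'Auth: Site A'),
    ('http://keystone.site-b.example.net:5000/v2.0', 'Auth: Site B'),
]
```

Because both endpoints share the same tenant and user IDs, which entry a user picks only changes where they authenticate, not what they own.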
</div><div><br></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 5, 2015 at 11:46 AM, Clayton O'Neill <span dir="ltr"><<a href="mailto:clayton@oneill.net" target="_blank">clayton@oneill.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span class=""><span style="font-size:13px">On Tue, May 5, 2015 at 11:33 AM, Curtis <span dir="ltr"><<a href="mailto:serverascode@gmail.com" target="_blank">serverascode@gmail.com</a>></span> wrote:<br></span></span><div class="gmail_quote" style="font-size:13px"><span class=""><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Do people have any comments or strategies on dealing with Galera<br>replication across the WAN using regions? Seems like something to try<br>to avoid if possible, though might not be possible. Any thoughts on<br>that?<br></blockquote><div><br></div></span></span><div>We're doing this with good luck. A few things I'd recommend being aware of:</div><div><br></div><div>Set gmcast.segment (via wsrep_provider_options) so that each site is in a separate segment. That makes Galera smarter about routing replication traffic and choosing state-transfer donors.</div><div><br></div><div>Look at the timers and tunables in Galera and make sure they make sense for your network. We've got lots of bandwidth and lowish latency (37ms), so the defaults have worked pretty well for us.</div><div><br></div><div>Make sure that when you do provisioning in one site, you don't have CM tools in the other site breaking things. We ran into issues during our first deploy like this where Puppet was making a change to a user in one site, and Puppet in the other site reverted the change almost immediately. 
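Setting the segment per site is done through Galera's wsrep_provider_options. A hedged sketch of what a node in the second site's MySQL config might carry — the file path and segment number are illustrative:

```ini
# Hypothetical /etc/mysql/conf.d/galera.cnf fragment for a node in site B.
# gmcast.segment groups nodes so that cross-segment (WAN) traffic is
# minimized; every node in the same DC should share one segment number.
[mysqld]
wsrep_provider_options = "gmcast.segment=2"
```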
You may have to tweak your deployment process to deal with that sort of thing.</div><div><br></div><div>Make sure you're running Galera or Galera Arbitrator in enough sites to maintain quorum if you have issues. We run 3 nodes in one DC, and 3 nodes in another DC for Horizon, Keystone and Designate. We run a Galera arbitrator in a third DC to settle ties.</div><div><br></div><div>Lastly, the obvious one is just to stay up to date on patches. Galera is pretty stable, but we have run into bugs that we had to get fixes for.</div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 5, 2015 at 11:33 AM, Curtis <span dir="ltr"><<a href="mailto:serverascode@gmail.com" target="_blank">serverascode@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Do people have any comments or strategies on dealing with Galera<br>
replication across the WAN using regions? Seems like something to try<br>
to avoid if possible, though might not be possible. Any thoughts on<br>
that?<br>
<br>
Thanks,<br>
Curtis.<br>
<div><div><br>
On Mon, May 4, 2015 at 3:11 PM, Jesse Keating <<a href="mailto:jlk@bluebox.net" target="_blank">jlk@bluebox.net</a>> wrote:<br>
> I agree with Subbu. You'll want that to be a region so that the control<br>
> plane is mostly contained. Only Keystone (and swift if you have that) would<br>
> be doing lots of site to site communication to keep databases in sync.<br>
><br>
> <a href="http://docs.openstack.org/arch-design/content/multi_site.html" target="_blank">http://docs.openstack.org/arch-design/content/multi_site.html</a> is a good read<br>
> on the topic.<br>
><br>
><br>
> - jlk<br>
><br>
> On Mon, May 4, 2015 at 1:58 PM, Allamaraju, Subbu <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>> wrote:<br>
>><br>
>> I suggest building a new AZ (“region” in OpenStack parlance) in the new<br>
>> location. In general I would avoid setting up control plane to operate<br>
>> across multiple facilities unless the cloud is very large.<br>
>><br>
>> > On May 4, 2015, at 1:40 PM, Jonathan Proulx <<a href="mailto:jon@jonproulx.com" target="_blank">jon@jonproulx.com</a>> wrote:<br>
>> ><br>
>> > Hi All,<br>
>> ><br>
>> > We're about to expand our OpenStack Cloud to a second datacenter.<br>
>> > Anyone one have opinions they'd like to share as to what I would and<br>
>> > should be worrying about or how to structure this? Should I be<br>
>> > thinking cells or regions (or maybe both)? Any obvious or not so<br>
>> > obvious pitfalls I should try to avoid?<br>
>> ><br>
>> > Current scale is about 75 hypervisors. Running juno on Ubuntu 14.04<br>
>> > using Ceph for volume storage, ephemeral block devices, and image<br>
>> > storage (as well as object store). Bulk data storage for most (but by<br>
>> > no means all) of our workloads is at the current location (not that<br>
>> > that matters I suppose).<br>
>> ><br>
>> > Second location is about 150km away and we'll have 10G (at least)<br>
>> > between sites. The expansion will be approximately the same size as<br>
>> > the existing cloud maybe slightly larger and given site capacities the<br>
> new location is also more likely to be where any future growth goes.<br>
>> ><br>
>> > Thanks,<br>
>> > -Jon<br>
>> ><br>
>> > _______________________________________________<br>
>> > OpenStack-operators mailing list<br>
>> > <a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
>> > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
>><br>
>><br>
>> _______________________________________________<br>
>> OpenStack-operators mailing list<br>
>> <a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
><br>
><br>
><br>
> _______________________________________________<br>
> OpenStack-operators mailing list<br>
> <a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
><br>
<br>
<br>
<br>
</div></div><span><font color="#888888">--<br>
Twitter: @serverascode<br>
Blog: <a href="http://serverascode.com" target="_blank">serverascode.com</a><br>
</font></span><div><div><br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
</div></div></blockquote></div><br></div>
</div></div><br>_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br></blockquote></div><br></div>