<div dir="ltr">Hi Jon,<div class="gmail_extra"><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">We're about to expand our OpenStack Cloud to a second datacenter.<br></blockquote><div><br></div><div>Congratulations! :)</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

> Anyone have opinions they'd like to share as to what I should be
> worrying about or how to structure this?

What services will be shared between the two locations? Keystone with
database replication is usually quite easy, and Glance with some type of
file sync is also easy.

Also think about network connectivity. Will the new location have a local
gateway to the internet, or will all traffic come back through the
original location in order to get out? That's outside of OpenStack and
more of a general network/sysadmin thing, but it will determine how you
handle OpenStack outages when a network outage happens.
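
For example, once the second site's control services are running, sharing
Keystone mostly comes down to registering the new site's endpoints under
a second region in the shared catalog. A minimal sketch using
python-keystoneclient's v3 API -- the region names, URLs, and credentials
below are all made-up placeholders:

# Register a second region and its compute endpoints in the shared
# Keystone catalog. Everything here (URLs, names, password) is a
# placeholder -- adjust for your deployment.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(
    auth_url='https://cloud.example.com:5000/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
keystone = client.Client(session=session.Session(auth=auth))

keystone.regions.create(id='site2', description='Second datacenter')

# Repeat the endpoint creation for each service (glance, cinder, ...).
nova_svc = keystone.services.list(type='compute')[0]
for interface in ('public', 'internal', 'admin'):
    keystone.endpoints.create(
        service=nova_svc,
        url='https://site2.cloud.example.com:8774/v2/%(tenant_id)s',
        interface=interface,
        region='site2')

After that, clients pick a site with --os-region-name (or the region
dropdown in Horizon).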

> Should I be thinking cells or regions (or maybe both)? Any obvious or
> not so obvious pitfalls I should try to avoid?

I have never used Cells, but that's mostly due to being able to
accomplish everything with Regions. Check out some of my posts over the
last year on the regular OpenStack list about Regions.

Also think about how you'll handle quotas. Do you want each user to have
a separate quota on each side, or to share one quota across both? I'm not
aware of a way, supported by OpenStack or by a side project, to do the
latter. We've been doing it ourselves for several years with out-of-band
scripts, roughly along the lines of the sketch below.
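
This is a simplified, illustrative version, not our actual code; the
region names, global limit, and project id are placeholders. It reads the
project's usage from every region via the limits API, then caps each
region's quota so the combined usage stays under a global limit:

# Naive cross-region quota reconciliation sketch. Needs admin
# credentials; all names and numbers below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

GLOBAL_CORES = 100     # global per-project core limit (example value)
PROJECT_ID = 'abc123'  # placeholder project id
REGIONS = ('site1', 'site2')

auth = v3.Password(
    auth_url='https://cloud.example.com:5000/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

def nova_for(region):
    return nova_client.Client('2', session=sess, region_name=region)

# How many cores the project is using in each region right now.
usage = {}
for region in REGIONS:
    limits = nova_for(region).limits.get(tenant_id=PROJECT_ID)
    absolute = {l.name: l.value for l in limits.absolute}
    usage[region] = absolute['totalCoresUsed']

# In each region, only allow growth into what the other regions haven't
# consumed; never set the quota below what's already in use locally.
for region in REGIONS:
    other = sum(v for r, v in usage.items() if r != region)
    nova_for(region).quotas.update(
        PROJECT_ID, cores=max(GLOBAL_CORES - other, usage[region]))

Run it from cron and accept that there's a race window between reading
usage and updating quotas -- it's enforcement after the fact, not a hard
guarantee.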

> Current scale is about 75 hypervisors. Running Juno on Ubuntu 14.04
> using Ceph for volume storage, ephemeral block devices, and image
> storage (as well as object store). Bulk data storage for most (but by
> no means all) of our workloads is at the current location (not that
> that matters, I suppose).
>
> Second location is about 150km away and we'll have 10G (at least)
> between sites.

For one of our clouds, the two regions are 300km apart on a 10G
connection. We're seeing approximately 3.7ms ping times.

Some short notes:

* Galera replication works well -- we don't see any noticeable lag.

* We replicate Glance images with a simple rsync script (see the P.S.
  for a sketch).

* We have one site designated as "master", and that's where the main DNS
  name points. Each site also has its own DNS name, so you can reach a
  specific site directly (cloud.example.com, site1.cloud.example.com,
  site2.cloud.example.com). Once you're logged into the dashboard at
  either site, you can reach the other region through Horizon.

Hope that helps... let me know if you have any questions on any of the
above.

Joe
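
P.S. The Glance rsync script really is simple. A trimmed-down sketch,
assuming a filesystem store -- the paths, hostname, and lock file are
placeholders. The image records themselves already travel over Galera
replication, so only the files need copying:

#!/usr/bin/env python
# Minimal Glance image sync, assuming a filesystem store. All paths
# and the hostname are placeholders.
import fcntl
import subprocess
import sys

SRC = '/var/lib/glance/images/'
DEST = 'glance@site2.cloud.example.com:/var/lib/glance/images/'

# Simple lock so overlapping cron runs don't step on each other.
lock = open('/var/run/glance-rsync.lock', 'w')
try:
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit(0)  # a previous run is still going

# -a preserves perms/ownership; we deliberately skip --delete so a bad
# sync can't remove images that only exist on the destination.
subprocess.check_call(['rsync', '-a', SRC, DEST])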