<div dir="ltr"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 18, 2018 at 4:31 PM, Peter Penchev <span dir="ltr"><<a href="mailto:openstack-dev@storpool.com" target="_blank">openstack-dev@storpool.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail-HOEnZb"><div class="gmail-h5">On Tue, Sep 18, 2018 at 03:07:45PM +0200, Attila Fazekas wrote:<br>
> > On Tue, Sep 18, 2018 at 2:09 PM, Peter Penchev <openstack-dev@storpool.com> wrote:
> >
> > > On Tue, Sep 18, 2018 at 11:32:37AM +0200, Attila Fazekas wrote:
> > > [format recovered; top-posting after an inline reply looks confusing]
> > > > On Mon, Sep 17, 2018 at 11:43 PM, Jay Pipes <jaypipes@gmail.com> wrote:
> > > >
> > > > > On 09/17/2018 09:39 AM, Peter Penchev wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > So here's a possibly stupid question - or rather, a series of such :)
> > > > > > Let's say a company has two (or five, or a hundred) datacenters in
> > > > > > geographically different locations and wants to deploy OpenStack in
> > > > > > both. What would be a deployment scenario that would allow relatively
> > > > > > easy migration (cold, not live) of instances from one datacenter to
> > > > > > another?
> > > > > >
> > > > > > My understanding is that for servers located far away from one
> > > > > > another, regions would be a better metaphor than availability zones,
> > > > > > if only because it would be faster for the various storage, compute,
> > > > > > etc. services to communicate with each other for the common case of
> > > > > > doing actions within the same datacenter. Is this understanding
> > > > > > wrong - is it considered all right for groups of servers located in
> > > > > > far-away places to be treated as different availability zones in the
> > > > > > same cluster?
> > > > > >
> > > > > > If the groups of servers are put in different regions, though, this
> > > > > > brings me to the real question: how can an instance be migrated
> > > > > > across regions? Note that the instance will almost certainly have
> > > > > > some shared-storage volume attached, and assume (not quite the
> > > > > > common case, but still) that the underlying shared storage
> > > > > > technology can be taught about another storage cluster in another
> > > > > > location and can transfer volumes and snapshots to remote clusters.
> > > > > > From what I've found, there are three basic ways:
> > > > > >
> > > > > > - do it pretty much by hand: create snapshots of the volumes used in
> > > > > >   the underlying storage system, transfer them to the other storage
> > > > > >   cluster, then tell the Cinder volume driver to manage them, and
> > > > > >   spawn an instance with the newly-managed newly-transferred volumes
> > > > > >
> > > > >
> > > > > Yes, this is a perfectly reasonable solution. In fact, when I was at
> > > > > AT&T, this was basically how we allowed tenants to spin up instances
> > > > > in multiple regions: snapshot the instance, it gets stored in the
> > > > > Swift storage for the region, the tenant starts the instance in a
> > > > > different region, and Nova pulls the image from the Swift storage in
> > > > > the other region. It's slow the first time it's launched in the new
> > > > > region, of course, since the bits need to be pulled from the other
> > > > > region's Swift storage, but after that, local image caching speeds
> > > > > things up quite a bit.
> > > > >
> > > > > This isn't migration, though. Namely, the tenant doesn't keep their
> > > > > instance ID, their instance's IP addresses, or anything like that.
> > >
> > > Right, sorry, I should have clarified that what we're interested in is
> > > technically creating a new instance with the same disk contents, so
> > > that's fine. Thanks for confirming that there is not a simpler way that
> > > I've missed, I guess :)
> > >
> > > > > I've heard some users care about that stuff, unfortunately, which is
> > > > > why we have shelve [offload]. There's absolutely no way to perform a
> > > > > cross-region migration that keeps the instance ID and instance IP
> > > > > addresses.
> > > > >
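A rough sketch of how the "manage" step in option 1 maps onto the
existing CLI - the backend host string and all the names below are
placeholders, and the storage-level transfer itself happens outside of
OpenStack:

    # Region A: snapshot the volume at the storage level and transfer
    # the snapshot to the remote cluster (storage-specific step).

    # Region B: adopt the transferred volume into Cinder, then boot
    # from it.
    cinder manage --id-type source-name --name migrated-vol \
        cinder-volume-host@backend#pool transferred-volume-name
    openstack server create --volume migrated-vol --flavor m1.small \
        --network private migrated-instance
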
> > > > > > - use Cinder to backup the volumes from one region, then restore
> > > > > >   them to the other; if this is combined with a storage-specific
> > > > > >   Cinder backup driver that knows that "backing up" is "creating
> > > > > >   a snapshot" and "restoring to the other region" is "transferring
> > > > > >   that snapshot to the remote storage cluster", it seems to be the
> > > > > >   easiest way forward (once the Cinder backup driver has been
> > > > > >   written)
> > > > > >
> > > > > Still won't have the same instance ID and IP address, which is what
> > > > > certain users tend to complain about needing with move operations.
> > > > >
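For reference, the backup/restore route with the current CLI would look
roughly like this; it assumes the backup record can be carried across
regions (e.g. a shared or replicated backup store), and all names are
made up:

    # Region A: back up the volume; a storage-specific backup driver
    # would turn this into "create a snapshot on the remote cluster".
    openstack volume backup create --name vol1-xfer vol1
    cinder backup-export <backup-id>  # prints backup_service/backup_url

    # Region B: import the backup record, then restore it into a volume.
    cinder backup-import <backup_service> <backup_url>
    openstack volume backup restore <backup-id> new-vol
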
> > > > > > - use Nova's "server image create" command, transfer the resulting
> > > > > >   Glance image somehow (possibly by downloading it from the Glance
> > > > > >   storage in one region and simultaneously uploading it to the
> > > > > >   Glance instance in the other), then spawn an instance off that
> > > > > >   image
> > > > > >
> > > > > Still won't have the same instance ID and IP address :)
> > > > >
> > > > > Best,
> > > > > -jay
> > > > >
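Sketched with the CLI, the image route would be something like the
following - the names and the disk format are placeholders, and (as
noted further down) this only really carries data for instances with a
local root disk, not for volume-backed ones:

    # Region A: snapshot the instance into a Glance image, download it.
    openstack server image create --name inst1-snap inst1
    openstack image save --file inst1-snap.qcow2 inst1-snap

    # Region B (e.g. with OS_REGION_NAME=region-b): upload and boot.
    openstack image create --file inst1-snap.qcow2 \
        --disk-format qcow2 --container-format bare inst1-snap
    openstack server create --image inst1-snap --flavor m1.small \
        inst1-copy
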
> > > > The "server image create" approach seems to be the simplest one,<br>
> > > >> although it is a bit hard to imagine how it would work without<br>
> > > >> transferring data unnecessarily (the online articles I've seen<br>
> > > >> advocating it seem to imply that a Nova instance in a region cannot be<br>
> > > >> spawned off a Glance image in another region, so there will need to be<br>
> > > >> at least one set of "download the image and upload it to the other<br>
> > > >> side", even if the volume-to-image and image-to-volume transfers are<br>
> > > >> instantaneous, e.g. using glance-cinderclient). However, when I tried<br>
> > > >> it with a Nova instance backed by a StorPool volume (no ephemeral<br>
> > image<br>
> > > >> at all), the Glance image was zero bytes in length and only its<br>
> > metadata<br>
> > > >> contained some information about a volume snapshot created at that<br>
> > > >> point, so this seems once again to go back to options 1 and 2 for the<br>
> > > >> different ways to transfer a Cinder volume or snapshot to the other<br>
> > > >> region. Or have I missed something, is there a way to get the "server<br>
> > > >> image create / image download / image create" route to handle volumes<br>
> > > >> attached to the instance?<br>
> > > >><br>
> > > >> So... have I missed something else, too, or are these the options for<br>
> > > >> transferring a Nova instance between two distant locations?<br>
> > > >><br>
> > > >> Thanks for reading this far, and thanks in advance for your help!<br>
> > > >><br>
> > > >> Best regards,<br>
> > > >> Peter<br>
> > > >
> > > > Create a volume-transfer VM/machine in each region.
> > > > Attach the volume -> dd -> compress -> internet -> decompress -> new
> > > > volume, then attach (/boot with) the volume on the final machine.
> > > > In case you have frequent transfers, you may keep the machines up for
> > > > the next one.
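Spelled out, that pipeline could look something like this - hostnames,
device paths and the choice of compressor are illustrative, and it
assumes the region-A transfer VM can ssh to the region-B one, with an
equally sized target volume already created and attached there:

    # Source volume attached as /dev/vdb on the region-A transfer VM,
    # target volume attached as /dev/vdb on the region-B transfer VM.
    dd if=/dev/vdb bs=4M status=progress \
        | gzip -c \
        | ssh transfer-vm-region-b 'gunzip -c | dd of=/dev/vdb bs=4M'
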
> > >
> > > Thanks for the advice, but this would involve transferring *a lot* more
> > > data than if we leave it to the underlying storage :) As I mentioned,
> > > the underlying storage can be taught about remote clusters and can be
> > > told to create a remote snapshot of a volume; this will be the base on
> > > which we will write our Cinder backup driver. So both my options 1 (do
> > > it "by hand" with the underlying storage) and 2 (cinder volume
> > > backup/restore) would be preferable.
> > >
> >
> > Cinder might get a feature to `rescue` a volume in case someone
> > accidentally deleted the DB record or some other bad thing happened.
> > This would need to be an admin-only op where you specify where the
> > volume is. If a new volume just `shows up` on the storage without
> > Cinder's knowledge, it could be rescued as well.
>
> Hmm, is this not what the Cinder "manage" command does?

Sounds like it does:
https://blueprints.launchpad.net/horizon/+spec/add-manage-unmanage-volume
> > Among the same storage types, Cinder could probably have an admin-only
> > API for transfer.
> >
> > I am not sure whether volume backup/restore is really better across
> > regions than the above steps properly piped; it is very
> > infrastructure-dependent - bandwidth and latency across regions matter.
> [snip discussion]
>
> Well, the reason my initial message said "assume the underlying storage
> can do that" was that I did not want to go into marketing/advertisement
> territory and say flat out that the StorPool storage system can do that :)
>
> Best regards,
> Peter
>
> --
> Peter Penchev  openstack-dev@storpool.com  https://storpool.com/
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack