[Openstack] [nova][cinder] Migrate instances between regions or between clusters?
Attila Fazekas
afazekas at redhat.com
Tue Sep 18 09:32:37 UTC 2018
Create a volume-transfer VM/machine in each region.
Attach the volume -> dd -> compress -> internet -> decompress -> new
volume, then attach (/boot from) the volume on the final machine.
If you have frequent transfers, you may keep the machines up for the
next one.
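The dd -> compress -> internet -> decompress step above can be sketched as a single pipeline. The device paths and the remote host in the comment are placeholder assumptions; the runnable demo below uses plain local files so the same pipeline can be exercised without two clouds.

```shell
# Stand-ins for the attached source volume and the new destination volume.
SRC=/tmp/src-volume.img
DST=/tmp/dst-volume.img

# Fake "volume" contents (in real use the source would be a block device
# such as /dev/vdb attached to the transfer VM).
dd if=/dev/urandom of="$SRC" bs=1M count=4 status=none

# The real cross-region form would be roughly (host name is hypothetical):
#   dd if=/dev/vdb bs=4M | gzip -1 | ssh transfer-vm.region2 'gunzip | dd of=/dev/vdb bs=4M'
# Local equivalent of the same dd -> compress -> decompress -> dd pipeline:
dd if="$SRC" bs=1M status=none | gzip -1 | gunzip | dd of="$DST" bs=1M status=none

# Verify the copy is bit-identical before booting from it.
cmp "$SRC" "$DST" && echo "transfer OK"
```

gzip -1 trades compression ratio for speed, which is usually the right call when the bottleneck is CPU rather than the inter-region link.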
If the storage is only on the compute node: snapshot -> glance download
-> glance upload.
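That snapshot -> glance download -> glance upload path can be sketched with the openstack CLI. Region names, server/image names, and the file path are placeholders for illustration; the `run` wrapper only prints the commands, so the sequence reads (and tests) as a dry run rather than touching a real cloud.

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# 1. Snapshot the instance in the source region.
run openstack --os-region-name RegionOne server image create \
      --name demo-snap demo-server
# 2. Download the snapshot image locally.
run openstack --os-region-name RegionOne image save \
      --file /tmp/demo-snap.img demo-snap
# 3. Upload it to glance in the destination region.
run openstack --os-region-name RegionTwo image create \
      --disk-format qcow2 --container-format bare \
      --file /tmp/demo-snap.img demo-snap
# 4. Boot a new instance from it there.
run openstack --os-region-name RegionTwo server create \
      --image demo-snap --flavor m1.small demo-server-copy
```

To execute for real, drop the `run` prefix and make sure the credentials in use are valid for both regions.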
Would be nice if cinder/glance could take the credentials for another
openstack and move the volume/image to another cinder/glance.
If you want the same IP, specify the IP at instance boot time (port
create), but you cannot be sure the same IP is always available or really
routable in a different region, unless a VPN-like solution is in place.
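Pinning the IP at boot time looks roughly like the sketch below: create a port with a fixed IP, then boot on that port. Network, subnet, image, and flavor names and the address are all placeholder assumptions; the `run` wrapper prints the commands instead of executing them.

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# Create a port holding the desired fixed IP.
run openstack port create --network private \
      --fixed-ip subnet=private-subnet,ip-address=10.0.0.42 demo-port
# Boot the replacement instance attached to that port.
run openstack server create --flavor m1.small --image demo-image \
      --nic port-id=demo-port demo-server
```

This only works if the destination region has a network where 10.0.0.42 is free and actually reachable, which is exactly the caveat above.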
The UUID is not expected to be changed by users or admins (it is unsafe),
but you can use other metadata for a description/your own UUID.
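Carrying your own identifier across regions as metadata, since the Nova-assigned UUID cannot safely be reused, could look like this sketch. The property name, value, and server names are hypothetical; `run` prints the commands instead of executing them.

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# On the source instance, record a stable ID of your own choosing:
run openstack server set --property migration_id=my-stable-id-001 demo-server
# After re-creating the instance in the other region, tag it the same way:
run openstack --os-region-name RegionTwo server set \
      --property migration_id=my-stable-id-001 demo-server-copy
```

Tooling on either side can then look instances up by the `migration_id` property instead of the region-local UUID.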
On Mon, Sep 17, 2018 at 11:43 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> On 09/17/2018 09:39 AM, Peter Penchev wrote:
>
>> Hi,
>>
>> So here's a possibly stupid question - or rather, a series of such :)
>> Let's say a company has two (or five, or a hundred) datacenters in
>> geographically different locations and wants to deploy OpenStack in both.
>> What would be a deployment scenario that would allow relatively easy
>> migration (cold, not live) of instances from one datacenter to another?
>>
>> My understanding is that for servers located far away from one another
>> regions would be a better metaphor than availability zones, if only
>> because it would be faster for the various storage, compute, etc.
>> services to communicate with each other for the common case of doing
>> actions within the same datacenter. Is this understanding wrong - is it
>> considered all right for groups of servers located in far away places to
>> be treated as different availability zones in the same cluster?
>>
>> If the groups of servers are put in different regions, though, this
>> brings me to the real question: how can an instance be migrated across
>> regions? Note that the instance will almost certainly have some
>> shared-storage volume attached, and assume (not quite the common case,
>> but still) that the underlying shared storage technology can be taught
>> about another storage cluster in another location and can transfer
>> volumes and snapshots to remote clusters. From what I've found, there
>> are three basic ways:
>>
>> - do it pretty much by hand: create snapshots of the volumes used in
>> the underlying storage system, transfer them to the other storage
>> cluster, then tell the Cinder volume driver to manage them, and spawn
>> an instance with the newly-managed newly-transferred volumes
>>
>
> Yes, this is a perfectly reasonable solution. In fact, when I was at AT&T,
> this was basically how we allowed tenants to spin up instances in multiple
> regions: snapshot the instance, it gets stored in the Swift storage for the
> region, tenant starts the instance in a different region, and Nova pulls
> the image from the Swift storage in the other region. It's slow the first
> time it's launched in the new region, of course, since the bits need to be
> pulled from the other region's Swift storage, but after that, local image
> caching speeds things up quite a bit.
>
> This isn't migration, though. Namely, the tenant doesn't keep their
> instance ID, their instance's IP addresses, or anything like that.
>
> I've heard some users care about that stuff, unfortunately, which is why
> we have shelve [offload]. There's absolutely no way to perform a
> cross-region migration that keeps the instance ID and instance IP addresses.
>
>> - use Cinder to back up the volumes from one region, then restore them to
>> the other; if this is combined with a storage-specific Cinder backup
>> driver that knows that "backing up" is "creating a snapshot" and
>> "restoring to the other region" is "transferring that snapshot to the
>> remote storage cluster", it seems to be the easiest way forward (once
>> the Cinder backup driver has been written)
>>
>
> Still won't have the same instance ID and IP address, which is what
> certain users tend to complain about needing with move operations.
>
>> - use Nova's "server image create" command, transfer the resulting
>> Glance image somehow (possibly by downloading it from the Glance
>> storage in one region and simultaneously uploading it to the Glance
>> instance in the other), then spawn an instance off that image
>>
>
> Still won't have the same instance ID and IP address :)
>
> Best,
> -jay
>
>> The "server image create" approach seems to be the simplest one,
>> although it is a bit hard to imagine how it would work without
>> transferring data unnecessarily (the online articles I've seen
>> advocating it seem to imply that a Nova instance in a region cannot be
>> spawned off a Glance image in another region, so there will need to be
>> at least one set of "download the image and upload it to the other
>> side", even if the volume-to-image and image-to-volume transfers are
>> instantaneous, e.g. using glance-cinderclient). However, when I tried
>> it with a Nova instance backed by a StorPool volume (no ephemeral image
>> at all), the Glance image was zero bytes in length and only its metadata
>> contained some information about a volume snapshot created at that
>> point, so this seems once again to go back to options 1 and 2 for the
>> different ways to transfer a Cinder volume or snapshot to the other
>> region. Or have I missed something, is there a way to get the "server
>> image create / image download / image create" route to handle volumes
>> attached to the instance?
>>
>> So... have I missed something else, too, or are these the options for
>> transferring a Nova instance between two distant locations?
>>
>> Thanks for reading this far, and thanks in advance for your help!
>>
>> Best regards,
>> Peter
>>
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>