[openstack-dev] Attached and Un-attached Volume migration

Kanade, Rohan Rohan.Kanade at nttdata.com
Tue Feb 5 10:46:58 UTC 2013


>I believe it was my comment on the review that started this idea, but I was actually suggesting something different from point 2 below. Comments inline.
On Feb 4, 2013, at 3:33 AM, "Kanade, Rohan" <Rohan.Kanade at nttdata.com> wrote:

>> This is in the context of an ongoing review https://review.openstack.org/#/c/20333/ and the comments on that review.
>> 
>> There are two cases which need to be considered for the topic of "Attached and Un-attached Volume migration".
>> 
>> Case 1: Is intended to be implemented by the above review. The libvirt API (migrateToURI) takes care of correctly copying and syncing data between the original volumes (attached) on "cinder-node1" and the new destination volumes on "cinder-node2", without taking the original volumes offline or detaching them before copying.
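
(For reference, a minimal sketch of what driving migrateToURI with a full disk copy looks like through the libvirt Python bindings; the domain name and destination URI below are illustrative, not taken from the patch:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000004')

    # VIR_MIGRATE_LIVE keeps the guest running during the migration;
    # VIR_MIGRATE_NON_SHARED_DISK makes libvirt copy the full disk
    # contents to the destination instead of assuming shared storage.
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_NON_SHARED_DISK

    # migrateToURI(duri, flags, dname, bandwidth); bandwidth=0 means no limit.
    dom.migrateToURI('qemu+tcp://compute-node2/system', flags, None, 0)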

>I may have slightly misunderstood the functionality of the patch, so let me ask the following question to clarify:

>Is it possible to do live-migration and specify the same compute node so that only the volumes are migrated?

>for example:

>nova live-migration instance1 compute-node1 --block_device_mapping …

>If it is possible to keep the instance on the same node while migrating volumes, then I think my concerns are addressed.

Currently, with or without this patch, live migration or live block migration to the same compute node is not possible, due to two issues:
1) The scheduler explicitly rejects a destination that is the same host the instance is currently running on: https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L222
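
(Paraphrasing the check at that line, not quoting nova verbatim: the scheduler raises UnableToMigrateToSelf when the requested destination is the host the instance is already on.)

    # Sketch of the same-host check in nova/scheduler/driver.py (paraphrased).
    def _live_migration_dest_check(self, context, instance_ref, dest):
        if dest == instance_ref['host']:
            raise exception.UnableToMigrateToSelf(
                instance_id=instance_ref['uuid'], host=dest)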

2) Libvirt already sees a matching domain, since the instance is already running on that node, so it cannot re-create the same domain on that compute node:
"Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 97, in wait
    readers.get(fileno, noop).cb(fileno)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2747, in _live_migration
    recover_method(ctxt, instance_ref, dest, block_migration)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2741, in _live_migration
    CONF.live_migration_bandwidth)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call
    rv = execute(f,*args,**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
    rv = meth(*args,**kwargs)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 951, in migrateToURI
    if ret == -1: raise libvirtError ('virDomainMigrateToURI() failed', dom=self)
libvirtError: Requested operation is not valid: domain is already active as 'instance-00000004'
"

Hence the instance will always have to be migrated to another compute node for Case 1.
