[Openstack] Live Migration of VMs without shared storage

Remo Mattei Remo at Italy1.com
Tue Mar 3 15:56:46 UTC 2015


You need to share the key between the compute nodes. I would use ssh-copy-id if you are not sure how to copy the ssh key over. 

The key will be added to authorized_keys. 
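For example, a minimal sketch (the hostnames are placeholders; this assumes the stack user from the thread below):

stack@celestial8:~$ ssh-keygen -t rsa                # skip if ~/.ssh/id_rsa already exists
stack@celestial8:~$ ssh-copy-id stack@celestial7     # appends the public key to authorized_keys
stack@celestial8:~$ ssh -o BatchMode=yes stack@celestial7 true && echo OK

Repeat in the other direction so migration can go both ways.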

Remo 
> On Mar 3, 2015, at 07:48, somshekar kadam <som_kadam at yahoo.co.in> wrote:
> 
> Remo,  
> 
> I am able to ssh between the compute nodes after sharing the ssh key.
> Not sure why I am still stuck at the same point; I hope the permissions on the key files below are correct on the two nodes, celestial8 and celestial7. 
> 
> stack@celestial8:~/.ssh$ ls -al
> total 24
> drwx------  2 stack stack 4096 Mar  3 19:49 .
> drwxr-xr-x 29 stack stack 4096 Mar  3 20:49 ..
> -rw-rw-r--  1 stack stack 2392 Mar  3 19:51 authorized_keys
> -rw-------  1 stack stack 1675 Mar  3 19:52 id_rsa
> -rw-r--r--  1 stack stack  398 Mar  3 19:52 id_rsa.pub
> -rw-r--r--  1 stack stack  444 Mar  3 19:53 known_hosts
> stack@celestial8:~/.ssh$ cat known_hosts 
> |1|eVFoxCzwWbatX3NdVKBrDpNY0vo=|XIQuz6AvrNwBrkS4+qIing5h54w= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEOrmoxz9yzn9znNNlpfrhLJWTswgOijZg5Gx+7sIqPgbRwxILHPOEQGCFzU9UXjjplS6jD9HRlsW69kXn/QIUk=
> |1|VoR4tXOGjJeXYoVXYOD9hI7vC6c=|MxjK/kHliUh9LXbh4blsXbeL1X4= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLChC/gKsrM6H1ETtlv2HQWB0LBDqVA4WLTRw+/18iiCSQNP3H7X1m/EnN0yTY+QGWNP+0b40bMbTK2SIwbwXjY=
> stack@celestial8:~/.ssh$ ssh stack@10.10.126.49
> Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)
> 
>  * Documentation:  https://help.ubuntu.com/
> 
> 456 packages can be updated.
> 140 updates are security updates.
> 
> Last login: Tue Mar  3 21:08:08 2015 from celestial8
> stack@celestial7:~$ cd /opt/stack/.ssh/
> stack@celestial7:~/.ssh$ ls -al
> total 28
> drwx------  2 stack stack 4096 Mar  3 19:49 .
> drwxr-xr-x 36 stack stack 4096 Mar  3 21:09 ..
> -rw-rw-r--  1 stack stack 2392 Mar  3 19:53 authorized_keys
> -rw-------  1 stack stack 1675 Mar  3 19:50 id_rsa
> -rw-r--r--  1 stack stack  398 Mar  3 19:50 id_rsa.pub
> -rw-r--r--  1 stack stack  444 Mar  3 19:51 known_hosts
> -rw-r--r--  1 stack stack  666 Feb 27 17:17 known_hosts.old
> stack@celestial7:~/.ssh$ cat known_hosts
> |1|x5M0ZAeVuWbnhZUnTbQZgXjnzRk=|+Udzs+eBFcRmN0eUFpLYI7O/wtA= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE853DSXrFQA4N2xfM9PaVtr+EmyqHAlaKK4yPW4hmJZMrvnIUE1YCqDxN7YQmySO83BeGjThcwCIKEbt1RBjlw=
> |1|/7pA8TcIVXv/TGN043CJXK83WMg=|49bI46LTeM7oh8v5Gcy4dOG773g= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEOrmoxz9yzn9znNNlpfrhLJWTswgOijZg5Gx+7sIqPgbRwxILHPOEQGCFzU9UXjjplS6jD9HRlsW69kXn/QIUk=
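> A side note on the listing above: authorized_keys is group-writable (664). With sshd's default StrictModes, overly permissive key files or home directories can make public-key auth fail, so it may be worth tightening the permissions on both nodes (a hedged suggestion, not confirmed as the cause here):
> 
> chmod 700 ~/.ssh
> chmod 600 ~/.ssh/authorized_keys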
> 
>  
> 
> Regards 
> Neelu
> 
> 
> On Tuesday, 3 March 2015 11:24 AM, somshekar kadam <som_kadam at yahoo.co.in> wrote:
> 
> 
> OK, Remo, thanks. 
> The nova user here is the user who installed it; currently that is the stack user I have set. 
> 
> nova.conf of my controller. 
> 
> ------
> [DEFAULT]
> flat_interface = eth0
> flat_network_bridge = br100
> vlan_interface = eth0
> public_interface = br100
> network_manager = nova.network.manager.FlatDHCPManager
> firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
> compute_driver = libvirt.LibvirtDriver
> default_ephemeral_format = ext4
> metadata_workers = 2
> ec2_workers = 2
> osapi_compute_workers = 2
> rabbit_userid = stackrabbit
> rabbit_password = stack123
> rabbit_hosts = 10.10.126.49
> rpc_backend = rabbit
> keystone_ec2_url = http://10.10.126.49:5000/v2.0/ec2tokens
> ec2_dmz_host = 10.10.126.49
> vncserver_proxyclient_address = 127.0.0.1
> vncserver_listen = 127.0.0.1
> vnc_enabled = true
> xvpvncproxy_base_url = http://10.10.126.49:6081/console
> novncproxy_base_url = http://10.10.126.49:6080/vnc_auto.html
> logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s
> logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d
> logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s
> logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s%(color)s] %(instance)s%(color)s%(message)s
> force_config_drive = none
> send_arp_for_ha = True
> multi_host = True
> instances_path = /opt/stack/data/nova/instances
> lock_path = /opt/stack/data/nova
> state_path = /opt/stack/data/nova
> enabled_apis = ec2,osapi_compute,metadata
> instance_name_template = instance-%08x
> sql_connection = mysql://root:stack123@127.0.0.1/nova?charset=utf8
> my_ip = 10.10.126.49
> s3_port = 3333
> s3_host = 10.10.126.49
> default_floating_pool = public
> force_dhcp_release = True
> dhcpbridge_flagfile = /etc/nova/nova.conf
> scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
> rootwrap_config = /etc/nova/rootwrap.conf
> api_paste_config = /etc/nova/api-paste.ini
> allow_migrate_to_same_host = False
> allow_resize_to_same_host = False
> debug = True
> verbose = True
> 
> [osapi_v3]
> enabled = True
> 
> [keystone_authtoken]
> signing_dir = /var/cache/nova
> cafile = /opt/stack/data/ca-bundle.pem
> auth_uri = http://10.10.126.49:5000
> project_domain_id = default
> project_name = service
> user_domain_id = default
> password = stack123
> username = nova
> auth_url = http://10.10.126.49:35357
> auth_plugin = password
> 
> [spice]
> enabled = false
> html5proxy_base_url = http://10.10.126.49:6082/spice_auto.html
> 
> [glance]
> api_servers = http://10.10.126.49:9292
> 
> [libvirt]
> inject_partition = -2
> live_migration_uri = qemu+ssh://stack@%s/system
> use_usb_tablet = False
> cpu_mode = custom
> cpu_model = Nehalem
> virt_type = kvm
> live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
> block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC
> [keymgr]
> fixed_key = 43a3c7e902d5f44c09916261590bc613cf0373c44db5722a5c283b7b37b9ea1c
> 
>  
> 
> Regards 
> Neelu
> 
> 
> On Tuesday, 3 March 2015 11:05 AM, Remo Mattei <Remo at Italy1.com> wrote:
> 
> 
> You need to make sure that you ssh between the machines using the nova user, not the root user. 
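> For example, a quick non-interactive check that the migration user's key works (a sketch; Remo says nova, but in this devstack setup the service user is stack):
> 
> sudo -u stack -i ssh -o BatchMode=yes stack@celestial7 true && echo OK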
> 
> Remo 
>> On Mar 2, 2015, at 21:33, somshekar kadam <som_kadam at yahoo.co.in> wrote:
>> 
>> Remo, 
>> I did not understand; what do you mean by "nova can not root"?
>> 
>> Do I need to set up ssh for the root user instead of a regular user?
>>  
>> 
>> Regards 
>> Neelu
>> 
>> 
>> On Monday, 2 March 2015 10:41 PM, Remo Mattei <remo at italy1.com> wrote:
>> 
>> 
>> Make sure it is the nova user that can ssh, not root. 
>> 
>> Remo
>> 
>> Sent from iPhone
>> 
>> On Mar 2, 2015, at 08:21, somshekar kadam <som_kadam at yahoo.co.in> wrote:
>> 
>>> Hello John, 
>>> Thanks for all your valuable inputs. 
>>> I had missed these configuration settings. I hope the configs mentioned below will be useful for others who are trying out live migration. 
>>> 
>>> 1. force_config_drive = none, otherwise it was giving an error. 
>>> 2. allow_migrate_to_same_host = False
>>> allow_resize_to_same_host = False
>>> 
>>> 3. set the proper CPU mode, as below: 
>>> cpu_mode = custom
>>> cpu_model = Nehalem
>>> virt_type = kvm
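>>> Putting those together, the relevant nova.conf fragment would look roughly like this on each node (a sketch; the section placement matches the full config quoted elsewhere in this thread):
>>> 
>>> [DEFAULT]
>>> force_config_drive = none
>>> allow_migrate_to_same_host = False
>>> allow_resize_to_same_host = False
>>> 
>>> [libvirt]
>>> cpu_mode = custom
>>> cpu_model = Nehalem
>>> virt_type = kvm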
>>> 
>>> Now normal migration from one host to another works fine. 
>>> 
>>> When I try live migration with block-migration, it fails with a host key verification error (see the note after the log below). 
>>> I am looking into it. The keys are shared correctly; I am able to ssh back and forth between the host nodes with passwordless ssh. 
>>> No clue right now. 
>>> 
>>> error log 
>>> ------------------
>>> Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://stack@celestial8/system: Cannot recv data: Host key verification failed.: Connection reset by peer
>>> 
>>> 
>>> on thread notification from (pid=8953) thread_finished /opt/stack/nova/nova/virt/libvirt/driver.py:5638
>>> Traceback (most recent call last):
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 457, in fire_timers
>>>     timer()
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
>>>     cb(*args, **kw)
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, in _do_send
>>>     waiter.switch(result)
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
>>>     result = function(*args, **kwargs)
>>>   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5435, in _live_migration_operation
>>>     instance=instance)
>>>   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in __exit__
>>>     six.reraise(self.type_, self.value, self.tb)
>>>   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5404, in _live_migration_operation
>>>     CONF.libvirt.live_migration_bandwidth)
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
>>>     result = proxy_call(self._autowrap, f, *args, **kwargs)
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
>>>     rv = execute(f, *args, **kwargs)
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
>>>     six.reraise(c, e, tb)
>>>   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
>>>     rv = meth(*args, **kwargs)
>>>   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in migrateToURI2
>>>     if ret == -1: raise libvirtError('virDomainMigrateToURI2() failed', dom=self)
>>> libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+ssh://stack@celestial8/system: Cannot recv data: Host key verification failed.: Connection reset by peer
>>> -----
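>>> A note on the failure above: "Host key verification failed" usually means the known_hosts consulted by the process running ssh (here libvirt, as the stack user) has no entry for the exact name in live_migration_uri. The earlier known_hosts was populated by logging in to the IP (10.10.126.49), while libvirt connects to the hostname celestial8, so the lookup can miss. One hedged way to pre-seed both forms on every compute node:
>>> 
>>> ssh-keyscan celestial7 celestial8 10.10.126.49 >> ~/.ssh/known_hosts
>>> 
>>> (Hostnames/IP assumed from this thread; scan whatever form nova actually puts into the URI.)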
>>> 
>>> 
>>>  
>>> 
>>> Regards 
>>> Neelu
>>> 
>>> 
>>> On Friday, 27 February 2015 8:09 PM, somshekar kadam <som_kadam at yahoo.co.in> wrote:
>>> 
>>> 
>>> I have applied the same libvirt and nova.conf configs as mentioned in the thread links above. 
>>>  
>>> 
>>> Regards 
>>> Neelu
>>> 
>>> 
>>> On Friday, 27 February 2015 8:07 PM, somshekar kadam <som_kadam at yahoo.co.in> wrote:
>>> 
>>> 
>>> John, your patch is in the mainline now. I have tested it, but I am not sure what I am missing (see the note after the logs below).
>>> I have specified cpu model none, and I have also tried setting cpu mode custom with model kvm64. 
>>> 
>>> I am getting the error below on live migration. 
>>> 
>>> error log
>>> ----
>>> Remote error: libvirtError Requested operation is not valid: no CPU model specified [u'Traceback (most recent call last):\n', u'  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply\n executo
>>> Code
>>> 500
>>> Details
>>> File "/opt/stack/nova/nova/conductor/manager.py", line 606, in _live_migrate
>>>     block_migration, disk_over_commit)
>>>   File "/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 194, in execute
>>>     return task.execute()
>>>   File "/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 62, in execute
>>>     self._check_requested_destination()
>>>   File "/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 100, in _check_requested_destination
>>>     self._call_livem_checks_on_host(self.destination)
>>>   File "/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 142, in _call_livem_checks_on_host
>>>     destination, self.block_migration, self.disk_over_commit)
>>>   File "/opt/stack/nova/nova/compute/rpcapi.py", line 391, in check_can_live_migrate_destination
>>>     disk_over_commit=disk_over_commit)
>>>   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 156, in call
>>>     retry=self.retry)
>>>   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
>>>     timeout=timeout, retry=retry)
>>>   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 417, in send
>>>     retry=retry)
>>>   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 408, in _send
>>>     raise result
>>> Created
>>> --------------------
>>> 
>>> debug log 
>>> ---------------
>>> DEBUG (shell:803) Live migration of instance 477e8963-aadf-4dc7-b26f-f6e9332dd33b to host celestial5 failed (HTTP 500) (Request-ID: req-7973edbc-8844-41c5-926a-9f5e7cee62f2)
>>> Traceback (most recent call last):
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line 800, in main
>>>     OpenStackComputeShell().main(argv)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line 730, in main
>>>     args.func(self.cs, args)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 3001, in do_live_migration
>>>     args.disk_over_commit)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 344, in live_migrate
>>>     disk_over_commit)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 1124, in live_migrate
>>>     'disk_over_commit': disk_over_commit})
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 1240, in _action
>>>     return self.api.client.post(url, body=body)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 490, in post
>>>     return self._cs_request(url, 'POST', **kwargs)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 465, in _cs_request
>>>     resp, body = self._time_request(url, method, **kwargs)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 439, in _time_request
>>>     resp, body = self.request(url, method, **kwargs)
>>>   File "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 433, in request
>>>     raise exceptions.from_response(resp, body, url, method)
>>> ClientException: Live migration of instance 477e8963-aadf-4dc7-b26f-f6e9332dd33b to host celestial5 failed (HTTP 500) (Request-ID: req-7973edbc-8844-41c5-926a-9f5e7cee62f2)
>>> ERROR (ClientException): Live migration of instance 477e8963-aadf-4dc7-b26f-f6e9332dd33b to host celestial5 failed (HTTP 500) (Request-ID: req-7973edbc-8844-41c5-926a-9f5e7cee62f2)
>>> ----------------------
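>>> A note on the "no CPU model specified" error above: with cpu_mode = custom, libvirt needs a concrete cpu_model, and the model must be supported on both hosts. One way to check what each host's libvirt reports before pinning a model (a sketch using standard virsh):
>>> 
>>> virsh capabilities | grep -A 2 '<cpu>'
>>> 
>>> Run it on both compute nodes and pick a model both support (e.g. Nehalem, as eventually used in this thread).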
>>> 
>>> Regards 
>>> Neelu
>>> 
>>> 
>>> On Thursday, 26 February 2015 8:21 PM, John Griffith <john.griffith at solidfire.com> wrote:
>>> 
>>> 
>>> 
>>> 
>>> On Thu, Feb 26, 2015 at 7:07 AM, somshekar kadam <som_kadam at yahoo.co.in> wrote:
>>> First of all, thanks for the quick reply. 
>>> The documentation does not say much about volume-backed live migration. 
>>> Is it tested, that is, supported and working?
>>> I will try block live migration, since no shared storage is required. 
>>>  
>>> 
>>> Regards 
>>> Neelu
>>> 
>>> 
>>> On Thursday, 26 February 2015 7:13 PM, Robert van Leeuwen <Robert.vanLeeuwen at spilgames.com> wrote:
>>> 
>>> 
>>> > Is the Live Migration of VMs even without shared storage supported in Openstack now.
>>> > If yes is there any document for the same.
>>> 
>>> 
>>> Well, that depends on what you call "supported".
>>> Yes, it is possible. 
>>> Will it always work? Probably not, unless you look at the bugs below.
>>> They have been fixed recently, but the fixes might not be merged into the version you are running: 
>>> https://bugs.launchpad.net/nova/+bug/1270825
>>> https://bugs.launchpad.net/nova/+bug/1082414
>>> 
>>> There might be more issues but I hit the ones mentioned above.
>>> Have a look at the docs to configure it:
>>> http://docs.openstack.org/admin-guide-cloud/content/section_configuring-compute-migrations.html
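>>> For reference, once configured, block live migration is requested with the --block-migrate flag of the nova client (instance and host below are placeholders):
>>> 
>>> nova live-migration --block-migrate <instance-uuid> <target-host>
>>> 
>>> Without the flag, nova assumes the instance files live on shared storage.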
>>> 
>>> Cheers,
>>> Robert van Leeuwen
>>> 
>>> 
>>> 
>>> 
>>> 
>>> It's dated and getting the patches backported has proven to take forever, but I did a write-up and testing on this a while back [1]. It should still be accurate.
>>> 
>>> Thanks,
>>> John
>>> 
>>> [1]: https://griffithscorner.wordpress.com/2014/12/08/openstack-live-migration-with-cinder-backed-instances/
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 
> 
> 
> 
> 
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
> 
