Re: [Forwarded via lists.openstack.org] Re: [Nova] migration issue
Brin Zhang(张百林)
zhangbailin at inspur.com
Thu Jul 25 15:21:27 UTC 2019
Could you describe how the environment is configured and what the migrated server was doing before the migration, and provide the migration log?
Alternatively, file a bug on Launchpad stating the affected version, the steps to reproduce, and so on.
Either of these would make the problem easier and clearer to analyze.
________________________________
From: Eugen Block <eblock at nde.ag>
Date: 25 July 2019, 4:06 PM
To: openstack-discuss at lists.openstack.org
Subject: [Forwarded via lists.openstack.org] Re: [Nova] migration issue
I'm not sure I'm reading this right: according to your config output you
already use Ceph as the storage backend (which can already provide
ephemeral disks if configured to do so), but you also want to configure
LVM? Can you explain what your goal is here?
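For reference, both backends are steered by a single option. A minimal sketch, using the [libvirt] values quoted later in this thread, of checking which backend is actually in effect (as I understand it, images_type alone selects the ephemeral-disk backend, so the images_rbd_* options go unused for ephemeral disks when it is set to lvm):

```shell
# Excerpt of the [libvirt] settings quoted later in this thread; both
# rbd and lvm options are present, but images_type picks the backend.
cat > /tmp/nova-libvirt-excerpt.conf <<'EOF'
[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
images_rbd_pool = vms
images_type = lvm
images_volume_group = ssd_nova
EOF
# The backend actually in effect:
grep '^images_type' /tmp/nova-libvirt-excerpt.conf
```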
Zitat von Budai Laszlo <laszlo.budai at gmail.com>:
> Hi Eugen,
>
> Thank you for your suggestions. I have tested, and the nova user is
> able to scp from one compute host to the other. I have also tested
> the migration without the local LVM ephemeral disk, and it is
> working in that case. The issue seems to appear when I'm trying to
> use a local ephemeral disk managed by LVM (images_type=lvm).
> In this case my ephemeral disk is created on a local LVM volume that
> is part of the ssd_nova VG (images_volume_group = ssd_nova).
>
> I could not see the source file that the nova is trying to copy, but
> this may have two reasons:
> 1. The file is not created.
> 2. Its lifetime is very short, and I cannot observe it.
>
> The directory
> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize
> doesn't exist while my instance is running. It appears for a short
> period of time (a fraction of a second) while the migration is trying
> to do its job, and then disappears again.
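Such a short-lived directory can be caught with a tight polling loop started just before the migration. A minimal sketch, demonstrated on a temporary directory so it is runnable anywhere; on a real system one would point WATCH_DIR at /var/lib/nova/instances on the source node:

```shell
# Record any *_resize entries that briefly appear under WATCH_DIR.
WATCH_DIR=$(mktemp -d)
LOG=$(mktemp)
# Poll every 10 ms for ~2 s in the background.
( for i in $(seq 1 200); do
    ls "$WATCH_DIR" | grep '_resize' >> "$LOG" || true
    sleep 0.01
  done ) &
WATCHER=$!
sleep 0.2
mkdir "$WATCH_DIR/5a6dae58_resize"   # simulate nova creating the dir...
sleep 0.2
rmdir "$WATCH_DIR/5a6dae58_resize"   # ...and removing it a moment later
wait $WATCHER
grep -q '_resize' "$LOG" && echo "caught transient directory"
```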
>
> Kind regards,
> Laszlo
>
> On 7/24/19 10:37 PM, Eugen Block wrote:
>> Hi,
>>
>> is this the first migration test, or have there already been some
>> successful migrations? If it's the first, I would suggest checking
>> whether the nova user can access the other node via ssh without a
>> password. Is
>> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0 present on the source node? Can you run the 'scp' command manually as nova
>> user?
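The manual check can be made concrete by assembling the exact command from the traceback. A sketch; running it via `sudo -u nova` is an assumption about how the nova account is reached, and nothing is executed remotely here, the command is only printed:

```shell
# Build the same scp invocation nova attempted (paths from the traceback),
# so it can be replayed by hand as the nova user on the source node.
UUID=5a6dae58-00d6-4317-b635-909fdf09ac49
DEST=192.168.56.46
SRC="/var/lib/nova/instances/${UUID}_resize/${UUID}_disk.eph0"
CMD="scp -r $SRC $DEST:/dev/ssd_nova/${UUID}_disk.eph0"
# Printed, not executed, since the remote side is site-specific:
echo "run on the source node:  sudo -u nova $CMD"
```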
>>
>> Regards,
>> Eugen
>>
>>
>> Zitat von Budai Laszlo <laszlo.budai at gmail.com>:
>>
>>> Dear all,
>>>
>>> we are testing the cold migration of instances that are using
>>> local ephemeral storage, and it fails with the following error:
>>>
>>>
>>> 2019-07-24 13:47:13.115 58902 INFO nova.virt.libvirt.driver
>>> [req-356fce31-5e98-425f-89d6-8a98664d31ad
>>> 122b0950c0cc47bdbb78e63724d65105 f713e44c723e491aa67352e12f83e0d7
>>> - default default] [instance: 5a6dae
>>> 58-00d6-4317-b635-909fdf09ac49] Instance shutdown successfully
>>> after 3 seconds.
>>> 2019-07-24 13:47:13.126 58902 INFO nova.virt.libvirt.driver [-]
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] Instance
>>> destroyed successfully.
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [req-356fce31-5e98-425f-89d6-8a98664d31ad
>>> 122b0950c0cc47bdbb78e63724d65105 f713e44c723e491aa67352e12f83e0d7
>>> - default default] [instance: 5a6dae58-
>>> 00d6-4317-b635-909fdf09ac49] Setting instance vm_state to ERROR:
>>> ProcessExecutionError: Unexpected error while running command.
>>> Command: scp -r
>>> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> 192.168.56.46:/dev/ssd_nova/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph
>>>
>>> Exit code: 1
>>> Stdout: u''
>>> Stderr:
>>> u'------------------------------------------------------------------------------\n* WARNING *\n* You are accessing a secured
>>> system and your actions will be logged along *\n* with identifying
>>> information. Disconnect immediately if you are not an *\n*
>>> authorized user of this system. *
>>> \n------------------------------------------------------------------------------\n/var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0: No
>>> such file or directory\n'
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] Traceback (most
>>> recent call last):
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 7518, in _error_out_instance_on_exception
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] yield
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 4275, in _resize_instance
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] timeout,
>>> retry_interval)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 8200, in migrate_disk_and_power_off
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] shared_storage)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> self.force_reraise()
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 8185, in migrate_disk_and_power_off
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> compression=compression)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 226, in copy_image
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> compression=compression)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/volume/remotefs.py", line 110, in copy_file
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> compression=compression)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/volume/remotefs.py", line 196, in copy_file
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> on_execute=on_execute, on_completion=on_completion)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/utils.py", line 231, in execute
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] return
>>> processutils.execute(*cmd, **kwargs)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> cmd=sanitized_cmd)
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> ProcessExecutionError: Unexpected error while running command.
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] Command: scp -r
>>> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> 192.168.56.46:/dev/ssd_nova/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] Exit code: 1
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] Stdout: u''
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] Stderr:
>>> u'------------------------------------------------------------------------------\n* WARNING *\n* You are accessing a secured system and your actions will be logged along *\n* with identifying information. Disconnect immediately if you are not an *\n* authorized user of this system. *\n------------------------------------------------------------------------------\n/var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0: No such file or
>>> directory\n'
>>> 2019-07-24 13:47:14.008 58902 ERROR nova.compute.manager
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49]
>>> 2019-07-24 13:47:14.380 58902 INFO nova.compute.manager
>>> [req-356fce31-5e98-425f-89d6-8a98664d31ad
>>> 122b0950c0cc47bdbb78e63724d65105 f713e44c723e491aa67352e12f83e0d7
>>> - default default] [instance:
>>> 5a6dae58-00d6-4317-b635-909fdf09ac49] Swapping old allocation on
>>> 3da11b23-5ef8-4471-a3c9-5f743b0bbfd7 held by migration
>>> da66c416-2636-421b-99cb-470bd07dad4c for instance
>>> 2019-07-24 13:47:14.979 58902 INFO nova.compute.manager
>>> [req-356fce31-5e98-425f-89d6-8a98664d31ad
>>> 122b0950c0cc47bdbb78e63724d65105 f713e44c723e491aa67352e12f83e0d7
>>> - default default] [instance:
>>> 5a6dae58-00d6-4317-b635-909fdf09ac49] Successfully reverted task
>>> state from resize_migrating on failure for instance.
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> [req-356fce31-5e98-425f-89d6-8a98664d31ad
>>> 122b0950c0cc47bdbb78e63724d65105 f713e44c723e491aa67352e12f83e0d7
>>> - default default] Exception during message handling:
>>> ProcessExecutionError: Unexpected error while running command.
>>> Command: scp -r
>>> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> 192.168.56.46:/dev/ssd_nova/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> Exit code: 1
>>> Stdout: u''
>>> Stderr:
>>> u'------------------------------------------------------------------------------\n* WARNING *\n* You are accessing a secured system and your actions will be logged along *\n* with identifying information. Disconnect immediately if you are not an *\n* authorized user of this system. *\n------------------------------------------------------------------------------\n/var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0: No such file or
>>> directory\n'
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> Traceback (most recent call last):
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in
>>> _process_incoming
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> res = self.dispatcher.dispatch(message)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in
>>> dispatch
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> return self._do_dispatch(endpoint, method, ctxt, args)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in
>>> _do_dispatch
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> result = func(ctxt, **new_args)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in
>>> wrapped
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> function_name, call_dict, binary)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
>>> __exit__
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> self.force_reraise()
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
>>> force_reraise
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in
>>> wrapped
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> return f(self, context, *args, **kw)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 186, in
>>> decorated_function
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> "Error: %s", e, instance=instance)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
>>> __exit__
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> self.force_reraise()
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
>>> force_reraise
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 156, in
>>> decorated_function
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> return function(self, context, *args, **kwargs)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/utils.py", line 977, in
>>> decorated_function
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> return function(self, context, *args, **kwargs)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 214, in
>>> decorated_function
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> kwargs['instance'], e, sys.exc_info())
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
>>> __exit__
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> self.force_reraise()
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
>>> force_reraise
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 202, in
>>> decorated_function
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> return function(self, context, *args, **kwargs)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 4240, in
>>> resize_instance
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> self._revert_allocation(context, instance, migration)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
>>> __exit__
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> self.force_reraise()
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
>>> force_reraise
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 4237, in
>>> resize_instance
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> instance_type, clean_shutdown)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/compute/manager.py", line 4275, in
>>> _resize_instance
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> timeout, retry_interval)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 8200, in
>>> migrate_disk_and_power_off
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> shared_storage)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
>>> __exit__
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> self.force_reraise()
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
>>> force_reraise
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 8185, in
>>> migrate_disk_and_power_off
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> compression=compression)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 226, in
>>> copy_image
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> compression=compression)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/volume/remotefs.py", line 110, in
>>> copy_file
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> compression=compression)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/virt/libvirt/volume/remotefs.py", line 196, in
>>> copy_file
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> on_execute=on_execute, on_completion=on_completion)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/nova/utils.py",
>>> line 231, in execute
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> return processutils.execute(*cmd, **kwargs)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> File
>>> "/openstack/venvs/nova-17.1.3/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in
>>> execute
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> cmd=sanitized_cmd)
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> ProcessExecutionError: Unexpected error while running command.
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> Command: scp -r
>>> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> 192.168.56.46:/dev/ssd_nova/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server Exit code: 1
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server Stdout: u''
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> Stderr:
>>> u'------------------------------------------------------------------------------\n* WARNING *\n* You are accessing a secured system and your actions will be logged along *\n* with identifying information. Disconnect immediately if you are not an *\n* authorized user of this system. *\n------------------------------------------------------------------------------\n/var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0: No such file or
>>> directory\n'
>>> 2019-07-24 13:47:15.000 58902 ERROR oslo_messaging.rpc.server
>>> 2019-07-24 13:47:17.065 58902 INFO nova.virt.libvirt.imagecache
>>> [req-e73474d0-6857-4c52-9c93-73f3bae247b5 - - - - -] image
>>> ca89475a-cf15-46d1-ab2d-c2b96c403248 at
>>> (/var/lib/nova/instances/_base/ab9f9dd0888a4950702b9c993c82f7ab36a3ed07):
>>> checking
>>> 2019-07-24 13:47:17.066 58902 INFO nova.virt.libvirt.imagecache
>>> [req-e73474d0-6857-4c52-9c93-73f3bae247b5 - - - - -] Active base
>>> files:
>>> /var/lib/nova/instances/_base/ab9f9dd0888a4950702b9c993c82f7ab36a3ed07
>>> 2019-07-24 13:47:27.768 58902 INFO nova.compute.manager [-]
>>> [instance: 5a6dae58-00d6-4317-b635-909fdf09ac49] VM Stopped
>>> (Lifecycle Event)
>>> 2019-07-24 13:47:27.918 58902 INFO nova.compute.manager
>>> [req-e53e928d-81fe-4185-9e62-abc2113e838c - - - - -] [instance:
>>> 5a6dae58-00d6-4317-b635-909fdf09ac49] During
>>> _sync_instance_power_state the DB power_state (1) does not match
>>> the vm_power_state from the hypervisor (4). Updating power_state
>>> in the DB to match the hypervisor.
>>>
>>>
>>> The instance has its root FS on an RBD device (a volume created by
>>> cinder) and its ephemeral disk on a local LVM volume
>>> created by nova.
>>>
>>> It seems that the
>>> /var/lib/nova/instances/5a6dae58-00d6-4317-b635-909fdf09ac49_resize/5a6dae58-00d6-4317-b635-909fdf09ac49_disk.eph0 is not there when scp is trying to copy it to the other
>>> side.
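That would be consistent with how the lvm backend stores disks: the ephemeral disk is a logical volume (a device node under /dev/ssd_nova), not a regular file inside the instance directory, so the file path under the *_resize directory may simply never exist. A sketch contrasting the two paths taken from the traceback; they are only printed here, since checking them requires the affected node:

```shell
# The two paths involved, from the traceback above. On the source node
# one would expect 'test -b' to succeed for the LV device and 'test -e'
# to fail for the file path that scp was given.
UUID=5a6dae58-00d6-4317-b635-909fdf09ac49
FILE_SRC="/var/lib/nova/instances/${UUID}_resize/${UUID}_disk.eph0"  # what scp was given
LV_DEV="/dev/ssd_nova/${UUID}_disk.eph0"                             # where an LV would live
echo "file path given to scp: $FILE_SRC"
echo "expected LV device:     $LV_DEV"
```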
>>>
>>> Any ideas how to fix this?
>>>
>>> The nova config on the nodes looks like this:
>>>
>>> # Ansible managed
>>> [DEFAULT]
>>> allow_resize_to_same_host = True
>>> # Compute
>>> compute_driver = libvirt.LibvirtDriver
>>> # Scheduler
>>> cpu_allocation_ratio = 2.0
>>> # Logs / State
>>> debug = False
>>> # Hypervisor
>>> default_ephemeral_format = ext4
>>> disk_allocation_ratio = 1.0
>>> # Api's
>>> enabled_apis = osapi_compute,metadata
>>> executor_thread_pool_size = 64
>>> fatal_deprecations = False
>>> # Configdrive
>>> force_config_drive = False
>>> host = compute8.mgmt.lab.mydomain.com
>>> image_cache_manager_interval = 0
>>> instance_name_template = instance-%08x
>>> # Ceilometer notification configurations
>>> instance_usage_audit = True
>>> instance_usage_audit_period = hour
>>> instances_path = /var/lib/nova/instances
>>> ## Vif
>>> libvirt_vif_type = ethernet
>>> log_dir = /var/log/nova
>>> # Metadata
>>> metadata_workers = 16
>>> # Network
>>> my_ip = 192.168.56.47
>>> notify_on_state_change = vm_and_task_state
>>> osapi_compute_workers = 16
>>> ram_allocation_ratio = 1.0
>>> reserved_host_disk_mb = 0
>>> reserved_host_memory_mb = 2048
>>> resume_guests_state_on_host_boot = False
>>> rootwrap_config = /etc/nova/rootwrap.conf
>>> rpc_response_timeout = 60
>>> service_down_time = 120
>>> state_path = /var/lib/nova
>>> # Rpc all
>>> transport_url =
>>> rabbit://nova:09c23ad7680fc920ab466c5ad5f6c88c2e2e8cf8bd7@192.168.56.194:5671,nova:09c23ad7680fc920ab466c5ad5f6c88c2e2e8cf8bd7@192.168.56.209:5671,nova:09c23ad7680fc920ab466c5ad5f6c88c2e2e8cf8bd7@192.168.56.179:5671//nova
>>> # Disable stderr logging
>>> use_stderr = False
>>> vif_plugging_is_fatal = True
>>> vif_plugging_timeout = 30
>>>
>>> [api]
>>> auth_strategy = keystone
>>> enable_instance_password = True
>>> use_forwarded_for = False
>>> vendordata_jsonfile_path = /etc/nova/vendor_data.json
>>>
>>> # Cache
>>> [cache]
>>> backend = oslo_cache.memcache_pool
>>> enabled = true
>>> memcache_servers =
>>> 192.168.56.68:11211,192.168.56.85:11211,192.168.56.170:11211
>>>
>>> # Cinder
>>> [cinder]
>>> cafile = /etc/ssl/certs/ca-certificates.crt
>>> catalog_info = volumev3:cinderv3:internalURL
>>> cross_az_attach = True
>>> os_region_name = A_Lab
>>>
>>> [conductor]
>>> workers = 16
>>>
>>> [filter_scheduler]
>>> available_filters = nova.scheduler.filters.all_filters
>>> enabled_filters = RetryFilter, AvailabilityZoneFilter, RamFilter,
>>> AggregateRamFilter, ComputeFilter, AggregateCoreFilter,
>>> DiskFilter, AggregateDiskFilter, AggregateNumInstancesFilter,
>>> AggregateIoOpsFilter, ComputeCapabilitiesFilter,
>>> ImagePropertiesFilter, ServerGroupAntiAffinityFilter,
>>> ServerGroupAffinityFilter, NUMATopologyFilter, SameHostFilter,
>>> DifferentHostFilter
>>> host_subset_size = 10
>>> max_instances_per_host = 50
>>> max_io_ops_per_host = 10
>>> ram_weight_multiplier = 5.0
>>> tracks_instance_changes = True
>>> weight_classes = nova.scheduler.weights.all_weighers
>>>
>>> # Glance
>>> [glance]
>>> api_servers = https://vip.mgmt.lab.mydomain.com:9292
>>> cafile = /etc/ssl/certs/ca-certificates.crt
>>>
>>> [key_manager]
>>> fixed_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>>
>>> [keystone_authtoken]
>>> auth_type = password
>>> auth_uri = https://vip.mgmt.lab.mydomain.com:5000
>>> auth_url = https://vip.mgmt.lab.mydomain.com:35357
>>> cafile = /etc/ssl/certs/ca-certificates.crt
>>> insecure = False
>>> memcache_secret_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> # if your memcached server is shared, use these settings to avoid cache poisoning
>>> memcache_security_strategy = ENCRYPT
>>> memcached_servers =
>>> 192.168.56.68:11211,192.168.56.85:11211,192.168.56.170:11211
>>> password = xxxxxxxxxxxxxxxxxxxxxxx
>>>
>>> project_domain_id = default
>>> project_name = service
>>> region_name = A_Lab
>>> token_cache_time = 300
>>> user_domain_id = default
>>> username = nova
>>>
>>> [libvirt]
>>> disk_cachemodes = network=writeback
>>> hw_disk_discard = unmap
>>> images_rbd_ceph_conf = /etc/ceph/ceph.conf
>>> images_rbd_pool = vms
>>> images_type = lvm
>>> images_volume_group = ssd_nova
>>> inject_key = False
>>> inject_partition = -2
>>> inject_password = False
>>> live_migration_tunnelled = True
>>> live_migration_uri =
>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"
>>> rbd_secret_uuid = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> # ceph rbd support
>>> rbd_user = cinder
>>> remove_unused_resized_minimum_age_seconds = 3600
>>> use_virtio_for_bridges = True
>>> virt_type = kvm
>>>
>>> # Neutron
>>> [neutron]
>>> auth_type = password
>>> # Keystone client plugin authentication URL option
>>> auth_url = https://vip.mgmt.lab.mydomain.com:35357/v3
>>> cafile = /etc/ssl/certs/ca-certificates.crt
>>> default_floating_pool = public
>>> insecure = False
>>> metadata_proxy_shared_secret = xxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> # Keystone client plugin password option
>>> password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> project_domain_name = Default
>>> project_name = service
>>> region_name = A_Lab
>>> service_metadata_proxy = True
>>> url = https://vip.mgmt.lab.mydomain.com:9696
>>> user_domain_name = Default
>>> # Keystone client plugin username option
>>> username = neutron
>>>
>>> [oslo_concurrency]
>>> lock_path = /var/lock/nova
>>> # Notifications
>>> [oslo_messaging_notifications]
>>> driver = messagingv2
>>> notification_topics = notifications
>>> transport_url =
>>> rabbit://nova:09c23ad7680fc920ab466c5ad5f6c88c2e2e8cf8bd7@192.168.56.194:5671,nova:09c23ad7680fc920ab466c5ad5f6c88c2e2e8cf8bd7@192.168.56.209:5671,nova:09c23ad7680fc920ab466c5ad5f6c88c2e2e8cf8bd7@192.168.56.179:5671//nova
>>>
>>> [oslo_messaging_rabbit]
>>> rpc_conn_pool_size = 30
>>> ssl = True
>>>
>>> # Placement
>>> [placement]
>>> auth_type = "password"
>>> auth_url = https://vip.mgmt.lab.mydomain.com:35357/v3
>>> cafile = /etc/ssl/certs/ca-certificates.crt
>>> insecure = False
>>> os_interface = internal
>>> os_region_name = A_Lab
>>> password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> project_domain_name = Default
>>> project_name = service
>>> user_domain_name = Default
>>> username = placement
>>>
>>> [quota]
>>> cores = 20
>>> injected_file_content_bytes = 10240
>>> injected_file_path_length = 255
>>> injected_files = 5
>>> instances = 10
>>> key_pairs = 100
>>> max_age = 0
>>> metadata_items = 128
>>> ram = 32768
>>> server_group_members = 10
>>> server_groups = 10
>>>
>>> [scheduler]
>>> discover_hosts_in_cells_interval = 60
>>> host_manager = host_manager
>>> max_attempts = 5
>>> periodic_task_interval = 60
>>> scheduler_driver = filter_scheduler
>>>
>>> [spice]
>>> agent_enabled = True
>>> enabled = True
>>> # Console Url and binds
>>> html5proxy_base_url =
>>> https://dashboard.iaas.lab.mydomain.com:6080/spice_auto.html
>>> server_listen = 192.168.56.47
>>> server_proxyclient_address = 192.168.56.47
>>>
>>> [upgrade_levels]
>>> compute = auto
>>>
>>> [vnc]
>>> enabled = False
>>>
>>> [wsgi]
>>> api_paste_config = /etc/nova/api-paste.ini
>>> secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
>>>
>>>
>>> Thank you in advance for any suggestions.
>>>
>>> Kind regards,
>>> Laszlo
>>
>>
>>
>>
>>