Re: snapshot problem
You should respond to the list so other users can try to support you.

So nova is trying to live snapshot the instance:
2022-06-12 16:25:55.603 7 INFO nova.compute.manager [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
[...] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Beginning live snapshot process
But I don't see any 'rbd snap create' command. Either the rbd image doesn't support it, or there is some setting that keeps all rbd images "flat". Can you check any relevant configs you might have in nova? Also, can you show the output of 'rbd info <pool>/25c8d676-e20a-4238-a45c-d51daa62b941_disk'?

Then, to test whether the underlying rbd functions work as expected, you could try to create a live snapshot manually:

rbd snap create <pool>/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap

and paste any relevant output here.
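For example, something like this should work (a rough sketch only; I'm assuming the pool is 'vms' as in your qemu-img log line and that a suitable ceph.conf/keyring is available on the node you run it from):

# show the image and its features
rbd info vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk
# create a test snapshot while the instance keeps running
rbd snap create vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap
# list snapshots, then clean up the test snapshot again
rbd snap ls vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk
rbd snap rm vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap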
Zitat von Parsa Aminian <p.aminian.server@gmail.com>:

It's not working for any instance, and all of them are paused. I enabled debug logs; please check the logs:
2022-06-12 16:16:13.478 7 DEBUG nova.compute.manager [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync for uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693
2022-06-12 16:16:13.506 7 DEBUG oslo_concurrency.lockutils [-] Lock "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2022-06-12 16:16:43.562 7 DEBUG nova.compute.resource_tracker [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Instance 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20}}. _remove_deleted_instances_allocations /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539
2022-06-12 16:25:55.104 7 DEBUG nova.compute.manager [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state _get_power_state /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
2022-06-12 16:25:55.603 7 INFO nova.compute.manager [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
63426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Beginning live snapshot process
default default] Lazy-loading 'pci_devices' on Instance uuid 25c8d676-e20a-4238-a45c-d51daa62b941 obj_load_attr /var/lib/kolla/venv/lib/python3.6/site-packages/nova/objects/instance.py:1101
2022-06-12 16:25:57.250 7 DEBUG nova.objects.instance [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] Lazy-loading 'pci_devices' on Instance uuid 25c8d676-e20a-4238-a45c-d51daa62b941 obj_load_attr /var/lib/kolla/venv/lib/python3.6/site-packages/nova/objects/instance.py:1101
2022-06-12 16:25:57.317 7 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1655034957.3158934, 25c8d676-e20a-4238-a45c-d51daa62b941 => Paused> emit_event /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py:1704
2022-06-12 16:25:57.318 7 INFO nova.compute.manager [-] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Paused (Lifecycle Event)
2022-06-12 16:25:57.389 7 DEBUG nova.compute.manager [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state _get_power_state /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
2022-06-12 16:25:57.395 7 DEBUG nova.compute.manager [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: image_pending_upload, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1299
2022-06-12 16:25:57.487 7 INFO nova.compute.manager [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state the instance has a pending task (image_pending_upload). Skip.
2022-06-12 16:26:02.039 7 DEBUG oslo_concurrency.processutils [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f raw rbd:vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk:id=cinder:conf=/etc/ceph/ceph.conf /var/lib/nova/instances/snapshots/tmpv21b_i59/8717dec4c99c4ef7bac752e2a48690ad execute /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384
2022-06-12 16:26:17.075 7 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1655034962.0316682, 25c8d676-e20a-4238-a45c-d51daa62b941 => Stopped> emit_event /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py:1704
INFO nova.compute.manager [-] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Stopped (Lifecycle Event)
DEBUG nova.compute.manager [req-f9f8cbf5-6208-4dca-aca6-48dee87f38fa - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state _get_power_state /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
DEBUG nova.compute.manager [req-f9f8cbf5-6208-4dca-aca6-48dee87f38fa - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: image_pending_upload, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1299
INFO nova.compute.manager [req-f9f8cbf5-6208-4dca-aca6-48dee87f38fa - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state the instance has a pending task (image_pending_upload). Skip.
2022-06-12 16:26:18.539 7 DEBUG nova.compute.manager [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync for uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693
2022-06-12 16:26:18.565 7 DEBUG oslo_concurrency.lockutils [-] Lock "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2022-06-12 16:26:18.566 7 INFO nova.compute.manager [-] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state the instance has a pending task (image_pending_upload). Skip.
2022-06-12 16:26:18.566 7 DEBUG oslo_concurrency.lockutils [-] Lock "25c8d676-e20a-4238-a45c-d51daa62b941" released by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
2022-06-12 16:26:25.769 7 DEBUG oslo_concurrency.processutils [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] CMD "qemu-img convert -t none -O raw -f raw rbd:vms/25c8d676-e20a-4238-a45c-d51daa6b941_disk:id=cinder:conf=/etc/ceph/ceph.conf /var/lib/nova/instances/snapshots/tmpv21b_i59/8717dec4c99c4ef7bac752e2a48690ad" returned: 0 in 23.730s execute /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423
default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Snapshot extracted, beginning image upload
2022-06-12 16:26:27.981 7 DEBUG nova.virt.driver [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] Emitting event <LifecycleEvent: 1655034987.9807608, 25c8d676-e20a-4238-a45c-d51daa62b941 => Started> emit_event /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py:1704
2022-06-12 16:26:27.983 7 INFO nova.compute.manager [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Started (Lifecycle Event)
[instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state _get_power_state /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
25c8d676-e20a-4238-a45c-d51daa62b941] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: image_pending_upload, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1299
2022-06-12 16:26:28.173 7 INFO nova.compute.manager [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Resumed (Lifecycle Event)
2022-06-12 16:29:00.859 7 DEBUG oslo_concurrency.lockutils [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Acquired lock "refresh_cache-25c8d676-e20a-4238-a45c-d51daa62b941" lock /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266
25c8d676-e20a-4238-a45c-d51daa62b941] Forcefully refreshing network info cache for instance _get_instance_nw_info /var/lib/kolla/venv/lib/python3.6/site-packages/nova/network/neutron.py:1833
2022-06-12 16:29:03.278 7 DEBUG nova.network.neutron [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Updating instance_info_cache with network_info: [{"id": "aa2fdd7d-ad18-4890-ad57-14bf9888d2c1", "address": "fa:16:3e:ca:00:d9", "network": {"id": "b86c8304-a9bd-4b39-b7fc-f70ffe76f2a8", "bridge": "br-int", "label": "External_Network", "subnets": [{"cidr": "141.11.42.0/24", "dns": [{"address": "8.8.8.8", "type": "dns", "version": 4, "meta": {}}, {"address": "217.218.127.127", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "141.11.42.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "141.11.42.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {}}], "meta": {"injected": true, "tenant_id": "ef940663426b4c87a751afaf13b758e0", "mtu": 1500, "physical_network": "physnet1", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapaa2fdd7d-ad", "ovs_interfaceid": "aa2fdd7d-ad18-4890-ad57-14bf9888d2c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}}] update_instance_cache_with_nw_info /var/lib/kolla/venv/lib/python3.6/site-packages/nova/network/neutron.py:117
Instance 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20}}. _remove_deleted_instances_allocations /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539
2022-06-12 16:33:37.595 7 INFO nova.compute.manager [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Took 461.98 seconds to snapshot the instance on the hypervisor.
2022-06-12 16:36:16.459 7 DEBUG nova.compute.manager [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync for uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693
Lock "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2022-06-12 16:37:05.365 7 DEBUG nova.compute.resource_tracker [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Instance 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20}}. _remove_deleted_instances_allocations /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539
2022-06-12 09:42:32.687 7 INFO nova.compute.manager [req-e79e4177-4712-4795-91da-853bc524fac0 93fb420b3c604d4fae408b81135b58e9 ef940663426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
On Sun, Jun 12, 2022 at 3:36 PM Eugen Block <eblock@nde.ag> wrote:
Have you tried with debug logs? Have live snapshots worked before for other instances, or did they never work and all snapshots were "cold"?
Zitat von Parsa Aminian <p.aminian.server@gmail.com>:
Hi,
kolla-ansible Victoria version with Ceph backend, without volumes.
On Sun, Jun 12, 2022 at 12:45 PM Eugen Block <eblock@nde.ag> wrote:
Hi,
can you share more details about your environment? Which openstack version is it? What is the storage backend? In earlier releases there was an option:
#disable_libvirt_livesnapshot = false
but this option has been deprecated. If you're on an older version, that could explain it.
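If your version still has it, it sits in the [workarounds] section of nova.conf, something like this (just an illustration of where to look):

[workarounds]
# when set to true, the libvirt driver will not attempt live snapshots
disable_libvirt_livesnapshot = false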
Zitat von Parsa Aminian <p.aminian.server@gmail.com>:
When I snapshot an instance, the server goes away and is not reachable until the snapshot is complete. Here are the logs:

25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
2022-06-12 09:42:34.755 7 INFO nova.compute.manager [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Paused (Lifecycle Event)
2022-06-12 09:42:34.995 7 INFO nova.compute.manager [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state the instance has a pending task (image_pending_upload). Skip.
2022-06-12 09:42:57.749 7 INFO nova.compute.manager [-] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Stopped (Lifecycle Event)
2022-06-12 09:43:06.102 7 INFO nova.virt.libvirt.driver [req-e79e4177-4712-4795-91da-853bc524fac0 93fb420b3c604d4fae408b81135b58e9 ef940663426b4c87a751afaf13b758e0 - default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Snapshot extracted, beginning image upload
2022-06-12 09:43:08.778 7 INFO nova.compute.manager [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] VM Started (Lifecycle Event)
Hi,
kolla-ansible Victoria version with Ceph backend.
rbd info output:

rbd image '25c8d676-e20a-4238-a45c-d51daa62b941_disk':
        size 20 GiB in 5120 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: b69aaf907284da
        block_name_prefix: rbd_data.b69aaf907284da
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Fri May 20 00:04:47 2022
        access_timestamp: Sun Jun 12 16:26:02 2022
---------------
Also, the manual live snapshot seems to work correctly, without any error or downtime:

docker exec -u root -it ceph-mgr-cephosd01 rbd snap ls vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk
SNAPID  NAME       SIZE    PROTECTED  TIMESTAMP
   344  test-snap  20 GiB             Sun Jun 12 23:48:39 2022

Also, on the compute node's nova.conf, images_type is set to rbd.
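For reference, the rbd-related part of the [libvirt] section looks roughly like this (the pool and user are the ones visible in the log lines above; the remaining values are only illustrative placeholders, not a copy of my exact file):

[libvirt]
# ephemeral instance disks are created directly in Ceph
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <secret-uuid>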
On Sun, Jun 12, 2022 at 5:55 PM Eugen Block <eblock@nde.ag> wrote:
> You should respond to the list so other users can try to support you.
Could you share your whole nova.conf (only the uncommented options)? Is this option set in your env?

#snapshot_image_format = <None>

Also, when you manually created the snapshot, did you do it as the nova user on the compute node? If not, could you retry?
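A quick way to get only the uncommented options is something like the following (the path is just an example; with kolla-ansible the rendered file is usually under /etc/kolla/nova-compute/ on the host, or /etc/nova/nova.conf inside the nova_compute container):

# print all lines that are neither comments nor blank
grep -Ev '^[[:space:]]*(#|$)' /etc/kolla/nova-compute/nova.conf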
Zitat von Parsa Aminian <p.aminian.server@gmail.com>:
> Hi
> kolla-ansible victoria version with ceph backend .
> rbd info output :
I attached nova.conf.

On Mon, Jun 13, 2022 at 12:15 PM Eugen Block <eblock@nde.ag> wrote:
> Could you share your whole nova.conf (only the uncommented options)?
{}}], > >> > "gateway": {"address": "141.11.42.1", "type": "gateway", "version": 4, > >> > "meta": {}}, "ips": [{"address": "141.11.42.37", "type": "fixed", > >> > "version": 4, "meta": {}, "floating_ips": []}], "routes": [], > "version": > >> 4, > >> > "meta": {}}], "meta": {"injected": true, "tenant_id": > >> > "ef940663426b4c87a751afaf13b758e0", "mtu": 1500, "physical_network": > >> > "physnet1", "tunneled": false}}, "type": "ovs", "details": > >> {"connectivity": > >> > "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": > >> > "system", "bridge_name": "br-int"}, "devname": "tapaa2fdd7d-ad", > >> > "ovs_interfaceid": "aa2fdd7d-ad18-4890-ad57-14bf9888d2c1", > "qbh_params": > >> > null, "qbg_params": null, "active": true, "vnic_type": "normal", > >> "profile": > >> > {}, "preserve_on_delete": false, "meta": {}}] > >> > update_instance_cache_with_nw_info > >> > > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/network/neutron.py:117 > >> > nstance 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this > >> > compute host and has allocations in placement: {'resources': {'VCPU': > 1, > >> > 'MEMORY_MB': 1024, 'DISK_GB': 20}}. > _remove_deleted_instances_allocations > >> > > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539 > >> > 2022-06-12 16:33:37.595 7 INFO nova.compute.manager > >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 > >> 4dbffaa9c14e401c8c210e23ebe0ab7b > >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance: > >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Took 461.98 seconds to snapshot > the > >> > instance on the hypervisor. > >> > 2022-06-12 16:36:16.459 7 DEBUG nova.compute.manager > >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync > for > >> > uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states > >> > > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693 > >> > Lock "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by > >> > > >> > "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" > >> > :: waited 0.000s inner > >> > > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359 > >> > 2022-06-12 16:37:05.365 7 DEBUG nova.compute.resource_tracker > >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Instance > >> > 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this compute > >> host > >> > and has allocations in placement: {'resources': {'VCPU': 1, > 'MEMORY_MB': > >> > 1024, 'DISK_GB': 20}}. _remove_deleted_instances_allocations > >> > > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539 > >> > 2022-06-12 09:42:32.687 7 INFO nova.compute.manager > >> > [req-e79e4177-4712-4795-91da-853bc524fac0 > >> 93fb420b3c604d4fae408b81135b58e9 > >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance: > >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting > >> > > >> > On Sun, Jun 12, 2022 at 3:36 PM Eugen Block <eblock@nde.ag> wrote: > >> > > >> >> Have you tried with debug logs? Has it worked with live snapshots > >> >> before for other instances or did it never work and all snapshots > were > >> >> "cold"? 
> >> >> > >> >> Zitat von Parsa Aminian <p.aminian.server@gmail.com>: > >> >> > >> >> > Hi > >> >> > kolla-ansible victoria version with ceph backend without volume > >> >> > > >> >> > On Sun, Jun 12, 2022 at 12:45 PM Eugen Block <eblock@nde.ag> > wrote: > >> >> > > >> >> >> Hi, > >> >> >> > >> >> >> can you share more details about your environment? Which openstack > >> >> >> version is it? What is the storage backend? In earlier releases > there > >> >> >> was an option: > >> >> >> > >> >> >> #disable_libvirt_livesnapshot = false > >> >> >> > >> >> >> but this option has been deprecated. But if you're on an older > >> version > >> >> >> that could explain it. > >> >> >> > >> >> >> Zitat von Parsa Aminian <p.aminian.server@gmail.com>: > >> >> >> > >> >> >> > When I snapshot from the instance , server will gone away and > its > >> not > >> >> >> > reachable until the snapshot is complete here is the logs : > >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting > >> >> >> > 2022-06-12 09:42:34.755 7 INFO nova.compute.manager > >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance: > >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Paused (Lifecycle > Event) > >> >> >> > 2022-06-12 09:42:34.995 7 INFO nova.compute.manager > >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance: > >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state > the > >> >> >> instance > >> >> >> > has a pending task (image_pending_upload). Skip. > >> >> >> > 2022-06-12 09:42:57.749 7 INFO nova.compute.manager [-] > [instance: > >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Stopped (Lifecycle > Event) > >> >> >> > 2022-06-12 09:43:06.102 7 INFO nova.virt.libvirt.driver > >> >> >> > [req-e79e4177-4712-4795-91da-853bc524fac0 > >> >> >> 93fb420b3c604d4fae408b81135b58e9 > >> >> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance: > >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Snapshot extracted, > beginning > >> >> image > >> >> >> > upload > >> >> >> > 2022-06-12 09:43:08.778 7 INFO nova.compute.manager > >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance: > >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Started (Lifecycle > Event) > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> > >> > >> > >> > >> > > > > >
snapshot_image_format is not set in either the compute or the controller nova.conf.
As for the manual snapshot in my previous reply, I ran it directly from Ceph. Interestingly, from the nova_compute container there is an error about connecting to the Ceph cluster, and 'ceph -s' shows a similar error:

[root@R3SG4 ~]# docker exec -u root -it nova_compute rbd snap create vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap
2022-06-13 13:42:18.061 7f03dac23200 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-06-13 13:42:18.061 7f03dac23200 -1 AuthRegistry(0x561d7014a790) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling ceph
2022-06-13 13:42:18.065 7f03dac23200 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
rbd: couldn't connect to the cluster!

On Mon, Jun 13, 2022 at 12:15 PM Eugen Block <eblock@nde.ag> wrote:

> Could you share your whole nova.conf (only the uncommented options)?
> Is this option set in your env?
>
> #snapshot_image_format = <None>
>
> Also when you manually created the snapshot did you do it as the nova
> user on the compute node? If not, could you retry?
>
> [...]
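(A side note with a minimal sketch, assuming kolla's usual layout with nova.conf mounted at /etc/nova/nova.conf inside the container: the bare 'rbd' CLI defaults to the client.admin user, which is why it looks for ceph.client.admin.keyring, while nova itself connects with whatever rbd_user is set to in the [libvirt] section. Checking both the configured user and the keyrings actually present in the container in one go could look like this:)

docker exec -it nova_compute sh -c 'grep -E "images_type|images_rbd|rbd_user|rbd_secret_uuid" /etc/nova/nova.conf; ls -l /etc/ceph/'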
Hi,

I don't have a containerized deployment so I don't have anything to compare, but my compute nodes have a ceph.conf and all required keyrings in /etc/ceph/. Maybe someone else can confirm whether that is missing and should be present in a kolla-ansible deployment. In the meantime you could provide those files to the container and retry the live snapshot.

Zitat von Parsa Aminian <p.aminian.server@gmail.com>:

> [...]
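(A rough sketch of providing the files to the nova_compute container and retrying, assuming the cluster's ceph.conf and a suitable keyring exist on the host, that /etc/ceph exists inside the container, and that the cephx user is 'cinder'; with kolla-ansible the source files may live under /etc/kolla/nova-compute/ rather than /etc/ceph/, and the user name may differ:)

docker cp /etc/ceph/ceph.conf nova_compute:/etc/ceph/ceph.conf
docker cp /etc/ceph/ceph.client.cinder.keyring nova_compute:/etc/ceph/ceph.client.cinder.keyring
docker exec -it nova_compute rbd --id cinder snap create vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap
docker exec -it nova_compute rbd --id cinder snap ls vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk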
Yes, you are right. I added ceph.client.admin.keyring to the nova_compute container and the Ceph cluster error is gone. I also tried 'rbd snap create' from the nova_compute container and it works without any problem, and the VM stays up. Maybe the problem is related to Glance: my Glance backend is not on Ceph, I think that may be the reason.

On Mon, Jun 13, 2022 at 9:38 PM Eugen Block <eblock@nde.ag> wrote:

> [...]
"br-int", > "label": > >> >> > "External_Network", "subnets": [{"cidr": "141.11.42.0/24", "dns": > >> >> > [{"address": "8.8.8.8", "type": "dns", "version": 4, "meta": {}}, > >> >> > {"address": "217.218.127.127", "type": "dns", "version": 4, "meta": > >> {}}], > >> >> > "gateway": {"address": "141.11.42.1", "type": "gateway", > "version": 4, > >> >> > "meta": {}}, "ips": [{"address": "141.11.42.37", "type": "fixed", > >> >> > "version": 4, "meta": {}, "floating_ips": []}], "routes": [], > >> "version": > >> >> 4, > >> >> > "meta": {}}], "meta": {"injected": true, "tenant_id": > >> >> > "ef940663426b4c87a751afaf13b758e0", "mtu": 1500, > "physical_network": > >> >> > "physnet1", "tunneled": false}}, "type": "ovs", "details": > >> >> {"connectivity": > >> >> > "l2", "port_filter": true, "ovs_hybrid_plug": true, > "datapath_type": > >> >> > "system", "bridge_name": "br-int"}, "devname": "tapaa2fdd7d-ad", > >> >> > "ovs_interfaceid": "aa2fdd7d-ad18-4890-ad57-14bf9888d2c1", > >> "qbh_params": > >> >> > null, "qbg_params": null, "active": true, "vnic_type": "normal", > >> >> "profile": > >> >> > {}, "preserve_on_delete": false, "meta": {}}] > >> >> > update_instance_cache_with_nw_info > >> >> > > >> >> > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/network/neutron.py:117 > >> >> > nstance 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on > this > >> >> > compute host and has allocations in placement: {'resources': > {'VCPU': > >> 1, > >> >> > 'MEMORY_MB': 1024, 'DISK_GB': 20}}. > >> _remove_deleted_instances_allocations > >> >> > > >> >> > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539 > >> >> > 2022-06-12 16:33:37.595 7 INFO nova.compute.manager > >> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7 > >> >> 4dbffaa9c14e401c8c210e23ebe0ab7b > >> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance: > >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Took 461.98 seconds to > snapshot > >> the > >> >> > instance on the hypervisor. > >> >> > 2022-06-12 16:36:16.459 7 DEBUG nova.compute.manager > >> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering > sync > >> for > >> >> > uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states > >> >> > > >> >> > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693 > >> >> > Lock "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by > >> >> > > >> >> > >> > "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" > >> >> > :: waited 0.000s inner > >> >> > > >> >> > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359 > >> >> > 2022-06-12 16:37:05.365 7 DEBUG nova.compute.resource_tracker > >> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Instance > >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this > compute > >> >> host > >> >> > and has allocations in placement: {'resources': {'VCPU': 1, > >> 'MEMORY_MB': > >> >> > 1024, 'DISK_GB': 20}}. 
_remove_deleted_instances_allocations > >> >> > > >> >> > >> > /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539 > >> >> > 2022-06-12 09:42:32.687 7 INFO nova.compute.manager > >> >> > [req-e79e4177-4712-4795-91da-853bc524fac0 > >> >> 93fb420b3c604d4fae408b81135b58e9 > >> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance: > >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting > >> >> > > >> >> > On Sun, Jun 12, 2022 at 3:36 PM Eugen Block <eblock@nde.ag> wrote: > >> >> > > >> >> >> Have you tried with debug logs? Has it worked with live snapshots > >> >> >> before for other instances or did it never work and all snapshots > >> were > >> >> >> "cold"? > >> >> >> > >> >> >> Zitat von Parsa Aminian <p.aminian.server@gmail.com>: > >> >> >> > >> >> >> > Hi > >> >> >> > kolla-ansible victoria version with ceph backend without volume > >> >> >> > > >> >> >> > On Sun, Jun 12, 2022 at 12:45 PM Eugen Block <eblock@nde.ag> > >> wrote: > >> >> >> > > >> >> >> >> Hi, > >> >> >> >> > >> >> >> >> can you share more details about your environment? Which > openstack > >> >> >> >> version is it? What is the storage backend? In earlier releases > >> there > >> >> >> >> was an option: > >> >> >> >> > >> >> >> >> #disable_libvirt_livesnapshot = false > >> >> >> >> > >> >> >> >> but this option has been deprecated. But if you're on an older > >> >> version > >> >> >> >> that could explain it. > >> >> >> >> > >> >> >> >> Zitat von Parsa Aminian <p.aminian.server@gmail.com>: > >> >> >> >> > >> >> >> >> > When I snapshot from the instance , server will gone away and > >> its > >> >> not > >> >> >> >> > reachable until the snapshot is complete here is the logs : > >> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting > >> >> >> >> > 2022-06-12 09:42:34.755 7 INFO nova.compute.manager > >> >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] > [instance: > >> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Paused (Lifecycle > >> Event) > >> >> >> >> > 2022-06-12 09:42:34.995 7 INFO nova.compute.manager > >> >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] > [instance: > >> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state > >> the > >> >> >> >> instance > >> >> >> >> > has a pending task (image_pending_upload). Skip. > >> >> >> >> > 2022-06-12 09:42:57.749 7 INFO nova.compute.manager [-] > >> [instance: > >> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Stopped (Lifecycle > >> Event) > >> >> >> >> > 2022-06-12 09:43:06.102 7 INFO nova.virt.libvirt.driver > >> >> >> >> > [req-e79e4177-4712-4795-91da-853bc524fac0 > >> >> >> >> 93fb420b3c604d4fae408b81135b58e9 > >> >> >> >> > ef940663426b4c87a751afaf13b758e0 - default default] > [instance: > >> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Snapshot extracted, > >> beginning > >> >> >> image > >> >> >> >> > upload > >> >> >> >> > 2022-06-12 09:43:08.778 7 INFO nova.compute.manager > >> >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] > [instance: > >> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Started (Lifecycle > >> Event) > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> > >> > >> > >> > >> > > > > >
You don't need the admin keyring on a compute node, but for tests it can help. Basically it depends on your ceph auth config which keyring you need on the openstack nodes.
If you already have ceph configured, why don't you use it for all services? Of course live snapshots are not possible if your glance images are not within ceph (I don't think you mentioned that before).

Zitat von Parsa Aminian <p.aminian.server@gmail.com>:

> Yes, you are right. I added ceph.client.admin.keyring to the nova compute
> container and the ceph cluster error is gone. I also tried rbd snap create
> from the nova compute container and it works without problem, and the VM
> stays up.
> Maybe the problem is related to glance: my glance backend is not on ceph,
> I think that may be the reason.
>
> On Mon, Jun 13, 2022 at 9:38 PM Eugen Block <eblock@nde.ag> wrote:
>
>> Hi,
>>
>> I don't have a containerized deployment so I don't have anything to
>> compare, but my compute nodes have a ceph.conf and all required
>> keyrings in /etc/ceph/, maybe someone else can confirm that it's
>> missing and should be present in a kolla-ansible deployment. In the
>> meantime you could provide those files to the container and retry the
>> live snapshot.
>>
>> Zitat von Parsa Aminian <p.aminian.server@gmail.com>:
>>
>>> snapshot_image_format doesn't exist on compute or controller nova.conf.
>>> For the manual snapshot in the previous reply I ran it directly from ceph,
>>> but interestingly, from nova compute there is an error about connecting
>>> to the ceph cluster. ceph -s shows a similar error:
>>>
>>> [root@R3SG4 ~]# docker exec -u root -it nova_compute rbd snap create vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap
>>> 2022-06-13 13:42:18.061 7f03dac23200 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
>>> 2022-06-13 13:42:18.061 7f03dac23200 -1 AuthRegistry(0x561d7014a790) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling ceph
>>> 2022-06-13 13:42:18.065 7f03dac23200 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
>>> rbd: couldn't connect to the cluster!
>>>
>>> On Mon, Jun 13, 2022 at 12:15 PM Eugen Block <eblock@nde.ag> wrote:
>>>
>>>> Could you share your whole nova.conf (only the uncommented options)?
>>>> Is this option set in your env?
>>>>
>>>> #snapshot_image_format = <None>
>>>>
>>>> Also, when you manually created the snapshot, did you do it as the nova
>>>> user on the compute node? If not, could you retry?
>>>>
>>>> Zitat von Parsa Aminian <p.aminian.server@gmail.com>:
>>>>
>>>>> Hi
>>>>> kolla-ansible victoria version with ceph backend.
>>>>> rbd info output:
>>>>> rbd image '25c8d676-e20a-4238-a45c-d51daa62b941_disk':
>>>>>     size 20 GiB in 5120 objects
>>>>>     order 22 (4 MiB objects)
>>>>>     snapshot_count: 0
>>>>>     id: b69aaf907284da
>>>>>     block_name_prefix: rbd_data.b69aaf907284da
>>>>>     format: 2
>>>>>     features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>>>>>     op_features:
>>>>>     flags:
>>>>>     create_timestamp: Fri May 20 00:04:47 2022
>>>>>     access_timestamp: Sun Jun 12 16:26:02 2022
>>>>> ---------------
>>>>> Also, a manual live snapshot seems to work correctly, without any error or downtime:
>>>>> docker exec -u root -it ceph-mgr-cephosd01 rbd snap ls vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk
>>>>> SNAPID  NAME       SIZE    PROTECTED  TIMESTAMP
>>>>>    344  test-snap  20 GiB             Sun Jun 12 23:48:39 2022
>>>>>
>>>>> Also, on the compute node's nova.conf, images_type is set to rbd.
>>>>>
>>>>> On Sun, Jun 12, 2022 at 5:55 PM Eugen Block <eblock@nde.ag> wrote:
>>>>> [...]
Sorry for my mistake, yes you are right. Thanks for your help and cooperation.

On Tue, Jun 14, 2022 at 5:28 PM Eugen Block <eblock@nde.ag> wrote:

> You don't need the admin keyring on a compute node, but for tests it
> can help. Basically it depends on your ceph auth config which keyring
> you need on the openstack nodes.
> If you already have ceph configured, why don't you use it for all
> services? Of course live snapshots are not possible if your glance
> images are not within ceph (I don't think you mentioned that before).
> [...]
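For context on the conclusion reached here: when glance is not backed by the same ceph cluster, nova's libvirt driver cannot snapshot RBD-to-RBD and instead extracts the disk with qemu-img convert and uploads the result to glance, which matches the behaviour reported in this thread. A minimal sketch of the configuration involved in the direct path follows; the pool names (images, vms), user names (glance, cinder) and the exact set of required options are assumptions for illustration, not taken from this deployment's files:

# glance-api.conf (sketch)
[DEFAULT]
show_image_direct_url = True
# depending on the release, multiple locations may also need to be enabled:
# show_multiple_locations = True

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# nova.conf on the compute node (sketch)
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>

With glance and nova on the same cluster, an instance snapshot can be taken as an RBD snapshot/clone in the images pool instead of a full qemu-img copy, avoiding the long extract-and-upload phase during which this instance was seen paused.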
participants (2)
- Eugen Block
- Parsa Aminian