snapshot problem

Parsa Aminian p.aminian.server at gmail.com
Mon Jun 13 09:59:41 UTC 2022


I attached nova.conf; its contents are pasted at the end of this message.

On Mon, Jun 13, 2022 at 12:15 PM Eugen Block <eblock at nde.ag> wrote:

> Could you share your whole nova.conf (only the uncommented options)?
> Is this option set in your env?
>
> #snapshot_image_format = <None>
>
> Also when you manually created the snapshot did you do it as the nova
> user on the compute node? If not, could you retry?
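>
> For example, something along these lines would be the closest reproduction
> of what nova does (the container name nova_compute is an assumption for a
> kolla deployment, and the cephx user cinder is taken from the qemu-img
> command in your log; if the rbd CLI is not available in that container,
> run the same command from any host that has the cinder keyring):
>
> docker exec -u nova nova_compute rbd --id cinder --conf /etc/ceph/ceph.conf \
>   snap create vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk@nova-test-snap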
>
>
> Zitat von Parsa Aminian <p.aminian.server at gmail.com>:
>
> > Hi
> > kolla-ansible Victoria version with a Ceph backend.
> > rbd info output:
> > rbd image '25c8d676-e20a-4238-a45c-d51daa62b941_disk':
> >         size 20 GiB in 5120 objects
> >         order 22 (4 MiB objects)
> >         snapshot_count: 0
> >         id: b69aaf907284da
> >         block_name_prefix: rbd_data.b69aaf907284da
> >         format: 2
> >         features: layering, exclusive-lock, object-map, fast-diff,
> > deep-flatten
> >         op_features:
> >         flags:
> >         create_timestamp: Fri May 20 00:04:47 2022
> >         access_timestamp: Sun Jun 12 16:26:02 2022
> > ---------------
> > Also, a live snapshot seems to work correctly without any error or any
> > downtime:
> > docker  exec -u root -it ceph-mgr-cephosd01 rbd snap ls
> > vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk
> > SNAPID  NAME       SIZE    PROTECTED  TIMESTAMP
> >    344  test-snap  20 GiB             Sun Jun 12 23:48:39 2022
> >
> > Also, in the compute node's nova.conf, images_type is set to rbd.
> >
> > On Sun, Jun 12, 2022 at 5:55 PM Eugen Block <eblock at nde.ag> wrote:
> >
> >> You should respond to the list so other users can try to support you.
> >>
> >> So nova is trying to live snapshot the instance:
> >>
> >> > 2022-06-12 16:25:55.603 7 INFO nova.compute.manager
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> > 4dbffaa9c14e401c8c210e23ebe0ab7b ef940663426b4c87a751afaf13b758e0 -
> >> > default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941]
> >> > instance snapshotting
> >> > [...] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Beginning
> >> > live snapshot process
> >>
> >> But I don't see any 'rbd snap create' command. Either the rbd image
> >> doesn't support it or there is some setting to keep all rbd images
> >> "flat". Can you check any relevant configs you might have in nova?
> >> Also can you show the output of 'rbd info
> >> <pool>/25c8d676-e20a-4238-a45c-d51daa62b941_disk' ? Then to test if
> >> the underlying rbd functions work as expected you could try to create
> >> a live snapshot manually:
> >>
> >> rbd snap create <pool>/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap
> >>
> >> And paste any relevant output here.
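> >>
> >> If the manual snapshot succeeds you can verify and remove it again like
> >> this (test-snap is just an example name):
> >>
> >> rbd snap ls <pool>/25c8d676-e20a-4238-a45c-d51daa62b941_disk
> >> rbd snap rm <pool>/25c8d676-e20a-4238-a45c-d51daa62b941_disk@test-snap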
> >>
> >> Zitat von Parsa Aminian <p.aminian.server at gmail.com>:
> >>
> >> > It's not working for any instances and all of them are paused. I enabled
> >> > debug logs; please check the logs:
> >> >
> >> > 2022-06-12 16:16:13.478 7 DEBUG nova.compute.manager
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync
> for
> >> > uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693
> >> > 2022-06-12 16:16:13.506 7 DEBUG oslo_concurrency.lockutils [-] Lock
> >> > "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by
> >> >
> >>
> "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync"
> >> > :: waited 0.000s inner
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
> >> > 2022-06-12 16:16:43.562 7 DEBUG nova.compute.resource_tracker
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Instance
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this compute
> >> host
> >> > and has allocations in placement: {'resources': {'VCPU': 1,
> 'MEMORY_MB':
> >> > 1024, 'DISK_GB': 20}}. _remove_deleted_instances_allocations
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539
> >> > 2022-06-12 16:25:55.104 7 DEBUG nova.compute.manager
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> 4dbffaa9c14e401c8c210e23ebe0ab7b
> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state _get_power_state
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
> >> > 2022-06-12 16:25:55.603 7 INFO nova.compute.manager
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> 4dbffaa9c14e401c8c210e23ebe0ab7b
> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
> >> > 63426b4c87a751afaf13b758e0 - default default] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Beginning live snapshot process
> >> > default default] Lazy-loading 'pci_devices' on Instance uuid
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941 obj_load_attr
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/objects/instance.py:1101
> >> > 2022-06-12 16:25:57.250 7 DEBUG nova.objects.instance
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> 4dbffaa9c14e401c8c210e23ebe0ab7b
> >> > ef940663426b4c87a751afaf13b758e0 - default default] Lazy-loading
> >> > 'pci_devices' on Instance uuid 25c8d676-e20a-4238-a45c-d51daa62b941
> >> > obj_load_attr
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/objects/instance.py:1101
> >> > 2022-06-12 16:25:57.317 7 DEBUG nova.virt.driver [-] Emitting event
> >> > <LifecycleEvent: 1655034957.3158934,
> 25c8d676-e20a-4238-a45c-d51daa62b941
> >> > => Paused> emit_event
> >> >
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py:1704
> >> > 2022-06-12 16:25:57.318 7 INFO nova.compute.manager [-] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Paused (Lifecycle Event)
> >> > 2022-06-12 16:25:57.389 7 DEBUG nova.compute.manager
> >> > [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state _get_power_state
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
> >> > 2022-06-12 16:25:57.395 7 DEBUG nova.compute.manager
> >> > [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Synchronizing instance power
> state
> >> > after lifecycle event "Paused"; current vm_state: active, current
> >> > task_state: image_pending_upload, current DB power_state: 1, VM
> >> > power_state: 3 handle_lifecycle_event
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1299
> >> > 2022-06-12 16:25:57.487 7 INFO nova.compute.manager
> >> > [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state the
> >> instance
> >> > has a pending task (image_pending_upload). Skip.
> >> > 2022-06-12 16:26:02.039 7 DEBUG oslo_concurrency.processutils
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> 4dbffaa9c14e401c8c210e23ebe0ab7b
> >> > ef940663426b4c87a751afaf13b758e0 - default default] Running cmd
> >> > (subprocess): qemu-img convert -t none -O raw -f raw
> >> >
> >>
> rbd:vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk:id=cinder:conf=/etc/ceph/ceph.conf
> >> >
> >>
> /var/lib/nova/instances/snapshots/tmpv21b_i59/8717dec4c99c4ef7bac752e2a48690ad
> >> > execute
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384
> >> > 2022-06-12 16:26:17.075 7 DEBUG nova.virt.driver [-] Emitting event
> >> > <LifecycleEvent: 1655034962.0316682,
> 25c8d676-e20a-4238-a45c-d51daa62b941
> >> > => Stopped> emit_event
> >> >
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py:1704
> >> > INFO nova.compute.manager [-] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Stopped (Lifecycle Event)
> >> > DEBUG nova.compute.manager [req-f9f8cbf5-6208-4dca-aca6-48dee87f38fa
> - -
> >> -
> >> > - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state
> >> > _get_power_state
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
> >> > DEBUG nova.compute.manager [req-f9f8cbf5-6208-4dca-aca6-48dee87f38fa
> - -
> >> -
> >> > - -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Synchronizing
> >> > instance power state after lifecycle event "Stopped"; current
> vm_state:
> >> > active, current task_state: image_pending_upload, current DB
> power_state:
> >> > 1, VM power_state: 4 handle_lifecycle_event
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1299
> >> > INFO nova.compute.manager [req-f9f8cbf5-6208-4dca-aca6-48dee87f38fa -
> -
> >> - -
> >> > -] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] During
> >> sync_power_state
> >> > the instance has a pending task (image_pending_upload). Skip.
> >> > 2022-06-12 16:26:18.539 7 DEBUG nova.compute.manager
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync
> for
> >> > uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693
> >> > 2022-06-12 16:26:18.565 7 DEBUG oslo_concurrency.lockutils [-] Lock
> >> > "25c8d676-e20a-4238
> >> > -a45c-d51daa62b941" acquired by
> >> >
> >>
> "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync"
> >> > :: waited 0.000s inner
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
> >> > 2022-06-12 16:26:18.566 7 INFO nova.compute.manager [-] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state the
> >> instance
> >> > has a pending task (image_pending_upload). Skip.
> >> > 2022-06-12 16:26:18.566 7 DEBUG oslo_concurrency.lockutils [-] Lock
> >> > "25c8d676-e20a-4238-a45c-d51daa62b941" released by
> >> >
> >>
> "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync"
> >> > :: held 0.001s inner
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
> >> > 2022-06-12 16:26:25.769 7 DEBUG oslo_concurrency.processutils
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> 4dbffaa9c14e401c8c210e23ebe0ab7b
> >> > ef940663426b4c87a751afaf13b758e0 - default default] CMD "qemu-img
> convert
> >> > -t none -O raw -f raw
> >> >
> >>
> rbd:vms/25c8d676-e20a-4238-a45c-d51daa62b941_disk:id=cinder:conf=/etc/ceph/ceph.conf
> >> >
> >>
> /var/lib/nova/instances/snapshots/tmpv21b_i59/8717dec4c99c4ef7bac752e2a48690ad"
> >> > returned: 0 in 23.730s execute
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423
> >> > default default] [instance: 25c8d676-e20a-4238-a45c-d51daa62b941]
> >> Snapshot
> >> > extracted, beginning image upload
> >> > 2022-06-12 16:26:27.981 7 DEBUG nova.virt.driver
> >> > [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] Emitting event
> >> > <LifecycleEvent: 1655034987.9807608,
> 25c8d676-e20a-4238-a45c-d51daa62b941
> >> > => Started> emit_event
> >> >
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py:1704
> >> > 2022-06-12 16:26:27.983 7 INFO nova.compute.manager
> >> > [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Started (Lifecycle Event)
> >> > [instance: 25c8d676-e20a-4238-a45c-d51daa62b941] Checking state
> >> > _get_power_state
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1569
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Synchronizing instance power
> state
> >> > after lifecycle event "Started"; current vm_state: active, current
> >> > task_state: image_pending_upload, current DB power_state: 1, VM
> >> > power_state: 1 handle_lifecycle_event
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:1299
> >> > 2022-06-12 16:26:28.173 7 INFO nova.compute.manager
> >> > [req-40444d74-f2fa-4569-87dd-375139938e81 - - - - -] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Resumed (Lifecycle Event)
> >> > 2022-06-12 16:29:00.859 7 DEBUG oslo_concurrency.lockutils
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Acquired lock
> >> > "refresh_cache-25c8d676-e20a-4238-a45c-d51daa62b941" lock
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Forcefully refreshing network
> info
> >> > cache for instance _get_instance_nw_info
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/network/neutron.py:1833
> >> > 2022-06-12 16:29:03.278 7 DEBUG nova.network.neutron
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Updating instance_info_cache
> with
> >> > network_info: [{"id": "aa2fdd7d-ad18-4890-ad57-14bf9888d2c1",
> "address":
> >> > "fa:16:3e:ca:00:d9", "network": {"id":
> >> > "b86c8304-a9bd-4b39-b7fc-f70ffe76f2a8", "bridge": "br-int", "label":
> >> > "External_Network", "subnets": [{"cidr": "141.11.42.0/24", "dns":
> >> > [{"address": "8.8.8.8", "type": "dns", "version": 4, "meta": {}},
> >> > {"address": "217.218.127.127", "type": "dns", "version": 4, "meta":
> {}}],
> >> > "gateway": {"address": "141.11.42.1", "type": "gateway", "version": 4,
> >> > "meta": {}}, "ips": [{"address": "141.11.42.37", "type": "fixed",
> >> > "version": 4, "meta": {}, "floating_ips": []}], "routes": [],
> "version":
> >> 4,
> >> > "meta": {}}], "meta": {"injected": true, "tenant_id":
> >> > "ef940663426b4c87a751afaf13b758e0", "mtu": 1500, "physical_network":
> >> > "physnet1", "tunneled": false}}, "type": "ovs", "details":
> >> {"connectivity":
> >> > "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type":
> >> > "system", "bridge_name": "br-int"}, "devname": "tapaa2fdd7d-ad",
> >> > "ovs_interfaceid": "aa2fdd7d-ad18-4890-ad57-14bf9888d2c1",
> "qbh_params":
> >> > null, "qbg_params": null, "active": true, "vnic_type": "normal",
> >> "profile":
> >> > {}, "preserve_on_delete": false, "meta": {}}]
> >> > update_instance_cache_with_nw_info
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/network/neutron.py:117
> >> > Instance 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this
> >> > compute host and has allocations in placement: {'resources': {'VCPU':
> 1,
> >> > 'MEMORY_MB': 1024, 'DISK_GB': 20}}.
> _remove_deleted_instances_allocations
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539
> >> > 2022-06-12 16:33:37.595 7 INFO nova.compute.manager
> >> > [req-5ecfdf74-7cf3-481a-aa12-140deae202f7
> >> 4dbffaa9c14e401c8c210e23ebe0ab7b
> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Took 461.98 seconds to snapshot
> the
> >> > instance on the hypervisor.
> >> > 2022-06-12 16:36:16.459 7 DEBUG nova.compute.manager
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Triggering sync
> for
> >> > uuid 25c8d676-e20a-4238-a45c-d51daa62b941 _sync_power_states
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py:9693
> >> > Lock "25c8d676-e20a-4238-a45c-d51daa62b941" acquired by
> >> >
> >>
> "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync"
> >> > :: waited 0.000s inner
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
> >> > 2022-06-12 16:37:05.365 7 DEBUG nova.compute.resource_tracker
> >> > [req-2ecf34c3-72e7-4f33-89cb-9b250cd6d223 - - - - -] Instance
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941 actively managed on this compute
> >> host
> >> > and has allocations in placement: {'resources': {'VCPU': 1,
> 'MEMORY_MB':
> >> > 1024, 'DISK_GB': 20}}. _remove_deleted_instances_allocations
> >> >
> >>
> /var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1539
> >> > 2022-06-12 09:42:32.687 7 INFO nova.compute.manager
> >> > [req-e79e4177-4712-4795-91da-853bc524fac0
> >> 93fb420b3c604d4fae408b81135b58e9
> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance:
> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
> >> >
> >> > On Sun, Jun 12, 2022 at 3:36 PM Eugen Block <eblock at nde.ag> wrote:
> >> >
> >> >> Have you tried with debug logs? Has it worked with live snapshots
> >> >> before for other instances, or did it never work and all snapshots were
> >> >> "cold"?
> >> >>
> >> >> Zitat von Parsa Aminian <p.aminian.server at gmail.com>:
> >> >>
> >> >> > Hi
> >> >> > kolla-ansible Victoria version with a Ceph backend, without volumes.
> >> >> >
> >> >> > On Sun, Jun 12, 2022 at 12:45 PM Eugen Block <eblock at nde.ag>
> wrote:
> >> >> >
> >> >> >> Hi,
> >> >> >>
> >> >> >> can you share more details about your environment? Which openstack
> >> >> >> version is it? What is the storage backend? In earlier releases
> there
> >> >> >> was an option:
> >> >> >>
> >> >> >> #disable_libvirt_livesnapshot = false
> >> >> >>
> >> >> >> but this option has been deprecated. If you're on an older version,
> >> >> >> that could explain it.
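> >> >> >>
> >> >> >> If I remember correctly it was a [workarounds] option, i.e. something
> >> >> >> like this in nova.conf:
> >> >> >>
> >> >> >> [workarounds]
> >> >> >> disable_libvirt_livesnapshot = false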
> >> >> >>
> >> >> >> Zitat von Parsa Aminian <p.aminian.server at gmail.com>:
> >> >> >>
> >> >> >> > When I snapshot the instance, the server goes away and is not
> >> >> >> > reachable until the snapshot is complete. Here are the logs:
> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] instance snapshotting
> >> >> >> > 2022-06-12 09:42:34.755 7 INFO nova.compute.manager
> >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance:
> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Paused (Lifecycle
> Event)
> >> >> >> > 2022-06-12 09:42:34.995 7 INFO nova.compute.manager
> >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance:
> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] During sync_power_state
> the
> >> >> >> instance
> >> >> >> > has a pending task (image_pending_upload). Skip.
> >> >> >> > 2022-06-12 09:42:57.749 7 INFO nova.compute.manager [-]
> [instance:
> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Stopped (Lifecycle
> Event)
> >> >> >> > 2022-06-12 09:43:06.102 7 INFO nova.virt.libvirt.driver
> >> >> >> > [req-e79e4177-4712-4795-91da-853bc524fac0
> >> >> >> 93fb420b3c604d4fae408b81135b58e9
> >> >> >> > ef940663426b4c87a751afaf13b758e0 - default default] [instance:
> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] Snapshot extracted,
> beginning
> >> >> image
> >> >> >> > upload
> >> >> >> > 2022-06-12 09:43:08.778 7 INFO nova.compute.manager
> >> >> >> > [req-786946b1-3d22-489c-bf4d-8b1375b09ecb - - - - -] [instance:
> >> >> >> > 25c8d676-e20a-4238-a45c-d51daa62b941] VM Started (Lifecycle
> Event)
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >>
> >>
> >>
> >>
> >>
>
>
>
>
>
-------------- next part --------------
For nova.conf: yes, I changed IPs and passwords for security reasons:
[root@R3SG4 ~]# cat /etc/kolla/nova-compute/nova.conf
[DEFAULT]
debug = False
log_dir = /var/log/kolla/nova
state_path = /var/lib/nova
allow_resize_to_same_host = true
compute_driver = libvirt.LibvirtDriver
my_ip = ip
instance_usage_audit = True
instance_usage_audit_period = hour
transport_url = rabbit://openstack:iyuiyiuyu@ip:5672//
injected_network_template = /usr/lib/python3.6/site-packages/nova/virt/interfaces.template
flat_injected = true
force_config_drive = true
config_drive_cdrom = True
enable_instance_password = True
dhcp_domain =
resume_guests_state_on_host_boot = true
reclaim_instance_interval = 86400
cpu_allocation_ratio = 3.5
ram_allocation_ratio = 1.0
disk_allocation_ratio = 1.0
resize_confirm_window = 10

[conductor]
workers = 5

[vnc]
novncproxy_host = ip
novncproxy_port = 6080
server_listen = ip
server_proxyclient_address = ip
novncproxy_base_url = http://ip:6080/vnc_auto.html

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[glance]
api_servers = http://ip:9292
cafile =
num_retries = 3
debug = False

[neutron]
metadata_proxy_shared_secret = jklhiuy
service_metadata_proxy = true
auth_url = http://ip:35357
auth_type = password
cafile =
project_domain_name = Default
user_domain_id = default
project_name = service
username = neutron
password = iupouopiupou
region_name = RegionThree
valid_interfaces = internal

[libvirt]
connection_uri = qemu+tcp://ip/system
live_migration_inbound_addr = ip
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
disk_cachemodes = network=writeback
hw_disk_discard = unmap
rbd_secret_uuid = 987798
virt_type = kvm
inject_partition = -1
inject_password = true
cpu_mode = custom
cpu_model = Westmere

[upgrade_levels]
compute = auto

[oslo_messaging_notifications]
transport_url = rabbit://openstack:lkjhlkhlk@ip:5672//
driver = messagingv2
topics = notifications

[privsep_entrypoint]
helper_command = sudo nova-rootwrap /etc/nova/rootwrap.conf privsep-helper --config-file /etc/nova/nova.conf

[guestfs]
debug = False

[placement]
auth_type = password
auth_url = http://ip:35357
username = placement
password = oiuoiuoiu
user_domain_name = Default
project_name = service
project_domain_name = Default
region_name = RegionThree
cafile =
valid_interfaces = internal

[notifications]
notify_on_state_change = vm_and_task_state
notification_format = unversioned

[keystone_authtoken]
www_authenticate_uri = http://ip:5000
auth_url = http://ip:35357

-------------

