[Openstack] nova compute repeating logs

sonia verma soniaverma9727 at gmail.com
Fri May 16 11:00:54 UTC 2014


Hi

I'm trying to boot a VM from my controller node (the OpenStack dashboard) onto a
compute node, but it gets stuck in the spawning state.
I can see the VM's interface on the compute node, but the status is still
spawning even after 10-15 minutes.
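
For reference, this is how I am checking the instance state from the
controller (a minimal sketch; I am assuming
961b0fcd-60e3-488f-93df-5b852d93ede2 is the stuck instance, since that UUID
appears in the compute logs below):

    # confirm every nova service (scheduler, conductor, compute) is up and checking in
    nova-manage service list
    # show the instance's status and task_state (I expect task_state stuck at spawning)
    nova show 961b0fcd-60e3-488f-93df-5b852d93ede2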

Below are the nova scheduler logs:

2014-05-16 16:02:16.581 13421 DEBUG nova.openstack.common.periodic_task [-] Running periodic task SchedulerManager._expire_reservations run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-16 16:02:16.588 13421 DEBUG nova.openstack.common.periodic_task [-] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-16 16:02:16.589 13421 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-05-16 16:03:16.593 13421 DEBUG nova.openstack.common.periodic_task [-] Running periodic task SchedulerManager._expire_reservations run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-16 16:03:16.600 13421 DEBUG nova.openstack.common.periodic_task [-] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-16 16:03:16.601 13421 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:130
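
These entries look like only the periodic tasks, so to see whether the
scheduler ever handled (or errored on) the boot request I plan to grep its log
directly (a sketch; /opt/stack/logs/screen-n-sch.log is a guess at the devstack
screen-log path, adjust to wherever your scheduler actually logs):

    # look for any error or traceback around the time of the boot request
    grep -E "ERROR|Traceback" /opt/stack/logs/screen-n-sch.log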

Also, the nova-compute logs on the compute node are repeating
continuously. Below are the logs:

2014-05-16 05:34:19.503 26935 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176
2014-05-16 05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176
2014-05-16 05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176
2014-05-16 05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore "compute_resources" lock /opt/stack/nova/nova/openstack/common/lockutils.py:166
2014-05-16 05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "update_available_resource" inner /opt/stack/nova/nova/openstack/common/lockutils.py:245
2014-05-16 05:34:19.505 26935 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-05-16 05:34:19.506 26935 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /opt/stack/nova/nova/virt/libvirt/driver.py:4865
2014-05-16 05:34:19.566 26935 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk execute /opt/stack/nova/nova/openstack/common/processutils.py:147
2014-05-16 05:34:19.612 26935 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk execute /opt/stack/nova/nova/openstack/common/processutils.py:147
2014-05-16 05:34:19.703 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 5565 _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:388
2014-05-16 05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 95 _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:389
2014-05-16 05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 24 _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:394
2014-05-16 05:34:19.706 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [] _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:401
2014-05-16 05:34:19.708 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2014-05-16 05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 7435553a261b4f3eb61f985017441333 multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
2014-05-16 05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is f2dd9f9fc517406bbe82366085de5523. _add_unique_id /opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2014-05-16 05:34:19.716 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2014-05-16 05:34:19.717 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 965b77a6b9da47c884bd22a2d47de23c multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
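
All of the above looks like the normal once-a-minute resource audit rather
than the spawn itself, so on the compute node I also plan to filter the log
down to this instance and ask libvirt directly (again a sketch; the log path
is a guess for a devstack setup):

    # keep only the lines about this instance, dropping the periodic-task noise
    grep 961b0fcd-60e3-488f-93df-5b852d93ede2 /opt/stack/logs/screen-n-cpu.log | grep -v periodic_task
    # check whether the libvirt domain was ever defined or started
    virsh list --all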
Please help me with this.

Thanks
Sonia