[openstack-dev] VM stuck in spawning state

abhishek jain ashujain9727 at gmail.com
Sun May 11 17:21:38 UTC 2014


Hi

I have installed OpenStack using devstack. I'm using a multi-node setup with
one controller node and one compute node.
I'm able to boot a VM onto the controller node, but I'm not able to boot a VM
onto the compute node from the OpenStack dashboard.
Moreover, I'm not seeing any errors from the nova-compute service on either
the controller node or the compute node. The neutron service is also enabled
in my setup. Whenever I boot a VM onto the compute node from the dashboard,
it gets stuck in the spawning state.
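Since the scheduler log below shows a host being chosen, the hang happens after scheduling, so the next place to look is the nova-compute log on the compute node itself. A minimal sketch of how one might search it, assuming devstack's usual log layout (e.g. /opt/stack/logs/screen-n-cpu.log; the path and the `find_spawn_errors` helper are assumptions, not part of any OpenStack tool):

```shell
#!/bin/sh
# find_spawn_errors LOGFILE UUID
# Print every log line mentioning the stuck instance, then any ERROR/TRACE
# lines -- the usual first step when a VM hangs in the "spawning" state.
# On devstack the compute log is often /opt/stack/logs/screen-n-cpu.log,
# but that location is an assumption; adjust for your install.
find_spawn_errors() {
    logfile=$1
    uuid=$2
    echo "--- lines mentioning instance $uuid ---"
    grep "$uuid" "$logfile" || echo "(none found)"
    echo "--- ERROR/TRACE lines ---"
    grep -iE "error|trace" "$logfile" || echo "(none found)"
}
```

For the instance in the log below, this would be invoked on the compute node as `find_spawn_errors /opt/stack/logs/screen-n-cpu.log 3946ee7f-d4a0-4c75-8ecc-0c32ec739c14`.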

The nova scheduler logs of the controller node are as below:


2014-05-11 18:48:22.547 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._expire_reservations
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:48:22.554 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._run_periodic_tasks
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:48:22.554 20123 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-05-11 18:48:51.370 INFO nova.scheduler.filter_scheduler
[req-e36b289c-510c-4f47-9500-73f9a91c5f0e admin admin] Attempting to build
1 instance(s) uuids: [u'3946ee7f-d4a0-4c75-8ecc-0c32ec739c14']
2014-05-11 18:48:51.371 DEBUG nova.scheduler.filter_scheduler
[req-e36b289c-510c-4f47-9500-73f9a91c5f0e admin admin] Request Spec:
{u'num_instances': 1, u'block_device_mapping': [{u'instance_uuid':
u'3946ee7f-d4a0-4c75-8ecc-0c32ec739c14', u'guest_format': None,
u'boot_index': 0, u'delete_on_termination': True, u'no_device': None,
u'connection_info': None, u'snapshot_id': None, u'device_name': None,
u'disk_bus': None, u'image_id': u'041b934c-5d1e-4bf6-aad0-4872a2fe4b40',
u'source_type': u'image', u'device_type': u'disk', u'volume_id': None,
u'destination_type': u'local', u'volume_size': None}], u'image':
{u'status': u'active', u'name': u'cirros-0.3.0-i386-disk.img', u'deleted':
False, u'container_format': u'ami', u'created_at':
u'2014-05-11T13:11:31.000000', u'disk_format': u'ami', u'updated_at':
u'2014-05-11T13:11:32.000000', u'id':
u'041b934c-5d1e-4bf6-aad0-4872a2fe4b40', u'owner':
u'a0baea45e9104b8d82a1b32fff4762b1', u'min_ram': 0, u'checksum':
u'90169ba6f09b5906a7f0755bd00bf2c3', u'min_disk': 0, u'is_public': True,
u'deleted_at': None, u'properties': {}, u'size': 9159168},
u'instance_type': {u'root_gb': 1, u'name': u'm1.tiny', u'ephemeral_gb': 0,
u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0,
u'rxtx_factor': 1.0, u'flavorid': u'1', u'vcpu_weight': None, u'id': 2},
u'instance_properties': {u'vm_state': u'building', u'availability_zone':
u'availability-zone', u'terminated_at': None, u'ephemeral_gb': 0,
u'instance_type_id': 2, u'user_data': None, u'cleaned': False, u'vm_mode':
None, u'deleted_at': None, u'reservation_id': u'r-9ovpg3py', u'id': 3,
u'security_groups': [], u'disable_terminate': False, u'display_name':
u'admin-private3', u'uuid': u'3946ee7f-d4a0-4c75-8ecc-0c32ec739c14',
u'default_swap_device': None, u'info_cache': {u'instance_uuid':
u'3946ee7f-d4a0-4c75-8ecc-0c32ec739c14', u'deleted': False, u'created_at':
u'2014-05-11T13:18:50.000000', u'updated_at': None, u'network_info': [],
u'deleted_at': None}, u'hostname': u'admin-private3', u'launched_on': None,
u'display_description': u'admin-private3', u'key_data': None, u'kernel_id':
u'', u'power_state': 0, u'default_ephemeral_device': None, u'progress': 0,
u'project_id': u'a0baea45e9104b8d82a1b32fff4762b1', u'launched_at': None,
u'scheduled_at': None, u'node': None, u'ramdisk_id': u'', u'access_ip_v6':
None, u'access_ip_v4': None, u'deleted': False, u'key_name': None,
u'updated_at': None, u'host': None, u'ephemeral_key_uuid': None,
u'architecture': None, u'user_id': u'85ba3d1ce9c949e5bda18c4a14e7d7de',
u'system_metadata': {u'image_min_disk': u'1', u'instance_type_memory_mb':
u'512', u'instance_type_swap': u'0', u'instance_type_vcpu_weight': None,
u'instance_type_root_gb': u'1', u'instance_type_id': u'2',
u'instance_type_name': u'm1.tiny', u'instance_type_ephemeral_gb': u'0',
u'instance_type_rxtx_factor': u'1.0', u'instance_type_flavorid': u'1',
u'image_container_format': u'ami', u'instance_type_vcpus': u'1',
u'image_min_ram': u'0', u'image_disk_format': u'ami',
u'image_base_image_ref': u'041b934c-5d1e-4bf6-aad0-4872a2fe4b40'},
u'task_state': u'scheduling', u'shutdown_terminate': False, u'cell_name':
None, u'root_gb': 1, u'locked': False, u'name': u'instance-00000003',
u'created_at': u'2014-05-11T13:18:50.000000', u'locked_by': None,
u'launch_index': 0, u'metadata': {}, u'memory_mb': 512, u'vcpus': 1,
u'image_ref': u'041b934c-5d1e-4bf6-aad0-4872a2fe4b40', u'root_device_name':
None, u'auto_disk_config': False, u'os_type': None, u'config_drive': u''},
u'security_group': [u'default'], u'instance_uuids':
[u'3946ee7f-d4a0-4c75-8ecc-0c32ec739c14']} schedule_run_instance
/opt/stack/nova/nova/scheduler/filter_scheduler.py:82
2014-05-11 18:48:51.375 AUDIT nova.scheduler.host_manager
[req-e36b289c-510c-4f47-9500-73f9a91c5f0e admin admin] Host filter forcing
available hosts to t4240-ubuntu1310
2014-05-11 18:48:51.375 DEBUG nova.scheduler.filter_scheduler
[req-e36b289c-510c-4f47-9500-73f9a91c5f0e admin admin] Filtered
[(t4240-ubuntu1310, t4240-ubuntu1310) ram:4352 disk:97280 io_ops:2
instances:2] _schedule
/opt/stack/nova/nova/scheduler/filter_scheduler.py:331
2014-05-11 18:48:51.376 DEBUG nova.scheduler.filter_scheduler
[req-e36b289c-510c-4f47-9500-73f9a91c5f0e admin admin] Weighed [WeighedHost
[host: t4240-ubuntu1310, weight: 1.0]] _schedule
/opt/stack/nova/nova/scheduler/filter_scheduler.py:336
2014-05-11 18:48:51.376 INFO nova.scheduler.filter_scheduler
[req-e36b289c-510c-4f47-9500-73f9a91c5f0e admin admin] Choosing host
WeighedHost [host: t4240-ubuntu1310, weight: 1.0] for instance
3946ee7f-d4a0-4c75-8ecc-0c32ec739c14
2014-05-11 18:49:22.558 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._expire_reservations
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:49:22.565 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._run_periodic_tasks
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:49:22.565 20123 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-05-11 18:50:22.567 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._expire_reservations
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:50:22.574 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._run_periodic_tasks
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:50:22.574 20123 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-05-11 18:51:22.577 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._expire_reservations
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:51:22.584 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._run_periodic_tasks
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:51:22.585 20123 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-05-11 18:52:22.588 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._expire_reservations
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:52:22.595 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._run_periodic_tasks
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:52:22.595 20123 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-05-11 18:53:22.597 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._expire_reservations
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:53:22.604 20123 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task SchedulerManager._run_periodic_tasks
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-05-11 18:53:22.605 20123 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130


Please help regarding this.


Thanks
Abhishek Jain