[Openstack] [nova] Launch instance without os networking
Andreas Scheuring
Andreas.Scheuring at de.ibm.com
Thu Jul 31 09:45:52 UTC 2014
Hi all,
for test purposes I tried to launch a VM without any networking configured
in OpenStack, on a multinode environment (set up via devstack: controller
+ cpu node). That means neither neutron nor nova-network is installed on
the controller or on the cpu node.
Should I be able to launch an instance with such a configuration, or does
nova require a networking service to be present?
What I currently see in the cpu node's logs when launching an instance is
an endless loop with the content below. Is this an indication that at
least some networking is required? Or is something else wrong?
Controller is running
key, horizon, g-reg, g-api, n-api, n-cond, n-crt, n-sch, n-novnc, n-xvnc,
n-cauth, n-obj, c-api, c-sch, c-vol, h-eng, h-api, h-api-cfn, h-api-cw
CPU is running
n-cpu
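For reference, the devstack configuration behind the service lists above
looks roughly like this. This is an illustrative sketch in the standard
local.conf format, not my exact file; the service names are the ones
listed above, and note that no nova-network (n-net) or neutron (q-*)
services are enabled on either node:

```shell
# Sketch of the devstack local.conf for this setup (illustrative only).
# No networking service is enabled: n-net is explicitly disabled and no
# q-* (neutron) services appear anywhere in ENABLED_SERVICES.

# --- Controller node ---
[[local|localrc]]
disable_service n-net
ENABLED_SERVICES=key,horizon,g-reg,g-api,n-api,n-cond,n-crt,n-sch,n-novnc,n-xvnc,n-cauth,n-obj,c-api,c-sch,c-vol,h-eng,h-api,h-api-cfn,h-api-cw

# --- Compute (cpu) node ---
# [[local|localrc]]
# disable_service n-net
# ENABLED_SERVICES=n-cpu
```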
Screen output of the cpu node's n-cpu service:
2014-07-22 12:51:04.618 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._poll_volume_usage from (pid=5402)
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.620 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._instance_usage_audit from (pid=5402)
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.621 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager.update_available_resource from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.623 DEBUG nova.openstack.common.lockutils [-] Got
semaphore "compute_resources" from (pid=5402) lock
/opt/stack/nova/nova/openstack/common/lockutils.py:168
2014-07-22 12:51:04.624 DEBUG nova.openstack.common.lockutils [-] Got
semaphore / lock "update_available_resource" from (pid=5402) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:248
2014-07-22 12:51:04.625 AUDIT nova.compute.resource_tracker [-] Auditing
locally available compute resources
2014-07-22 12:51:04.625 DEBUG nova.virt.libvirt.driver [-] Updating host
stats from (pid=5402) update_status
/opt/stack/nova/nova/virt/libvirt/driver.py:5247
2014-07-22 12:51:04.672 DEBUG nova.compute.resource_tracker [-]
Hypervisor: free ram (MB): 1544 from (pid=5402)
_report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:409
2014-07-22 12:51:04.672 DEBUG nova.compute.resource_tracker [-]
Hypervisor: free disk (GB): 13 from (pid=5402)
_report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:410
2014-07-22 12:51:04.673 DEBUG nova.compute.resource_tracker [-]
Hypervisor: free VCPUs: 2 from (pid=5402) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:415
2014-07-22 12:51:04.673 DEBUG nova.compute.resource_tracker [-]
Hypervisor: assignable PCI devices: [] from (pid=5402)
_report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:422
2014-07-22 12:51:04.712 AUDIT nova.compute.resource_tracker [-] Free ram
(MB): 1377
2014-07-22 12:51:04.714 AUDIT nova.compute.resource_tracker [-] Free disk
(GB): 17
2014-07-22 12:51:04.715 AUDIT nova.compute.resource_tracker [-] Free
VCPUS: 1
2014-07-22 12:51:04.784 INFO nova.compute.resource_tracker [-]
Compute_service record updated for devstack-compute:devstack-compute
2014-07-22 12:51:04.785 DEBUG nova.openstack.common.lockutils [-]
Semaphore / lock released "update_available_resource" from (pid=5402)
inner /opt/stack/nova/nova/openstack/common/lockutils.py:252
2014-07-22 12:51:04.800 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._poll_rebooting_instances from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.802 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._reclaim_queued_deletes from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.803 DEBUG nova.compute.manager [-]
CONF.reclaim_instance_interval <= 0, skipping... from (pid=5402)
_reclaim_queued_deletes /opt/stack/nova/nova/compute/manager.py:5364
2014-07-22 12:51:04.804 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._poll_unconfirmed_resizes from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.805 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._poll_rescued_instances from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.805 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._check_instance_build_time from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.806 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._heal_instance_info_cache from
(pid=5402) run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178
2014-07-22 12:51:04.806 DEBUG nova.compute.manager [-] Starting heal
instance info cache from (pid=5402) _heal_instance_info_cache
/opt/stack/nova/nova/compute/manager.py:4789
2014-07-22 12:51:04.806 DEBUG nova.compute.manager [-] Rebuilding the list
of instances to heal from (pid=5402) _heal_instance_info_cache
/opt/stack/nova/nova/compute/manager.py:4793
2014-07-22 12:51:04.828 DEBUG nova.compute.manager [-] [instance:
ec79df2f-477c-4fbe-85ed-ef7eeed83d43] Skipping network cache update for
instance because it is Building. from (pid=5402) _heal_instance_info_cache
/opt/stack/nova/nova/compute/manager.py:4803
2014-07-22 12:51:04.829 DEBUG nova.compute.manager [-] Didn't find any
instances for network info cache update. from (pid=5402)
_heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:4855
2014-07-22 12:51:04.830 DEBUG nova.openstack.common.loopingcall [-]
Dynamic looping call sleeping for 56.43 seconds from (pid=5402) _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:132
Thanks,
Andreas