[Openstack] Error creating Nova instances stuck @ spawning state - more details
Deepak Shetty
dpkshetty at gmail.com
Thu Jun 5 14:20:15 UTC 2014
Hi,
In the mail below, I was hitting an issue because of a wrong flavour size and
too little disk space.
I have since increased the disk space and memory in my devstack VM. I tried
with the default cirros image that devstack ships, and it still gets stuck in
the spawning state.
Note that virsh list shows the instance as running, so it is only OpenStack
that thinks it is still spawning.
I also added the ICMP and SSH ports to the default sec-group, associated a
key pair, and used that same key and sec-group while starting the VM. Below
are some logs showing that the nova claims are fine, and I don't see any
errors in the n-cpu or n-sch logs. nova list shows the instance as spawning
forever, and after a long time I see a warning message as below ...
*2014-06-05 14:10:58.048 WARNING nova.compute.manager [-] Found 1 in the
database and 0 on the hypervisor.*
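For reference, that warning comes from the ComputeManager._sync_power_states
periodic task, which compares the instances the database attributes to this
host against the domains the hypervisor reports, and skips any instance that
still has a pending task. A rough sketch of that logic (a simplified
illustration based on my reading of the logs, not the actual Nova code):

```python
# Illustrative sketch only -- NOT the real Nova implementation.
# Compares the database's view of the host with the hypervisor's,
# warns on a mismatch, and skips instances with a pending task.

def sync_power_states(db_instances, hypervisor_domains, log):
    num_db = len(db_instances)
    num_vm = len(hypervisor_domains)
    if num_db != num_vm:
        log.append("Found %d in the database and %d on the hypervisor."
                   % (num_db, num_vm))
    for inst in db_instances:
        if inst.get("task_state"):  # e.g. 'spawning'
            log.append("[instance: %s] During sync_power_state the instance "
                       "has a pending task (%s). Skip."
                       % (inst["uuid"], inst["task_state"]))
            continue
        # ...otherwise the stored power state would be corrected here.
    return log

# Mirroring the logs: 1 instance in the DB, 0 reported by the
# hypervisor side of the check, task_state still 'spawning'.
msgs = sync_power_states(
    [{"uuid": "6838c318-5653-4d2f-a5c8-f62c2cd0eb36",
      "task_state": "spawning"}],
    [], [])
print(msgs[0])  # -> Found 1 in the database and 0 on the hypervisor.
```

With one instance in the database, zero visible to the comparison, and
task_state still 'spawning', this reproduces both of the highlighted log
lines, which is why the instance is neither corrected nor errored out.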
Debug logs from n-cpu
===================
2014-06-05 13:56:50.587 AUDIT nova.compute.resource_tracker [-] Free ram
(MB): 3441
2014-06-05 13:56:50.587 AUDIT nova.compute.resource_tracker [-] Free disk
(GB): 49
2014-06-05 13:56:50.587 AUDIT nova.compute.resource_tracker [-] Free VCPUS:
2
2014-06-05 13:56:50.587 AUDIT nova.compute.resource_tracker [-] PCI stats:
[]
2014-06-05 14:03:59.689 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Attempting claim: memory 512 MB, disk
1 GB, VCPUs 1
2014-06-05 14:03:59.689 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Total memory: 3953 MB, used: 512.00 MB
2014-06-05 14:03:59.689 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] memory limit not specified,
defaulting to unlimited
2014-06-05 14:03:59.689 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Total disk: 49 GB, used: 0.00 GB
2014-06-05 14:03:59.689 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] disk limit not specified, defaulting
to unlimited
2014-06-05 14:03:59.690 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Total CPUs: 2 VCPUs, used: 0.00 VCPUs
2014-06-05 14:03:59.690 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] CPUs limit not specified, defaulting
to unlimited
*2014-06-05 14:03:59.690 AUDIT nova.compute.claims
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Claim successful *
2014-06-05 14:04:01.053 DEBUG nova.compute.manager [-] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Instance network_info:
|[VIF({'ovs_interfaceid': None, 'network': Network({'bridge': u'br100',
'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type':
u'fixed', 'floating_ips': [], 'address': u'10.0.0.2'})], 'version': 4,
'meta': {u'dhcp_server': u'10.0.0.1'}, 'dns': [IP({'meta': {}, 'version':
4, 'type': u'dns', 'address': u'8.8.4.4'})], 'routes': [], 'cidr': u'
10.0.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway',
'address': u'10.0.0.1'})}), Subnet({'ips': [], 'version': None, 'meta':
{u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 'gateway':
IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})],
'meta': {u'tenant_id': None, u'should_create_bridge': True,
u'bridge_interface': u'eth0'}, 'id':
u'f1f132df-9b29-4db9-b7b6-433ffc50e14c', 'label': u'private'}), 'devname':
None, 'qbh_params': None, 'meta': {}, 'details': {}, 'address':
u'fa:16:3e:6a:ef:1f', 'active': False, 'type': u'bridge', 'id':
u'e470cfc3-fa7e-4d94-b394-250db29e5fe1', 'qbg_params': None})]| from
(pid=2194) _allocate_network_async
/opt/stack/nova/nova/compute/manager.py:1535
2014-06-05 14:07:52.384 DEBUG nova.virt.libvirt.driver [-] Updating host
stats from (pid=2194) update_status
/opt/stack/nova/nova/virt/libvirt/driver.py:5314
2014-06-05 14:07:52.423 DEBUG nova.compute.resource_tracker [-] Hypervisor:
free ram (MB): 2194 from (pid=2194) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:409
2014-06-05 14:07:52.424 DEBUG nova.compute.resource_tracker [-] Hypervisor:
free disk (GB): 43 from (pid=2194) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:410
2014-06-05 14:07:52.424 DEBUG nova.compute.resource_tracker [-] Hypervisor:
free VCPUs: 2 from (pid=2194) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:415
2014-06-05 14:07:52.424 DEBUG nova.compute.resource_tracker [-] Hypervisor:
assignable PCI devices: [] from (pid=2194) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:422
*2014-06-05 14:05:52.014 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2929
2014-06-05 14:05:52.014 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 48
2014-06-05 14:05:52.014 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
2014-06-05 14:05:52.014 AUDIT nova.compute.resource_tracker [-] PCI stats: []*
2014-06-05 14:04:52.275 DEBUG nova.compute.manager [-] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] Skipping network cache update for
instance because it is Building. from (pid=2194) _heal_instance_info_cache
/opt/stack/nova/nova/compute/manager.py:4876
2014-06-05 14:10:53.096 DEBUG nova.openstack.common.loopingcall [-] Dynamic
looping call sleeping for 4.91 seconds from (pid=2194) _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130
2014-06-05 14:10:58.011 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._sync_power_states from (pid=2194)
run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:171
*2014-06-05 14:10:58.048 WARNING nova.compute.manager [-] Found 1 in the
database and 0 on the hypervisor.*
*2014-06-05 14:10:58.049 INFO nova.compute.manager [-] [instance:
6838c318-5653-4d2f-a5c8-f62c2cd0eb36] During sync_power_state the instance
has a pending task (spawning). Skip. *
Debug logs from n-sched
====================
2014-06-05 14:03:59.488 DEBUG nova.filters
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] Starting with 1
host(s) from (pid=2244) get_filtered_objects
/opt/stack/nova/nova/filters.py:70
2014-06-05 14:03:59.488 DEBUG nova.filters
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] *Filter
AllHostsFilter returned 1 host(s) from (pid=2244) *get_filtered_objects
/opt/stack/nova/nova/filters.py:88
2014-06-05 14:03:59.488 DEBUG nova.scheduler.filter_scheduler
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] Filtered
[(devstack-vm.localdomain, devstack-vm.localdomain) ram:3441 disk:43008
io_ops:0 instances:1] from (pid=2244) _schedule
/opt/stack/nova/nova/scheduler/filter_scheduler.py:332
*2014-06-05 14:03:59.489 DEBUG nova.scheduler.filter_scheduler
[req-6be46071-a2be-4443-a6bc-26e10b723fe0 admin admin] Weighed [WeighedHost
[host: devstack-vm.localdomain *
CLI dumps
========
nova boot --flavor 1 --image cirros-0.3.1-x86_64-uec --key_name mykey
--security_group default dpkvm2
[stack at devstack-vm ~]$ nova list
+--------------------------------------+--------+--------+------------+-------------+------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks         |
+--------------------------------------+--------+--------+------------+-------------+------------------+
| 6838c318-5653-4d2f-a5c8-f62c2cd0eb36 | dpkvm2 | BUILD  | spawning   | NOSTATE     | private=10.0.0.2 |
+--------------------------------------+--------+--------+------------+-------------+------------------+
[stack at devstack-vm ~]$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| 24009f72-5bcc-46ec-a3dc-68dd1deaa746 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824  | active |
| 22a72dca-c187-48bf-8e04-1f979f9bf00d | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792   | active |
| 8e1e0dad-58b9-46f3-82b7-87437a81af7e | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968   | active |
| 2103231d-5ca8-401a-a810-d29b89ac68b8 | ubuntu_trusty_cloudimg          | qcow2       | bare             | 253755904 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
thanx,
deepak
On Thu, Jun 5, 2014 at 2:55 PM, Deepak Shetty <dpkshetty at gmail.com> wrote:
> Hi,
> I am getting the below error when spawning Nova instances. I freshly
> uploaded a new Ubuntu cloud image into Glance, and using it I hit the
> issue below..
>
>
> 2014-06-05 09:18:19.759 AUDIT nova.compute.manager
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Starting instance...
> 2014-06-05 09:18:19.827 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
> "compute_resources" from (pid=22474) lock
> /opt/stack/nova/nova/openstack/common/lockutils.py:168
> 2014-06-05 09:18:19.827 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
> "instance_claim" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:248
> 2014-06-05 09:18:19.828 WARNING nova.compute.resource_tracker
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Host field should not be set on the
> instance until resources have been
> claimed.
> 2014-06-05 09:18:19.828 WARNING nova.compute.resource_tracker
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Node field should not be set on the
> instance until resources have been
> claimed.
> 2014-06-05 09:18:19.828 DEBUG nova.compute.resource_tracker
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Memory overhead for
> 2048 MB instance; 0 MB from (pid=22474) instance_claim
> /opt/stack/nova/nova/compute/resource_tracker.p
> y:119
> 2014-06-05 09:18:19.830 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Attempting claim: memory 2048 MB,
> disk 20 GB, VCPUs 1
> 2014-06-05 09:18:19.830 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Total memory: 3953 MB, used: 512.00 MB
> 2014-06-05 09:18:19.831 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] memory limit not specified,
> defaulting to unlimited
> 2014-06-05 09:18:19.831 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Total disk: 9 GB, used: 0.00 GB
> 2014-06-05 09:18:19.831 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] disk limit not specified, defaulting
> to unlimited
> 2014-06-05 09:18:19.831 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Total CPUs: 2 VCPUs, used: 0.00 VCPUs
> 2014-06-05 09:18:19.832 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] CPUs limit not specified, defaulting
> to unlimited
> 2014-06-05 09:18:19.832 AUDIT nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Claim successful
> 2014-06-05 09:18:19.898 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Semaphore / lock
> released "instance_claim" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:252
> 2014-06-05 09:18:20.016 DEBUG nova.network.base_api
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Updating cache with
> info: [VIF({'ovs_interfaceid': None, 'network': Network({'bridge':
> u'br100', 'subnets': [Subnet({'ips': [Fixed
> IP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [],
> 'address': u'10.0.0.2'})], 'version': 4, 'meta': {u'dhcp_server':
> u'10.0.0.1'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns',
> 'address': u'8.8.4.4'})], 'routes':
> [], 'cidr': u'10.0.0.0/24', 'gateway': IP({'meta': {}, 'version': 4,
> 'type': u'gateway', 'address': u'10.0.0.1'})}), Subnet({'ips': [],
> 'version': None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [],
> 'cidr': None, 'gateway': I
> P({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})],
> 'meta': {u'tenant_id': None, u'should_create_bridge': True,
> u'bridge_interface': u'eth0'}, 'id':
> u'f1f132df-9b29-4db9-b7b6-433ffc50e14c', 'label': u'private'}), 'd
> evname': None, 'qbh_params': None, 'meta': {}, 'details': {}, 'address':
> u'fa:16:3e:28:c7:10', 'active': False, 'type': u'bridge', 'id':
> u'1d9e192e-cb5e-46b4-b008-82965290b709', 'qbg_params': None})] from
> (pid=22474) update_instance_cach
> e_with_nw_info /opt/stack/nova/nova/network/base_api.py:37
> 2014-06-05 09:18:20.075 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
> "compute_resources" from (pid=22474) lock
> /opt/stack/nova/nova/openstack/common/lockutils.py:168
> 2014-06-05 09:18:20.075 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
> "update_usage" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:248
> 2014-06-05 09:18:20.097 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Semaphore / lock
> released "update_usage" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:252
> 2014-06-05 09:18:20.097 DEBUG nova.block_device
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] block_device_list []
> from (pid=22474) volume_in_mapping /opt/stack/nova/nova/block_device.py:530
> 2014-06-05 09:18:20.216 DEBUG nova.block_device
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] block_device_list []
> from (pid=22474) volume_in_mapping /opt/stack/nova/nova/block_device.py:530
> 2014-06-05 09:18:20.216 INFO nova.virt.libvirt.driver
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Creating image
> 2014-06-05 09:18:20.217 DEBUG nova.openstack.common.processutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Running cmd
> (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf chown 1001
> /opt/stack/data/nova/instances/aa69fd
> 1e-184d-48f6-934c-855e8b02ea4f/console.log from (pid=22474) execute
> /opt/stack/nova/nova/openstack/common/processutils.py:155
> 2014-06-05 09:18:20.334 DEBUG nova.openstack.common.processutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Result was 0 from
> (pid=22474) execute
> /opt/stack/nova/nova/openstack/common/processutils.py:188
> 2014-06-05 09:18:20.335 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
> "80efadd4dea317b5d3be15f44bb6e29f9fb900fe" from (pid=22474) lock
> /opt/stack/nova/nova/openstack/common/loc
> kutils.py:168
> 2014-06-05 09:18:20.335 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Attempting to grab
> file lock "80efadd4dea317b5d3be15f44bb6e29f9fb900fe" from (pid=22474) lock
> /opt/stack/nova/nova/opens
> tack/common/lockutils.py:178
> 2014-06-05 09:18:20.336 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got file lock
> "80efadd4dea317b5d3be15f44bb6e29f9fb900fe" at
> /opt/stack/data/nova/instances/locks/nova-80efadd4dea317b5d3
> be15f44bb6e29f9fb900fe from (pid=22474) lock
> /opt/stack/nova/nova/openstack/common/lockutils.py:206
> 2014-06-05 09:18:20.336 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
> "fetch_func_sync" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:248
> 2014-06-05 09:18:20.336 DEBUG stevedore.extension
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] found extension
> EntryPoint.parse('file = nova.image.download.file') from (pid=22474)
> _load_plugins /opt/stack/stevedore/stevedore/ex
> tension.py:156
> 2014-06-05 09:18:20.337 DEBUG stevedore.extension
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] found extension
> EntryPoint.parse('file = nova.image.download.file') from (pid=22474)
> _load_plugins /opt/stack/stevedore/stevedore/ex
> tension.py:156
> 2014-06-05 09:18:20.337 DEBUG glanceclient.common.http
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] curl -i -X GET -H
> 'X-Service-Catalog: [{"endpoints_links": [], "endpoints": [{"adminURL": "
> http://192.168.122.252:8776/v1/70668
> 0464f7d4b1b98ae815ab656e3bb", "region": "RegionOne", "publicURL": "
> http://192.168.122.252:8776/v1/706680464f7d4b1b98ae815ab656e3bb", "id":
> "6f712556b1d049e2b9e955b30a2b4182", "internalURL": "
> http://192.168.122.252:8776/v1/706680464f7d4b1
> b98ae815ab656e3bb"}], "type": "volume", "name": "cinder"}]' -H
> 'X-Identity-Status: Confirmed' -H 'X-Roles: _member_,admin' -H 'User-Agent:
> python-glanceclient' -H 'X-Tenant-Id: 706680464f7d4b1b98ae815ab656e3bb' -H
> 'X-User-Id: b0b642c9208
> 04b27a7fb8b1c6a720d0f' -H 'X-Auth-Token: ***' -H 'Content-Type:
> application/octet-stream'
> http://192.168.122.252:9292/v1/images/2103231d-5ca8-401a-a810-d29b89ac68b8
> from (pid=22474) log_curl_request /opt/stack/python-glanceclient/glancec
> lient/common/http.py:153
> 2014-06-05 09:18:20.360 DEBUG glanceclient.common.http
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin]
> HTTP/1.1 200 OK
> x-image-meta-id: 2103231d-5ca8-401a-a810-d29b89ac68b8
> date: Thu, 05 Jun 2014 09:18:20 GMT
> x-image-meta-deleted: False
> x-image-meta-checksum: cf936325efd283c9514f81178d8a107a
> transfer-encoding: chunked
> x-image-meta-container_format: bare
> x-image-meta-protected: False
> x-image-meta-min_disk: 0
> x-image-meta-created_at: 2014-06-05T07:31:14
> x-image-meta-size: 270008320
> x-image-meta-status: active
> etag: cf936325efd283c9514f81178d8a107a
> x-image-meta-is_public: False
> x-image-meta-min_ram: 0
> x-image-meta-owner: 706680464f7d4b1b98ae815ab656e3bb
> x-image-meta-updated_at: 2014-06-05T07:31:16
> content-type: application/octet-stream
> x-openstack-request-id: req-71d59b4f-8bbe-48ac-ae47-dbb8888ae681
> x-image-meta-disk_format: qcow2
> x-image-meta-name: ubuntu_trusty_cloudimg
> from (pid=22474) log_http_response
> /opt/stack/python-glanceclient/glanceclient/common/http.py:167
> 2014-06-05 09:18:27.852 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Released file lock
> "80efadd4dea317b5d3be15f44bb6e29f9fb900fe" at
> /opt/stack/data/nova/instances/locks/nova-80efadd4dea317b5d3be15f44bb6e29f9fb900fe
> from (pid=22474) lock /opt/stack/nova/nova/openstack/common/lockutils.py:210
> 2014-06-05 09:18:27.852 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Semaphore / lock
> released "fetch_func_sync" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:252
> 2014-06-05 09:18:27.852 ERROR nova.compute.manager
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Instance failed to spawn
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Traceback (most recent call last):
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/compute/manager.py", line 2059, in _build_resources
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] yield resources
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/compute/manager.py", line 1962, in
> _build_and_run_instance
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f]
> block_device_info=block_device_info)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2281, in spawn
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] admin_pass=admin_password)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2655, in _create_image
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] project_id=instance['project_id'])
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 192, in cache
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] *args, **kwargs)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 384, in
> create_image
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] prepare_template(target=base,
> max_size=size, *args, **kwargs)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/openstack/common/lockutils.py", line 249, in inner
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] return f(*args, **kwargs)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 182, in
> fetch_func_sync
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] fetch_func(target=target, *args,
> **kwargs)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/libvirt/utils.py", line 660, in fetch_image
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] max_size=max_size)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/images.py", line 78, in fetch_to_raw
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] max_size=max_size)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/virt/images.py", line 72, in fetch
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] image_service.download(context,
> image_id, dst_path=path)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/image/glance.py", line 323, in download
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] for chunk in image_chunks:
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/python-glanceclient/glanceclient/common/http.py", line 553, in
> __iter__
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] chunk = self.next()
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/python-glanceclient/glanceclient/common/http.py", line 571, in
> next
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] chunk = self._resp.read(CHUNKSIZE)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/usr/lib64/python2.7/httplib.py", line 543, in read
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] return self._read_chunked(amt)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/usr/lib64/python2.7/httplib.py", line 597, in _read_chunked
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] raise
> IncompleteRead(''.join(value))
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] IncompleteRead: IncompleteRead(0
> bytes read)
> 2014-06-05 09:18:27.852 TRACE nova.compute.manager [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f]
> 2014-06-05 09:18:27.853 ERROR root
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Original exception
> being dropped: ['Traceback (most recent call last):\n', ' File
> "/opt/stack/nova/nova/compute/manager.py", line 2059, in
> _build_resources\n yield resources\n', ' File
> "/opt/stack/nova/nova/compute/manager.py", line 1962, in
> _build_and_run_instance\n block_device_info=block_device_info)\n', '
> File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2281, in
> spawn\n admin_pass=admin_password)\n', ' File
> "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2655, in
> _create_image\n project_id=instance[\'project_id\'])\n', ' File
> "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 192, in
> cache\n *args, **kwargs)\n', ' File
> "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 384, in
> create_image\n prepare_template(target=base, max_size=size, *args,
> **kwargs)\n', ' File "/opt/stack/nova/nova/openstack/common/lockutils.py",
> line 249, in inner\n return f(*args, **kwargs)\n', ' File
> "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 182, in
> fetch_func_sync\n fetch_func(target=target, *args, **kwargs)\n', ' File
> "/opt/stack/nova/nova/virt/libvirt/utils.py", line 660, in fetch_image\n
> max_size=max_size)\n', ' File "/opt/stack/nova/nova/virt/images.py", line
> 78, in fetch_to_raw\n max_size=max_size)\n', ' File
> "/opt/stack/nova/nova/virt/images.py", line 72, in fetch\n
> image_service.download(context, image_id, dst_path=path)\n', ' File
> "/opt/stack/nova/nova/image/glance.py", line 323, in download\n for
> chunk in image_chunks:\n', ' File
> "/opt/stack/python-glanceclient/glanceclient/common/http.py", line 553, in
> __iter__\n chunk = self.next()\n', ' File
> "/opt/stack/python-glanceclient/glanceclient/common/http.py", line 571, in
> next\n chunk = self._resp.read(CHUNKSIZE)\n', ' File
> "/usr/lib64/python2.7/httplib.py", line 543, in read\n return
> self._read_chunked(amt)\n', ' File "/usr/lib64/python2.7/httplib.py", line
> 597, in _read_chunked\n raise IncompleteRead(\'\'.join(value))\n',
> 'IncompleteRead: IncompleteRead(0 bytes read)\n']
> 2014-06-05 09:18:27.853 DEBUG nova.compute.claims
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Aborting claim: [Claim: 2048 MB
> memory, 20 GB disk, 1 VCPUS] from (pid=22474) abort
> /opt/stack/nova/nova/compute/claims.py:113
> 2014-06-05 09:18:27.854 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
> "compute_resources" from (pid=22474) lock
> /opt/stack/nova/nova/openstack/common/lockutils.py:168
> 2014-06-05 09:18:27.854 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
> "abort_instance_claim" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:248
> 2014-06-05 09:18:28.080 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Semaphore / lock
> released "abort_instance_claim" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:252
> 2014-06-05 09:18:28.082 DEBUG nova.compute.utils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] 'NetworkInfo' object has no attribute
> 'wait' from (pid=22474) notify_about_instance_usage
> /opt/stack/nova/nova/compute/utils.py:291
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Traceback (most recent call last):
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/compute/manager.py", line 1966, in
> _build_and_run_instance
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] network_info=network_info)
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] self.gen.throw(type, value,
> traceback)
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] File
> "/opt/stack/nova/nova/compute/manager.py", line 2065, in _build_resources
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] network_info.wait(do_raise=False)
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] AttributeError: 'NetworkInfo' object
> has no attribute 'wait'
> 2014-06-05 09:18:28.082 TRACE nova.compute.utils [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f]
> 2014-06-05 09:18:28.087 DEBUG nova.compute.manager
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
> aa69fd1e-184d-48f6-934c-855e8b02ea4f] Build of instance
> aa69fd1e-184d-48f6-934c-855e8b02ea4f was re-scheduled: 'NetworkInfo' object
> has no attribute 'wait' from (pid=22474) do_build_and_run_instance
> /opt/stack/nova/nova/compute/manager.py:1895
> 2014-06-05 09:18:28.168 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Semaphore / lock
> released "do_build_and_run_instance" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:252
> 2014-06-05 09:18:28.260 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
> "aa69fd1e-184d-48f6-934c-855e8b02ea4f" from (pid=22474) lock
> /opt/stack/nova/nova/openstack/common/lockutils.py:168
> 2014-06-05 09:18:28.260 DEBUG nova.openstack.common.lockutils
> [req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
> "do_build_and_run_instance" from (pid=22474) inner
> /opt/stack/nova/nova/openstack/common/lockutils.py:248
>
>