[Openstack-operators] Help: Instances don't start

Edier Zapata edalzap at gmail.com
Mon Jul 11 18:19:20 UTC 2011


Here is the log from the 2nd compute node (cluster10) when I run the
instance; the 1st compute node is the main server:

cluster at cluster10:~$ tail -f /var/log/nova/nova-compute.log

2011-07-11 13:02:10,388 DEBUG nova.rpc [-] received
{u'_context_request_id': u'VA0-3-5VJSCL9TMLWRSX',
u'_context_read_deleted': False, u'args': {u'instance_id': 22,
u'injected_files': None, u'availability_zone': None},
u'_context_is_admin': True, u'_context_timestamp':
u'2011-07-11T18:02:09Z', u'_context_user': u'osadmin', u'method':
u'run_instance', u'_context_project': u'Univalle',
u'_context_remote_address': u'192.168.28.50'} from (pid=3757) _receive
/usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-07-11 13:02:10,389 DEBUG nova.rpc [-] unpacked context:
{'timestamp': u'2011-07-11T18:02:09Z', 'remote_address':
u'192.168.28.50', 'project': u'Univalle', 'is_admin': True, 'user':
u'osadmin', 'request_id': u'VA0-3-5VJSCL9TMLWRSX', 'read_deleted':
False} from (pid=3757) _unpack_context
/usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-07-11 13:02:10,545 AUDIT nova.compute.manager
[VA0-3-5VJSCL9TMLWRSX osadmin Univalle] instance 22: starting...
2011-07-11 13:02:10,735 DEBUG nova.rpc [-] Making asynchronous call on
network.cluster1 ... from (pid=3757) call
/usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-07-11 13:02:10,735 DEBUG nova.rpc [-] MSG_ID is
e5033ff75a554f58b6bf065bd6e0148f from (pid=3757) call
/usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-07-11 13:02:11,509 DEBUG nova.utils [-] Attempting to grab
semaphore "ensure_bridge" for method "ensure_bridge"... from
(pid=3757) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:11,510 DEBUG nova.utils [-] Attempting to grab file
lock "ensure_bridge" for method "ensure_bridge"... from (pid=3757)
inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-07-11 13:02:11,511 DEBUG nova.utils [-] Running cmd (subprocess):
ip link show dev br100 from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:11,723 DEBUG nova.virt.libvirt_conn [-] instance
instance-00000016: starting toXML method from (pid=3757) to_xml
/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:996
2011-07-11 13:02:11,896 DEBUG nova.virt.libvirt_conn [-] instance
instance-00000016: finished toXML method from (pid=3757) to_xml
/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:1041
2011-07-11 13:02:11,965 INFO nova [-] called setup_basic_filtering in nwfilter
2011-07-11 13:02:11,965 INFO nova [-] ensuring static filters
2011-07-11 13:02:12,031 INFO nova [-]
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3b77490>
2011-07-11 13:02:12,032 INFO nova [-]
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3b77510>
2011-07-11 13:02:12,037 DEBUG nova.utils [-] Attempting to grab
semaphore "iptables" for method "apply"... from (pid=3757) inner
/usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:12,038 DEBUG nova.utils [-] Attempting to grab file
lock "iptables" for method "apply"... from (pid=3757) inner
/usr/lib/pymodules/python2.6/nova/utils.py:599
2011-07-11 13:02:12,043 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t filter from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,068 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,095 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t nat from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,119 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,170 DEBUG nova.utils [-] Running cmd (subprocess):
mkdir -p /var/lib/nova/instances/instance-00000016/ from (pid=3757)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,187 INFO nova.virt.libvirt_conn [-] instance
instance-00000016: Creating image
2011-07-11 13:02:12,272 DEBUG nova.utils [-] Attempting to grab
semaphore "6332e1f2" for method "call_if_not_exists"... from
(pid=3757) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:12,272 DEBUG nova.utils [-] Running cmd (subprocess):
cp /var/lib/nova/instances/_base/6332e1f2
/var/lib/nova/instances/instance-00000016/kernel from (pid=3757)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,304 DEBUG nova.utils [-] Attempting to grab
semaphore "11412fa5" for method "call_if_not_exists"... from
(pid=3757) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:12,313 ERROR nova.compute.manager
[VA0-3-5VJSCL9TMLWRSX osadmin Univalle] Instance '22' failed to spawn.
Is virtualization enabled in the BIOS?
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 234, in
run_instance
(nova.compute.manager): TRACE:     self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
(nova.compute.manager): TRACE:     return f(*args, **kw)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 616, in
spawn
(nova.compute.manager): TRACE:     self._create_image(instance, xml,
network_info)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 849, in
_create_image
(nova.compute.manager): TRACE:     project=project)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 776, in
_cache_image
(nova.compute.manager): TRACE:     call_if_not_exists(base, fn, *args, **kwargs)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/utils.py", line 607, in inner
(nova.compute.manager): TRACE:     retval = f(*args, **kwargs)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 774, in
call_if_not_exists
(nova.compute.manager): TRACE:     fn(target=base, *args, **kwargs)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 787, in
_fetch_image
(nova.compute.manager): TRACE:     images.fetch(image_id, target, user, project)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/images.py", line 51, in fetch
(nova.compute.manager): TRACE:     metadata =
image_service.get(elevated, image_id, image_file)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/image/local.py", line 116, in get
(nova.compute.manager): TRACE:     raise exception.NotFound
(nova.compute.manager): TRACE: NotFound: None
(nova.compute.manager): TRACE:
2011-07-11 13:03:02,495 INFO nova.compute.manager [-] Found instance
'instance-00000016' in DB but no VM. State=8, so setting state to
shutoff.
2011-07-11 13:03:02,496 INFO nova.compute.manager [-] DB/VM state
mismatch. Changing state from '8' to '5'
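
As far as I can tell from the traceback, the spawn fails while fetching the
image for instance-00000016: images.fetch() asks the configured image service
and the get() in nova/image/local.py raises NotFound, so the image does not
seem to be available on cluster10. The "Is virtualization enabled in the
BIOS?" line looks like the generic hint attached to any spawn failure rather
than the real cause. Rough sketch only of the kind of lookup involved (this
is NOT the real nova code, and the images path below is a guess; check the
nova flags on the node):

# Sketch of a local-image lookup that would fail with NotFound when the
# image files are missing on the compute node. Names/paths are assumptions.
import os

class NotFound(Exception):
    """Stands in for the nova.exception.NotFound raised in the traceback."""

IMAGES_PATH = '/var/lib/nova/images'  # assumed local image store location

def get_image(image_id):
    path = os.path.join(IMAGES_PATH, str(image_id))
    if not os.path.exists(path):
        # the kind of check that would end in the "NotFound: None" above
        raise NotFound()
    return path

If that is what is happening, the image for ami-1f75034f would need to be
available to the image service on cluster10 (or served from a shared/central
image service) for the spawn to succeed.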


Thanks / Gracias
>
> On 07/11/2011 01:08 PM, Edier Zapata wrote:
>
> Hi,
>  I installed a 10-node nova-compute cloud this way:
>  Server: All nova services, including nova-compute, using the script
> in:
> https://github.com/elasticdog/OpenStack-NOVA-Installer-Script/raw/master/nova-install
>  Nodes: nova-compute, using the same script as in step 1
> When I launch an instance ( euca-run-instances $emi -k mykey -t m1.tiny )
> I get this:
> root at cluster1:/home/cluster/OpenStack-org# euca-run-instances $emi -k
> mykey -t m1.tiny
> RESERVATION     r-0hr83xsv      Univalle        default
> INSTANCE        i-00000015      ami-1f75034f
> scheduling      mykey (Univalle, None)  0               m1.tiny
> 2011-07-11T15:41:06Z    unknown zone
>
> and some time later:
>
> root at cluster1:/home/cluster/OpenStack-org# euca-describe-instances
> RESERVATION     r-0hr83xsv      Univalle        default
> INSTANCE        i-00000015      ami-1f75034f    192.168.29.2
> 192.168.29.2    shutdown        mykey (Univalle, cluster10)     0
>          m1.tiny 2011-07-11T15:41:06Z     nova
>
> The cluster10 log says:
> cluster at cluster10:~$ tail -5 /var/log/nova/nova-compute.log
> 2011-07-11 10:59:51,810 INFO nova.compute.manager [-] Found instance
> 'instance-00000015' in DB but no VM. State=5, so setting state to
> shutoff.
> 2011-07-11 11:00:51,890 INFO nova.compute.manager [-] Found instance
> 'instance-00000015' in DB but no VM. State=5, so setting state to
> shutoff.
> 2011-07-11 11:01:51,970 INFO nova.compute.manager [-] Found instance
> 'instance-00000015' in DB but no VM. State=5, so setting state to
> shutoff.
> 2011-07-11 11:02:52,050 INFO nova.compute.manager [-] Found instance
> 'instance-00000015' in DB but no VM. State=5, so setting state to
> shutoff.
> 2011-07-11 11:03:52,130 INFO nova.compute.manager [-] Found instance
> 'instance-00000015' in DB but no VM. State=5, so setting state to
> shutoff.
>
> Services running:
>
> root at cluster1:/home/cluster/OpenStack-org# nova-manage service list
> cluster1   nova-network enabled  :-) 2011-07-11 16:07:40
> cluster1   nova-compute enabled  :-) 2011-07-11 16:07:44
> cluster1   nova-scheduler enabled  :-) 2011-07-11 16:07:45
> cluster10  nova-compute enabled  :-) 2011-07-11 16:07:39
>
> Can anyone help me with this, please?
>
> Thanks
>
>
>
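
The repeated "Found instance ... in DB but no VM" lines look like a periodic
check in nova.compute.manager that compares the instances recorded in the
database with the VMs the hypervisor actually reports; since the spawn never
succeeded there is no libvirt domain, so the state keeps getting reset to
shutoff. Rough illustration of that idea only (not the real nova code; all
names below are made up):

# Sketch of a DB-vs-hypervisor reconciliation pass; invented names.
SHUTOFF = 5  # matches "Changing state from '8' to '5'" in the log above

def sync_instance_states(db_instances, running_domains):
    """Mark DB instances that have no matching VM as shut off."""
    running = set(running_domains)             # e.g. libvirt domain names
    for inst in db_instances:                  # e.g. rows from the nova DB
        if inst['name'] not in running and inst['power_state'] != SHUTOFF:
            print("Found instance %r in DB but no VM, setting state to "
                  "shutoff." % inst['name'])
            inst['power_state'] = SHUTOFF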



-- 
Edier Alberto Zapata Hernández
Ingeniero de Sistemas
Universidad de Valle


