[Openstack] Why do instances reboot when I restart the libvirt-bin and nova-compute services?

DeadSun mwjpiero at gmail.com
Mon Oct 24 01:24:51 UTC 2011


Sometimes I find that nova-compute stops updating its record in the
controller's database, so I restart nova-compute. But it hangs at this point:
"Connecting to libvirt: qemu:///system from (pid=22152) _get_connection
/data/nova/nova/virt/libvirt/connection.py:205"
Then I restart the libvirt-bin service as well.
After that, the log shows the instances being rebooted. Why does this happen?
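
Looking at the "Rebooting instance ... after nova-compute restart" messages
below, I guess this comes from init_host() in nova/compute/manager.py. Here is
a minimal standalone sketch of the check I think it makes, assuming the
Diablo-era start_guests_on_host_boot and resume_guests_state_on_host_boot
flags (flag names are my assumption from the log, not copied from the source):

# Simplified standalone sketch of the decision init_host appears to make
# for each instance found on the host after a nova-compute restart.
# Flag names are assumptions -- verify against your /data/nova tree.

RUNNING = 1  # power_state.RUNNING in nova


def should_reboot_on_init(db_state, driver_state,
                          start_guests_on_host_boot,
                          resume_guests_state_on_host_boot):
    """Return True if init_host would reboot the instance after a restart."""
    # The DB expects the instance to be running, but the hypervisor disagrees:
    expect_running = db_state == RUNNING and driver_state != db_state
    return ((expect_running and resume_guests_state_on_host_boot)
            or start_guests_on_host_boot)


# In my log both states are 1 (RUNNING), so expect_running is False and the
# reboot could only be triggered by start_guests_on_host_boot being true:
print(should_reboot_on_init(db_state=1, driver_state=1,
                            start_guests_on_host_boot=True,
                            resume_guests_state_on_host_boot=False))  # True

Since both the hypervisor state and the DB state are 1 (RUNNING) in my log,
it looks like only start_guests_on_host_boot could explain the reboots.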

Here is the debug output:
_______________________________________________________
2011-10-24 09:14:13,646 DEBUG nova.compute.manager [-] Checking state of
instance-00000005 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:13,646 DEBUG nova.virt.libvirt_conn [-] Connecting to
libvirt: qemu:///system from (pid=22152) _get_connection
/data/nova/nova/virt/libvirt/connection.py:205
2011-10-24 09:14:13,654 DEBUG nova.compute.manager [-] Current state of
instance-00000005 is 1, state in DB is 1. from (pid=22152) init_host
/data/nova/nova/compute/manager.py:171
2011-10-24 09:14:13,654 INFO nova.compute.manager [-] Rebooting instance
instance-00000005 after nova-compute restart.
2011-10-24 09:14:13,654 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
decorating: |<function reboot_instance at 0x22971b8>|
2011-10-24 09:14:13,655 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
arguments: |<nova.compute.manager.ComputeManager object at 0x1ba4f90>|
|<nova.context.RequestContext object at 0x2c5db50>| |5|
2011-10-24 09:14:13,655 DEBUG nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] instance 5: getting locked
state from (pid=22152) get_lock /data/nova/nova/compute/manager.py:1276
2011-10-24 09:14:13,731 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
locked: |False|
2011-10-24 09:14:13,732 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock: admin:
|True|
2011-10-24 09:14:13,732 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
executing: |<function reboot_instance at 0x22971b8>|
2011-10-24 09:14:13,732 AUDIT nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] Rebooting instance 5
2011-10-24 09:14:13,834 DEBUG nova.compute.manager [-] Checking state of
instance-00000005 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:14,853 DEBUG nova.rpc [-] Making asynchronous call on
network ... from (pid=22152) multicall /data/nova/nova/rpc/impl_kombu.py:721
2011-10-24 09:14:14,854 DEBUG nova.rpc [-] MSG_ID is
2f82739970714a76910e8c229bbeb7fe from (pid=22152) multicall
/data/nova/nova/rpc/impl_kombu.py:724
2011-10-24 09:14:14,854 DEBUG nova.rpc [-] Pool creating new connection from
(pid=22152) create /data/nova/nova/rpc/impl_kombu.py:504
2011-10-24 09:14:14,869 INFO nova.rpc [-] Connected to AMQP server on
10.200.200.2:5672
2011-10-24 09:14:17,316 INFO nova.virt.libvirt.firewall [-] Attempted to
unfilter instance 5 which is not filtered
2011-10-24 09:14:17,316 INFO nova [-] called setup_basic_filtering in
nwfilter
2011-10-24 09:14:17,317 INFO nova [-] ensuring static filters
2011-10-24 09:14:20,058 INFO nova.virt.libvirt_conn [-] Instance
instance-00000005 destroyed successfully.
2011-10-24 09:14:26,825 DEBUG nova.virt.libvirt.firewall [-] iptables
firewall: Setup Basic Filtering from (pid=22152) setup_basic_filtering
/data/nova/nova/virt/libvirt/firewall.py:524
2011-10-24 09:14:26,825 DEBUG nova.utils [-] Attempting to grab semaphore
"iptables" for method "_do_refresh_provider_fw_rules"... from (pid=22152)
inner /data/nova/nova/utils.py:717
2011-10-24 09:14:26,825 DEBUG nova.utils [-] Attempting to grab file lock
"iptables" for method "_do_refresh_provider_fw_rules"... from (pid=22152)
inner /data/nova/nova/utils.py:722
2011-10-24 09:14:26,830 DEBUG nova.utils [-] Attempting to grab semaphore
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:717
2011-10-24 09:14:26,831 DEBUG nova.utils [-] Attempting to grab file lock
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:722
2011-10-24 09:14:26,832 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t filter from (pid=22152) execute
/data/nova/nova/utils.py:168
2011-10-24 09:14:26,848 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:26,862 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t nat from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:26,878 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:26,952 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3c6f910> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:26,953 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:26,953 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0
']
2011-10-24 09:14:26,954 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3c6f9d0> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:26,954 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:26,955 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0',
'-j ACCEPT -p tcp --dport 3389 -s 0.0.0.0/0']
2011-10-24 09:14:26,955 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3c6fa90> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:26,956 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:26,956 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0',
'-j ACCEPT -p tcp --dport 3389 -s 0.0.0.0/0', '-j ACCEPT -p tcp --dport 22
-s 0.0.0.0/0']
2011-10-24 09:14:26,957 DEBUG nova.utils [-] Attempting to grab semaphore
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:717
2011-10-24 09:14:26,958 DEBUG nova.utils [-] Attempting to grab file lock
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:722
2011-10-24 09:14:26,959 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t filter from (pid=22152) execute
/data/nova/nova/utils.py:168
2011-10-24 09:14:26,976 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:26,994 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t nat from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:27,009 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:29,362 DEBUG nova.compute.manager [-] Checking state of
instance-00000005 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:30,474 DEBUG nova.compute.manager [-] Checking state of
instance-00000007 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:31,270 DEBUG nova.compute.manager [-] Current state of
instance-00000007 is 1, state in DB is 1. from (pid=22152) init_host
/data/nova/nova/compute/manager.py:171
2011-10-24 09:14:31,270 INFO nova.compute.manager [-] Rebooting instance
instance-00000007 after nova-compute restart.
2011-10-24 09:14:31,271 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
decorating: |<function reboot_instance at 0x22971b8>|
2011-10-24 09:14:31,271 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
arguments: |<nova.compute.manager.ComputeManager object at 0x1ba4f90>|
|<nova.context.RequestContext object at 0x2c5db50>| |7|
2011-10-24 09:14:31,272 DEBUG nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] instance 7: getting locked
state from (pid=22152) get_lock /data/nova/nova/compute/manager.py:1276
2011-10-24 09:14:31,364 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
locked: |False|
2011-10-24 09:14:31,364 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock: admin:
|True|
2011-10-24 09:14:31,365 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
executing: |<function reboot_instance at 0x22971b8>|
2011-10-24 09:14:31,365 AUDIT nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] Rebooting instance 7
2011-10-24 09:14:31,465 DEBUG nova.compute.manager [-] Checking state of
instance-00000007 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:32,515 DEBUG nova.rpc [-] Making asynchronous call on
network ... from (pid=22152) multicall /data/nova/nova/rpc/impl_kombu.py:721
2011-10-24 09:14:32,516 DEBUG nova.rpc [-] MSG_ID is
b52a834b8112495d8480887fe5cbb64e from (pid=22152) multicall
/data/nova/nova/rpc/impl_kombu.py:724
2011-10-24 09:14:33,367 INFO nova.virt.libvirt_conn [-] Instance
instance-00000005 rebooted successfully.
2011-10-24 09:14:35,958 INFO nova.virt.libvirt.firewall [-] Attempted to
unfilter instance 7 which is not filtered
2011-10-24 09:14:35,959 INFO nova [-] called setup_basic_filtering in
nwfilter
2011-10-24 09:14:35,959 INFO nova [-] ensuring static filters
2011-10-24 09:14:38,005 INFO nova.virt.libvirt_conn [-] Instance
instance-00000007 destroyed successfully.
2011-10-24 09:14:38,041 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x40637d0> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:38,042 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:38,042 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0
']
2011-10-24 09:14:38,042 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x4063890> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:38,043 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:38,043 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0',
'-j ACCEPT -p tcp --dport 3389 -s 0.0.0.0/0']
2011-10-24 09:14:38,043 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x4063950> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:38,043 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:38,044 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0',
'-j ACCEPT -p tcp --dport 3389 -s 0.0.0.0/0', '-j ACCEPT -p tcp --dport 22
-s 0.0.0.0/0']
2011-10-24 09:14:38,044 DEBUG nova.utils [-] Attempting to grab semaphore
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:717
2011-10-24 09:14:38,044 DEBUG nova.utils [-] Attempting to grab file lock
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:722
2011-10-24 09:14:38,045 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t filter from (pid=22152) execute
/data/nova/nova/utils.py:168
2011-10-24 09:14:38,060 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:38,076 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t nat from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:38,090 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:40,925 DEBUG nova.compute.manager [-] Checking state of
instance-00000007 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:42,586 DEBUG nova.compute.manager [-] Checking state of
instance-00000009 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:43,742 DEBUG nova.compute.manager [-] Current state of
instance-00000009 is 1, state in DB is 1. from (pid=22152) init_host
/data/nova/nova/compute/manager.py:171
2011-10-24 09:14:43,742 INFO nova.compute.manager [-] Rebooting instance
instance-00000009 after nova-compute restart.
2011-10-24 09:14:43,743 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
decorating: |<function reboot_instance at 0x22971b8>|
2011-10-24 09:14:43,743 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
arguments: |<nova.compute.manager.ComputeManager object at 0x1ba4f90>|
|<nova.context.RequestContext object at 0x2c5db50>| |9|
2011-10-24 09:14:43,743 DEBUG nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] instance 9: getting locked
state from (pid=22152) get_lock /data/nova/nova/compute/manager.py:1276
2011-10-24 09:14:43,815 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
locked: |False|
2011-10-24 09:14:43,816 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock: admin:
|True|
2011-10-24 09:14:43,816 INFO nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] check_instance_lock:
executing: |<function reboot_instance at 0x22971b8>|
2011-10-24 09:14:43,816 AUDIT nova.compute.manager
[90e8a83b-4b9b-4bd0-b2b5-362236b72631 None None] Rebooting instance 9
2011-10-24 09:14:43,892 DEBUG nova.compute.manager [-] Checking state of
instance-00000009 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:14:44,856 DEBUG nova.rpc [-] Making asynchronous call on
network ... from (pid=22152) multicall /data/nova/nova/rpc/impl_kombu.py:721
2011-10-24 09:14:44,856 DEBUG nova.rpc [-] MSG_ID is
4b097d218fbf4b4aa85b9c37c892df52 from (pid=22152) multicall
/data/nova/nova/rpc/impl_kombu.py:724
2011-10-24 09:14:46,186 INFO nova.virt.libvirt_conn [-] Instance
instance-00000007 rebooted successfully.
2011-10-24 09:14:49,855 INFO nova.virt.libvirt.firewall [-] Attempted to
unfilter instance 9 which is not filtered
2011-10-24 09:14:49,856 INFO nova [-] called setup_basic_filtering in
nwfilter
2011-10-24 09:14:49,856 INFO nova [-] ensuring static filters
2011-10-24 09:14:51,725 INFO nova.virt.libvirt_conn [-] Instance
instance-00000009 destroyed successfully.
2011-10-24 09:14:51,785 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3e01a10> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:51,786 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:51,786 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0
']
2011-10-24 09:14:51,786 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3e01ad0> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:51,787 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:51,787 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0',
'-j ACCEPT -p tcp --dport 3389 -s 0.0.0.0/0']
2011-10-24 09:14:51,787 DEBUG nova.virt.libvirt.firewall [-] Adding security
group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3e01b90> from (pid=22152) instance_rules
/data/nova/nova/virt/libvirt/firewall.py:650
2011-10-24 09:14:51,787 INFO nova.virt.libvirt.firewall [-] Using cidr '
0.0.0.0/0'
2011-10-24 09:14:51,788 INFO nova.virt.libvirt.firewall [-] Using fw_rules:
['-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED
-j ACCEPT', '-j $provider', u'-s 10.200.200.4 -p udp --sport 67 --dport 68
-j ACCEPT', u'-s 10.200.200.0/24 -j ACCEPT', '-j ACCEPT -p icmp -s 0.0.0.0/0',
'-j ACCEPT -p tcp --dport 3389 -s 0.0.0.0/0', '-j ACCEPT -p tcp --dport 22
-s 0.0.0.0/0']
2011-10-24 09:14:51,788 DEBUG nova.utils [-] Attempting to grab semaphore
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:717
2011-10-24 09:14:51,788 DEBUG nova.utils [-] Attempting to grab file lock
"iptables" for method "apply"... from (pid=22152) inner
/data/nova/nova/utils.py:722
2011-10-24 09:14:51,789 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t filter from (pid=22152) execute
/data/nova/nova/utils.py:168
2011-10-24 09:14:51,804 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:51,820 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-save -t nat from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:51,835 DEBUG nova.utils [-] Running cmd (subprocess): sudo
iptables-restore from (pid=22152) execute /data/nova/nova/utils.py:168
2011-10-24 09:14:54,960 DEBUG nova.compute.manager [-] Checking state of
instance-00000009 from (pid=22152) _get_power_state
/data/nova/nova/compute/manager.py:190
2011-10-24 09:15:04,275 INFO nova.virt.libvirt_conn [-] Compute_service
record updated for node2
2011-10-24 09:15:05,050 INFO nova.virt.libvirt_conn [-] Instance
instance-00000009 rebooted successfully.
2011-10-24 09:15:05,054 INFO nova.rpc [-] Connected to AMQP server on
10.200.200.2:5672
---------------------------------------------------------------------------------------------------------------------------------------------------------
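
If the sketch above is right, the reboots are triggered by the flag rather
than by restarting libvirt-bin itself. In that case, entries like these in
the nova flag file should stop them (flag names are an assumption based on
the log messages; please verify against your tree):

# Diablo-style flag-file entries -- assumed names, verify before use
--start_guests_on_host_boot=false
--resume_guests_state_on_host_boot=false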

-- 
Without detachment, there is no way to clarify one's aspirations; without serenity, there is no way to reach far.