[openstack-dev] [nova] boot images in power state PAUSED for stable/juno
Paul Michali (pcm)
pcm at cisco.com
Fri Jan 2 12:53:48 UTC 2015
No other tools. Running a stock Ubuntu 14.04 server: installed devstack, created local.conf, stacked, and tried to create a VM. I’ve since seen this on another VM running Kilo code, so it is not specifically a Juno issue.
I don’t see what the difference is between a working and non-working setup. :(
On all instances, virsh -v shows 1.2.2.
Baffled.
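For completeness, this is roughly how I’m double-checking the domain state underneath Nova (a minimal sketch, not Nova code; it assumes virsh is on PATH and takes the domain name from OS-EXT-SRV-ATTR:instance_name; the `run` argument exists only so the function can be exercised without libvirt):

```python
import subprocess

def domain_state(domain, run=subprocess.check_output):
    """Return the libvirt state ('running', 'paused', ...) for a domain.

    Equivalent to running `virsh domstate <domain>`. Pass a fake `run`
    callable to exercise this without libvirt installed.
    """
    out = run(["virsh", "domstate", domain])
    if isinstance(out, bytes):
        out = out.decode()
    return out.strip()
```

A paused guest can be resumed manually with `virsh resume <domain>`, which works around the symptom but doesn’t explain why the guest was paused.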
PCM (Paul Michali)
MAIL …..…. pcm at cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
On Jan 2, 2015, at 4:48 AM, Kevin Benton <blak111 at gmail.com> wrote:
> Ah, doesn't seem to be a Neutron issue then since the
> 'network-vif-plugged' event is showing up and it's attempting to
> resume.
>
> The red flag looks to be the "Instance is paused unexpectedly.
> Ignore." message. If you grep the nova code base for it, there's a note
> linking to bug 1097806.[1] The VM is paused when Nova didn't expect it
> to be. Do you have any other tools running that might be affecting
> kvm?
>
> 1. https://bugs.launchpad.net/nova/+bug/1097806
>
> On Thu, Jan 1, 2015 at 8:09 AM, Paul Michali (pcm) <pcm at cisco.com> wrote:
>> Hi Kevin,
>>
>> No exceptions/tracebacks/errors in Neutron at all. In the Nova logs, it
>> seems to create the instance, pause, and then resume, but it looks like
>> maybe it is not resuming?
>>
>> 2015-01-01 14:44:30.716 3516 DEBUG nova.openstack.common.processutils [-]
>> Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
>> ovs-vsctl --timeout=120 -- --if-exists del-port qvoded0d35f-20 -- add-port
>> br-int qvoded0d35f-20 -- set Interface qvoded0d35f-20
>> external-ids:iface-id=ded0d35f-204f-4ca8-a85b-85decb53d9fe
>> external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:81:ab:12
>> external-ids:vm-uuid=c32ac737-1788-4420-b200-2a107d5ad335 execute
>> /opt/stack/nova/nova/openstack/common/processutils.py:161
>> 2015-01-01 14:44:30.786 3516 DEBUG nova.openstack.common.processutils [-]
>> Result was 0 execute
>> /opt/stack/nova/nova/openstack/common/processutils.py:195
>> 2015-01-01 14:44:31.542 3516 DEBUG nova.virt.driver [-] Emitting event
>> <LifecycleEvent: 1420123471.54, c32ac737-1788-4420-b200-2a107d5ad335 =>
>> Started> emit_event /opt/stack/nova/nova/virt/driver.py:1298
>> 2015-01-01 14:44:31.543 3516 INFO nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] VM Started (Lifecycle Event)
>> 2015-01-01 14:44:31.584 DEBUG nova.compute.manager
>> [req-77c13ae6-ccf9-48ee-881a-8bb7f04ee4bc None None] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
>> after lifecycle event "Started"; current vm_state: building, current
>> task_state: spawning, current DB power_state: 0, VM
>> power_state: 1 handle_lifecycle_event
>> /opt/stack/nova/nova/compute/manager.py:1105
>> 2015-01-01 14:44:31.629 INFO nova.compute.manager
>> [req-77c13ae6-ccf9-48ee-881a-8bb7f04ee4bc None None] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] During sync_power_state the instance
>> has a pending task (spawning). Skip.
>> 2015-01-01 14:44:31.630 3516 DEBUG nova.virt.driver [-] Emitting event
>> <LifecycleEvent: 1420123471.54, c32ac737-1788-4420-b200-2a107d5ad335 =>
>> Paused> emit_event /opt/stack/nova/nova/virt/driver.py:1298
>> 2015-01-01 14:44:31.630 3516 INFO nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] VM Paused (Lifecycle Event)
>> 2015-01-01 14:44:31.670 3516 DEBUG nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
>> after lifecycle event "Paused"; current vm_state: building, current
>> task_state: spawning, current DB power_state: 0, VM power_state: 3
>> handle_lifecycle_event
>> /opt/stack/nova/nova/compute/manager.py:1105
>> 2015-01-01 14:44:31.714 3516 INFO nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] During sync_power_state the instance
>> has a pending task (spawning). Skip.
>> 2015-01-01 14:44:38.293 DEBUG nova.compute.manager
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Received event
>> network-vif-plugged-ded0d35f-204f-4ca8-a85b-85decb53d9fe
>> external_instance_event /opt/stack/nova/nova/compute/manager.py:6180
>> 2015-01-01 14:44:38.293 DEBUG nova.openstack.common.lockutils
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Created new
>> semaphore "c32ac737-1788-4420-b200-2a107d5ad335-events" internal_lock
>> /opt/stack/nova/nova/openstack/common/lockutils.py:206
>> 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Acquired semaphore
>> "c32ac737-1788-4420-b200-2a107d5ad335-events" lock
>> /opt/stack/nova/nova/openstack/common/lockutils.py:229
>> 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Got semaphore / lock
>> "_pop_event" inner /opt/stack/nova/nova/openstack/common/lockutils.py:271
>> 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Releasing semaphore
>> "c32ac737-1788-4420-b200-2a107d5ad335-events" lock
>> /opt/stack/nova/nova/openstack/common/lockutils.py:238
>> 2015-01-01 14:44:38.295 DEBUG nova.openstack.common.lockutils
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Semaphore / lock
>> released "_pop_event" inner
>> /opt/stack/nova/nova/openstack/common/lockutils.py:275
>> 2015-01-01 14:44:38.295 DEBUG nova.compute.manager
>> [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Processing event
>> network-vif-plugged-ded0d35f-204f-4ca8-a85b-85decb53d9fe
>> _process_instance_event /opt/stack/nova/nova/compute/manager.py:6166
>> 2015-01-01 14:44:38.309 3516 DEBUG nova.virt.libvirt.driver [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Instance is running spawn
>> /opt/stack/nova/nova/virt/libvirt/driver.py:2622
>> 2015-01-01 14:44:38.310 3516 DEBUG nova.virt.driver [-] Emitting event
>> <LifecycleEvent: 1420123478.31, c32ac737-1788-4420-b200-2a107d5ad335 =>
>> Resumed> emit_event /opt/stack/nova/nova/virt/driver.py:1298
>> 2015-01-01 14:44:38.311 3516 INFO nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] VM Resumed (Lifecycle Event)
>> 2015-01-01 14:44:38.315 3516 INFO nova.virt.libvirt.driver [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Instance spawned successfully.
>> 2015-01-01 14:44:38.315 3516 DEBUG nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Checking state _get_power_state
>> /opt/stack/nova/nova/compute/manager.py:1156
>> 2015-01-01 14:44:38.348 3516 DEBUG nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
>> after lifecycle event "Resumed"; current vm_state: building, current
>> task_state: spawning, current DB power_state: 0, VM power_state: 1
>> handle_lifecycle_event
>> /opt/stack/nova/nova/compute/manager.py:1105
>> 2015-01-01 14:44:38.391 3516 DEBUG nova.openstack.common.lockutils [-]
>> Releasing semaphore "c32ac737-1788-4420-b200-2a107d5ad335" lock
>> /opt/stack/nova/nova/openstack/common/lockutils.py:238
>> 2015-01-01 14:44:38.394 3516 DEBUG nova.virt.driver [-] Emitting event
>> <LifecycleEvent: 1420123478.31, c32ac737-1788-4420-b200-2a107d5ad335 =>
>> Resumed> emit_event /opt/stack/nova/nova/virt/driver.py:1298
>> 2015-01-01 14:44:38.394 3516 DEBUG nova.openstack.common.lockutils [-]
>> Semaphore / lock released "_locked_do_build_and_run_instance" inner
>> /opt/stack/nova/nova/openstack/common/lockutils.py:275
>> 2015-01-01 14:44:38.396 3516 INFO nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] VM Resumed (Lifecycle Event)
>> 2015-01-01 14:44:38.441 3516 DEBUG nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
>> after lifecycle event "Resumed"; current vm_state: active, current
>> task_state: None, current DB power_state: 1, VM power_state: 1
>> handle_lifecycle_event
>> /opt/stack/nova/nova/compute/manager.py:1105
>> 2015-01-01 14:44:38.489 3516 DEBUG nova.virt.driver [-] Emitting event
>> <LifecycleEvent: 1420123478.33, c32ac737-1788-4420-b200-2a107d5ad335 =>
>> Paused> emit_event /opt/stack/nova/nova/virt/driver.py:1298
>> 2015-01-01 14:44:38.490 3516 INFO nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] VM Paused (Lifecycle Event)
>> 2015-01-01 14:44:38.535 3516 DEBUG nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
>> after lifecycle event "Paused"; current vm_state: active, current
>> task_state: None, current DB power_state: 1, VM power_state: 3
>> handle_lifecycle_event
>> /opt/stack/nova/nova/compute/manager.py:1105
>> 2015-01-01 14:44:38.643 3516 WARNING nova.compute.manager [-] [instance:
>> c32ac737-1788-4420-b200-2a107d5ad335] Instance is paused unexpectedly.
>> Ignore.
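To unpack what those log lines show, here is a simplified sketch of the handle_lifecycle_event flow from my reading of nova/compute/manager.py (illustrative only, not Nova's actual code):

```python
# Nova power-state codes (mirroring nova.compute.power_state).
NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

# Libvirt lifecycle event -> the power state the VM now reports.
EVENT_TO_POWER_STATE = {
    "Started": RUNNING,
    "Resumed": RUNNING,
    "Paused": PAUSED,
    "Stopped": SHUTDOWN,
}

def handle_lifecycle_event(instance, event):
    """Simplified decision logic matching the log lines above."""
    vm_power_state = EVENT_TO_POWER_STATE[event]
    # While a task is still pending (e.g. spawning), the sync is skipped --
    # the "During sync_power_state the instance has a pending task
    # (spawning). Skip." lines.
    if instance["task_state"] is not None:
        return "skip"
    # With no pending task, a PAUSED report for an active instance is the
    # case that logs "Instance is paused unexpectedly. Ignore."
    if instance["vm_state"] == "active" and vm_power_state == PAUSED:
        return "warn"
    return "sync"
```

So the final Paused event arrives after the spawn completes (task_state is None, vm_state active), which is exactly the path that emits the warning.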
>>
>> I think I saw this in one of my Kilo based VMs too.
>>
>>
>> Regards,
>>
>> PCM (Paul Michali)
>>
>> MAIL …..…. pcm at cisco.com
>> IRC ……..… pc_m (irc.freenode.com)
>> TW ………... @pmichali
>> GPG Key … 4525ECC253E31A83
>> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>>
>>
>>
>>
>> On Dec 31, 2014, at 7:57 PM, Kevin Benton <blak111 at gmail.com> wrote:
>>
>> Any exceptions on the Neutron side? It might not be notifying nova
>> that the network is ready.
>>
>> On Wed, Dec 31, 2014 at 8:41 AM, Paul Michali (pcm) <pcm at cisco.com> wrote:
>>
>> Not sure if I’m going crazy or what. I’m using DevStack and, after stacking,
>> I tried booting Cirros 0.3.2, Cirros 0.3.3, and Ubuntu cloud 14.04 images.
>> Each time, the image ends up in the PAUSED power state:
>>
>> ubuntu at juno:/opt/stack/neutron$ nova show peter
>> +--------------------------------------+----------------------------------------------------------------+
>> | Property                             | Value                                                          |
>> +--------------------------------------+----------------------------------------------------------------+
>> | OS-DCF:diskConfig                    | MANUAL                                                         |
>> | OS-EXT-AZ:availability_zone          | nova                                                           |
>> | OS-EXT-SRV-ATTR:host                 | juno                                                           |
>> | OS-EXT-SRV-ATTR:hypervisor_hostname  | juno                                                           |
>> | OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                              |
>> | OS-EXT-STS:power_state               | 3                                                              |
>> | OS-EXT-STS:task_state                | -                                                              |
>> | OS-EXT-STS:vm_state                  | active                                                         |
>> | OS-SRV-USG:launched_at               | 2014-12-31T15:15:33.000000                                     |
>> | OS-SRV-USG:terminated_at             | -                                                              |
>> | accessIPv4                           |                                                                |
>> | accessIPv6                           |                                                                |
>> | config_drive                         |                                                                |
>> | created                              | 2014-12-31T15:15:24Z                                           |
>> | flavor                               | m1.tiny (1)                                                    |
>> | hostId                               | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f       |
>> | id                                   | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6                           |
>> | image                                | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
>> | key_name                             | -                                                              |
>> | metadata                             | {}                                                             |
>> | name                                 | peter                                                          |
>> | os-extended-volumes:volumes_attached | []                                                             |
>> | private network                      | 10.0.0.4                                                       |
>> | progress                             | 0                                                              |
>> | security_groups                      | default                                                        |
>> | status                               | ACTIVE                                                         |
>> | tenant_id                            | 7afb5bc1d88d462c8d57178437d3c277                               |
>> | updated                              | 2014-12-31T15:15:34Z                                           |
>> | user_id                              | 4ff18bdbeb4d436ea4ff1bcd29e269a9                               |
>> +--------------------------------------+----------------------------------------------------------------+
>> ubuntu at juno:/opt/stack/neutron$ nova list
>> +--------------------------------------+-------+--------+------------+-------------+------------------+
>> | ID                                   | Name  | Status | Task State | Power State | Networks         |
>> +--------------------------------------+-------+--------+------------+-------------+------------------+
>> | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | -          | Paused      | private=10.0.0.4 |
>> +--------------------------------------+-------+--------+------------+-------------+------------------+
>>
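For reference when reading these two outputs, the numeric OS-EXT-STS:power_state maps to the human-readable Power State column roughly like this (values mirror nova.compute.power_state from memory, so treat the mapping as illustrative):

```python
# Nova power-state codes as reported in OS-EXT-STS:power_state.
# This is why `nova show` prints power_state 3 while `nova list`
# renders the same field as "Paused".
POWER_STATE_NAMES = {
    0: "NOSTATE",
    1: "Running",
    3: "Paused",
    4: "Shutdown",
    6: "Crashed",
    7: "Suspended",
}

def power_state_name(code):
    """Translate a numeric power-state code into its display name."""
    return POWER_STATE_NAMES.get(code, "Unknown")
```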
>>
>> I don’t see this with Kilo latest images. Any idea what I may be doing
>> wrong, or if there is an issue (I didn’t see anything on Google search)?
>>
>> IMAGE_ID=`nova image-list | grep 'cloudimg-amd64 ' | cut -d' ' -f 2`
>> PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`
>>
>> nova boot peter --flavor 3 --image $IMAGE_ID --user-data
>> ~/devstack/user_data.txt --nic net-id=$PRIVATE_NET
>> nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic
>> net-id=$PRIVATE_NET paul
>>
>> Thanks.
>>
>>
>> PCM (Paul Michali)
>>
>> MAIL …..…. pcm at cisco.com
>> IRC ……..… pc_m (irc.freenode.com)
>> TW ………... @pmichali
>> GPG Key … 4525ECC253E31A83
>> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>>
>>
>>
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Kevin Benton
>>
>>
>>
>>
>>
>
>
>
> --
> Kevin Benton
>