[Openstack] Instance in error state after flavor change

Nicolas Odermatt odermattn at gmail.com
Wed May 2 15:02:35 UTC 2012


Hey guys,

 

I'm playing around with the OpenStack API and I'm trying to change the
flavor of an instance. Thanks to the documentation I found the information
needed to write the API call, which went pretty quickly and easily.
Unfortunately the call has a side effect: after it executes, the instance
goes into the error state.
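
For what it's worth, I see the ERROR status when I fetch the server resource
afterwards, with a call along these lines (token, tenant id and server id are
of course placeholders for my real values):

curl -s -H "X-Auth-Token:<token>" \
     http://192.168.7.211:8774/v1.1/<tenant_id>/servers/<server_id> | python -m json.tool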

 

I had a look at nova-compute.log and this is what I found:

root@nova-controller:/etc/nova# tail -n20 /var/log/nova/nova-compute.log 

2012-05-02 14:47:18,887 DEBUG nova.rpc [-] unpacked context: {'user_id':
u'nodermatt', 'roles': [u'Admin'], 'timestamp':
u'2012-05-02T14:47:18.558838', 'auth_token':
u'be66c633-9a8d-47eb-a095-9b653375b138', 'msg_id': None, 'remote_address':
u'192.168.7.106', 'strategy': u'keystone', 'is_admin': True, 'request_id':
u'744129f2-17d9-4756-a276-548ad6dfe975', 'project_id': u'2', 'read_deleted':
False} from (pid=1496) _unpack_context
/var/lib/nova/nova/rpc/impl_kombu.py:646

2012-05-02 14:47:18,888 INFO nova.compute.manager
[744129f2-17d9-4756-a276-548ad6dfe975 nodermatt 2] check_instance_lock:
decorating: |<function prep_resize at 0x253ecf8>|

2012-05-02 14:47:18,889 INFO nova.compute.manager
[744129f2-17d9-4756-a276-548ad6dfe975 nodermatt 2] check_instance_lock:
arguments: |<nova.compute.manager.ComputeManager object at 0x1d85e10>|
|<nova.rpc.impl_kombu.RpcContext object at 0x3cc1c90>|
|ff8f4521-fbc2-45e3-91ac-de395a7a1331|

2012-05-02 14:47:18,889 DEBUG nova.compute.manager
[744129f2-17d9-4756-a276-548ad6dfe975 nodermatt 2] instance
ff8f4521-fbc2-45e3-91ac-de395a7a1331: getting locked state from (pid=1496)
get_lock /var/lib/nova/nova/compute/manager.py:1199

2012-05-02 14:47:18,946 INFO nova.compute.manager
[744129f2-17d9-4756-a276-548ad6dfe975 nodermatt 2] check_instance_lock:
locked: |False|

2012-05-02 14:47:18,946 INFO nova.compute.manager
[744129f2-17d9-4756-a276-548ad6dfe975 nodermatt 2] check_instance_lock:
admin: |True|

2012-05-02 14:47:18,946 INFO nova.compute.manager
[744129f2-17d9-4756-a276-548ad6dfe975 nodermatt 2] check_instance_lock:
executing: |<function prep_resize at 0x253ecf8>|

2012-05-02 14:47:19,124 ERROR nova.rpc [744129f2-17d9-4756-a276-548ad6dfe975
nodermatt 2] Exception during message handling

(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File "/var/lib/nova/nova/rpc/impl_kombu.py", line 620, in _process_data
(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File "/var/lib/nova/nova/exception.py", line 100, in wrapped
(nova.rpc): TRACE:     return f(*args, **kw)
(nova.rpc): TRACE:   File "/var/lib/nova/nova/compute/manager.py", line 118, in decorated_function
(nova.rpc): TRACE:     function(self, context, instance_id, *args, **kwargs)
(nova.rpc): TRACE:   File "/var/lib/nova/nova/compute/manager.py", line 965, in prep_resize
(nova.rpc): TRACE:     raise exception.Error(msg)
(nova.rpc): TRACE: Error: Migration error: destination same as source!
(nova.rpc): TRACE: 

2012-05-02 14:48:00,294 INFO nova.compute.manager
[28207266-3917-4c91-b0d8-4afa36f7c018 None None] Updating host status

 

A thread on the OpenStack forum and a message on the Launchpad mailing list
also raised my curiosity:

Forum thread: http://forums.openstack.org/viewtopic.php?f=10&t=693

Launchpad: https://lists.launchpad.net/openstack/msg05540.html

 

For the record, here is the call I am making:

curl -v -H "X-Auth-Token:4963437d-0a6a-4302-a4ff-f3be191fbc1d" \
     -H "Content-Type: application/json" \
     -d '{"resize": {"flavorRef": "2"}}' \
     http://192.168.7.211:8774/v1.1/2/servers/3/action | python -m json.tool
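
If I read the Compute API docs correctly, a resize that does go through is
supposed to be confirmed (or reverted) afterwards with another action on the
same endpoint, roughly like this (token and ids are placeholders again):

curl -v -H "X-Auth-Token:<token>" \
     -H "Content-Type: application/json" \
     -d '{"confirmResize": null}' \
     http://192.168.7.211:8774/v1.1/<tenant_id>/servers/<server_id>/action

In my case the instance never gets that far; it drops into the error state
right after the resize request itself.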

 

The machine I am working on is a single-node StackOps deployment.
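
Given the "destination same as source!" message above, my guess is that Nova
refuses to resize an instance onto the compute host it is already running on,
and on a single-node setup that is the only possible destination. If that is
right, then something along these lines in nova.conf might be the workaround,
but I have not verified it yet, so please correct me if the flag name or
behaviour is wrong:

# allow a resize to land on the same compute host (single-node setups)
allow_resize_to_same_host=true

(or --allow_resize_to_same_host=true, depending on which config style the
nova.conf uses), followed by a restart of nova-compute and nova-scheduler.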

 

Does someone have an idea what might be the cause of the error?

 

Have a good day,

Nicolas
