[Openstack] Nova Boot works on 50% of hosts but not on the other 50%. Horizon is happy on 100%

Geraint Jones geraint at koding.com
Fri Aug 23 23:33:48 UTC 2013


Hi

We have deployed 2 controller nodes and 8 compute nodes.

The controllers run everything in HA behind HAProxy, except MySQL, which runs
standalone on controller0.
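
As a sanity check, I can compare the Glance settings nova-compute uses on a
working node and a failing one. This is only a sketch: it assumes the stock
Ubuntu config path /etc/nova/nova.conf and that the hosts resolve by their
compute0/compute1 names.

# compare what nova-compute is pointed at for Glance on a good vs a bad host
ssh compute0 "grep -i glance /etc/nova/nova.conf"
ssh compute1 "grep -i glance /etc/nova/nova.conf"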

When issuing nova boot, the following commands work fine:

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute0

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute2

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute4

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute6

However, the following fail:

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute1

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute3

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute5

nova boot --flavor 3 --user_data /srv/install_salt \
  --image 02e7425f-a71b-4e5f-b480-eafeebbbeda3 --key_name root_salt-master \
  --nic net-id=d7742ca3-e7d8-4a41-ba53-a33c145f9243 stage-webserver-12 \
  --availability-zone nova:compute7

With the failure, I get the following output for the instance (the fault field
contains the stack trace):

+-------------------------------------+----------------------------------------------------------------------------------------------------------
| Property                            | Value
+-------------------------------------+----------------------------------------------------------------------------------------------------------
| status                              | ERROR
| updated                             | 2013-08-23T21:19:52Z
| OS-EXT-STS:task_state               | None
| OS-EXT-SRV-ATTR:host                | None
| key_name                            | root_salt-master
| image                               | Ubuntu 13.04 amd64 (02e7425f-a71b-4e5f-b480-eafeebbbeda3)
| hostId                              |
| OS-EXT-STS:vm_state                 | error
| OS-EXT-SRV-ATTR:instance_name       | instance-000001c3
| OS-EXT-SRV-ATTR:hypervisor_hostname | None
| flavor                              | m1.medium (3)
| id                                  | 496a9ad5-c511-4641-a086-ad8fd2177952
| user_id                             | b7818872f0e84537a978e7df0c453924
| name                                | stage-webserver-12
| created                             | 2013-08-23T21:19:51Z
| tenant_id                           | 59a2d92051fe417f90539622d61951d0
| OS-DCF:diskConfig                   | MANUAL
| metadata                            | {}
| accessIPv4                          |
| accessIPv6                          |
| fault                               | {u'message': u'HTTPInternalServerError', u'code': 500, u'details': u'HTTPInternalServerError (HTTP 500)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 224, in decorated_function
|                                     |     return function(self, context, *args, **kwargs)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1224, in run_instance
|                                     |     do_run_instance()
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 242, in inner
|                                     |     retval = f(*args, **kwargs)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1223, in do_run_instance
|                                     |     admin_password, is_first_time, node, instance)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 879, in _run_instance
|                                     |     self._set_instance_error_state(context, instance[\'uuid\'])
|                                     |   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
|                                     |     self.gen.next()
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 800, in _run_instance
|                                     |     image_meta = self._check_image_size(context, instance)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1006, in _check_image_size
|                                     |     image_meta = _get_image_meta(context, instance[\'image_ref\'])
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 268, in _get_image_meta
|                                     |     return image_service.show(context, image_id)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 240, in show
|                                     |     _reraise_translated_image_exception(image_id)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 238, in show
|                                     |     image = self._client.call(context, 1, \'get\', image_id)
|                                     |   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 183, in call
|                                     |     return getattr(client.images, method)(*args, **kwargs)
|                                     |   File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 104, in get
|                                     |     % urllib.quote(image_id))
|                                     |   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 260, in raw_request
|                                     |     return self._http_request(url, method, **kwargs)
|                                     |   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 221, in _http_request
|                                     |     raise exc.from_response(resp, body_str)
|                                     | ', u'created': u'2013-08-23T21:19:52Z'}
| OS-EXT-STS:power_state              | 0
| OS-EXT-AZ:availability_zone         | nova
| config_drive                        |
+-------------------------------------+----------------------------------------------------------------------------------------------------------
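
The traceback dies where nova-compute asks Glance for the image metadata
(image_service.show, ending in the glanceclient GET), so the same lookup can be
reproduced by hand from one of the failing hosts. A rough sketch, assuming the
usual OS_* variables are exported there and the stock Ubuntu log locations on
the controllers:

# on a failing compute node, repeat the call nova-compute makes to Glance
glance image-show 02e7425f-a71b-4e5f-b480-eafeebbbeda3

# on both controllers, watch glance-api while the boot is retried
tail -f /var/log/glance/api.log /var/log/glance/registry.log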

The odd thing is that if I use Horizon to boot 14 instances, they end up spread
across compute0-7 and they all work perfectly.

I am using the same credentials in Horizon and with the nova CLI.
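
To rule out an endpoint mismatch between the CLI environment and whatever
Horizon resolves, I can also compare what the CLI is pointed at with what the
service catalog advertises for the image service; a rough sketch using the
2013-era clients:

# what the CLI environment points at
env | grep OS_

# what the Keystone catalog advertises for Glance (port 9292)
keystone endpoint-list | grep 9292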

Thanks

