[openstack-dev] Issue with MultiNode openstack installation with devstack

Vikash Kumar vikash.kumar at oneconvergence.com
Wed Oct 16 14:45:02 UTC 2013


Hi,

   I am trying to install OpenStack on multiple nodes with the help of
devstack (http://devstack.org/guides/multinode-lab.html).

   I ran into some issues.

    My setup details: one controller and one compute node (both VMs),
                      OS: Ubuntu 13.04,
                      Memory: 2 GB each.
   *a. VMs are going into the paused state.*

       I tried to launch VMs from Horizon, and all of them go into the
*Paused* state. The VMs are scheduled on the *compute node*.

       When the controller node comes up, *nova-manage service list* also
lists the controller node as a compute host, because by default the
nova-compute service is started there too.
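If running nova-compute on the controller is not wanted, devstack can be told to leave it out; a minimal sketch of the relevant controller localrc line (assuming the standard devstack service toggles):

```shell
# Controller localrc fragment (sketch): keep nova-compute off the
# controller so only the real compute node registers as a compute host.
disable_service n-cpu
```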

        After the compute node installation, *nova-manage service list* shows
the compute node as a compute node only and not the

        There is one nova error:

        ERROR nova.openstack.common.periodic_task [-] Error during
        ComputeManager.update_available_resource: Compute host oc-vm could not be found.
        Traceback (most recent call last):
          File "/opt/stack/nova/nova/openstack/common/rpc/common.py", line 420, in catch_client_exception
            return func(*args, **kwargs)
          File "/opt/stack/nova/nova/conductor/manager.py", line 419, in service_get_all_by
            result = self.db.service_get_by_compute_host(context, host)
          File "/opt/stack/nova/nova/db/api.py", line 140, in service_get_by_compute_host
            return IMPL.service_get_by_compute_host(context, host)
          File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 107, in wrapper
            return f(*args, **kwargs)
          File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 441, in service_get_by_compute_host
            raise exception.ComputeHostNotFound(host=host)
        ComputeHostNotFound: Compute host oc-vm could not be found.
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/openstack/common/periodic_task.py", line 180, in run_periodic_tasks
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task     task(self, context)
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/compute/manager.py", line 4872, in update_available_resource
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context)
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/compute/manager.py", line 4883, in _get_compute_nodes_in_db
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task     context, self.host)
        2013-10-16 06:25:27.358 7143 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/condu
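A ComputeHostNotFound like the one above can happen when the name the compute service registers under does not match what the conductor looks up; one rough way to compare the two on the compute node (the nova.conf path and the *host* option are assumptions, not taken from the log):

```shell
# Sketch: compare the OS hostname with any explicit host override in
# nova.conf; both must agree with the "oc-vm" name in the error above.
hostname
grep -i '^host *=' /etc/nova/nova.conf || echo "no explicit host= override"
```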

   *b. g-api was flagging an issue.*

        ERROR glance.store.sheepdog [-] Error in store configuration:
        Unexpected error while running command.
        Command: collie
        Exit code: 127
        Stdout: ''
        Stderr: '/bin/sh: 1: collie: not found\n'

        WARNING glance.store.base [-] Failed to configure store correctly:
        Store sheepdog could not be configured correctly. Reason: Error in
        store configuration: Unexpected error while running command.
        Command: collie
        Exit code: 127
        Stdout: ''
        Stderr: '/bin/sh: 1: collie: not found\n'
        Disabling add method.

        WARNING glance.store.base [-] Failed to configure store correctly:
        Store cinder could not be configured correctly. Reason: Cinder
        storage requires a context. Disabling add method.
    I think this is a bug, and it has also been reported by other
developers. I resolved it by installing *sheepdog* explicitly on the
compute node. After that installation, I didn't see the error any more.
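For reference, a sketch of the explicit install described above (Ubuntu package name assumed; the missing *collie* binary is what the glance sheepdog store probes for):

```shell
# Installing the sheepdog package provides the "collie" CLI, which
# satisfies the glance.store.sheepdog configuration check.
sudo apt-get install -y sheepdog
command -v collie   # should now resolve instead of "collie: not found"
```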


     *My localrc files:*

     *Controller:*
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=secret
HOST_IP=192.168.0.66
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=128
FLOATING_RANGE=192.168.0.22/24
MULTI_HOST=1
Q_PLUGIN=openvswitch
ENABLE_TENANT_TUNNELS=True
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service q-lbaas
DEST=/opt/stack
LOGFILE=stack.sh.log
RECLONE=yes
SCREEN_LOGDIR=/opt/stack/logs/screen
SYSLOG=True

  * I have enabled GRE tunneling.
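With ENABLE_TENANT_TUNNELS=True, one quick sanity check that the GRE mesh actually formed between the nodes (the exact port naming varies by release, so this is an assumption):

```shell
# Sketch: after q-agt is running on both nodes, br-tun on each side
# should show a gre port pointing at the peer's HOST_IP.
sudo ovs-vsctl show | grep -i -A2 gre
```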

  *Compute Node:*

ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=secret
HOST_IP=192.168.0.103
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=128
FLOATING_RANGE=192.168.0.22/24
MULTI_HOST=1
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.0.66
MYSQL_HOST=192.168.0.66
RABBIT_HOST=192.168.0.66
GLANCE_HOSTPORT=192.168.0.66:9292
Q_HOST=192.168.0.66
MATCHMAKER_REDIS_HOST=192.168.0.66
ENABLE_TENANT_TUNNELS=True
disable_service n-net
enable_service n-cpu rabbit q-agt neutron
Q_PLUGIN=openvswitch
DEST=/opt/stack
LOGFILE=stack.sh.log
RECLONE=yes
SCREEN_LOGDIR=/opt/stack/logs/screen
SYSLOG=True

  Is there any issue with my localrc configuration? Why are the VMs going
into the paused state?


Regards,
Vikash
