[Openstack] [Devstack][VCenter][n-cpu]Error: Service n-cpu is not running

foss geek thefossgeek at gmail.com
Mon Sep 15 11:44:41 UTC 2014


Dear All,

I am using the Devstack Icehouse stable branch to integrate OpenStack with
vCenter, on CentOS 6.5 64-bit.

I am hitting the issue below while running ./stack.sh. Any pointers or help
would be greatly appreciated.
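For reference, this is how the Devstack tree was fetched (I am showing the
standard upstream repo URL here as an assumption; a mirror clone would behave
the same):

$ git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git  # standard upstream repo
$ cd devstack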

Here is the related error log:


$./stack.sh

<snip>

2014-09-15 11:35:27.881 | + [[ -x /opt/stack/devstack/local.sh ]]
2014-09-15 11:35:27.898 | + service_check
2014-09-15 11:35:27.910 | + local service
2014-09-15 11:35:27.925 | + local failures
2014-09-15 11:35:27.936 | + SCREEN_NAME=stack
2014-09-15 11:35:27.953 | + SERVICE_DIR=/opt/stack/status
2014-09-15 11:35:27.964 | + [[ ! -d /opt/stack/status/stack ]]
2014-09-15 11:35:27.981 | ++ ls /opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:27.999 | + failures=/opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:28.006 | + for service in '$failures'
2014-09-15 11:35:28.023 | ++ basename /opt/stack/status/stack/n-cpu.failure
2014-09-15 11:35:28.034 | + service=n-cpu.failure
2014-09-15 11:35:28.051 | + service=n-cpu
2014-09-15 11:35:28.057 | + echo 'Error: Service n-cpu is not running'
2014-09-15 11:35:28.074 | Error: Service n-cpu is not running
2014-09-15 11:35:28.091 | + '[' -n /opt/stack/status/stack/n-cpu.failure ']'
2014-09-15 11:35:28.098 | + die 1164 'More details about the above errors can be found with screen, with ./rejoin-stack.sh'
2014-09-15 11:35:28.109 | + local exitcode=0
2014-09-15 11:35:28.126 | [Call Trace]
2014-09-15 11:35:28.139 | ./stack.sh:1313:service_check
2014-09-15 11:35:28.174 | /opt/stack/devstack/functions-common:1164:die
2014-09-15 11:35:28.184 | [ERROR] /opt/stack/devstack/functions-common:1164 More details about the above errors can be found with screen, with ./rejoin-stack.sh

<snip>
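service_check itself only reports that a failure marker file exists under
/opt/stack/status/stack; the real error has to be read from the failed
service's screen log, quoted next. These are the commands I used to locate it
(the screen-n-cpu.log name follows from the SCREEN_LOGDIR=/opt/stack/logs
setting in my local.conf below, and the exact file may carry a timestamp
suffix):

$ ls /opt/stack/status/stack/*.failure
/opt/stack/status/stack/n-cpu.failure
$ tail -n 100 /opt/stack/logs/screen-n-cpu.log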


Here is the n-cpu screen log:
==================

$ cd /opt/stack/nova && /usr/bin/nova-compute --config-file
/etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-cpu.pid; fg ||
echo "n-cpu failed to start" | tee "/opt/stack/status/stack/n-cpu.failure"
2> eror
[1] 32476
cd /opt/stack/nova && /usr/bin/nova-compute --config-file
/etc/nova/nova.conf
2014-09-15 08:00:50.685 DEBUG nova.servicegroup.api [-] ServiceGroup driver
defined as an instance of db from (pid=32477) __new__
/opt/stack/nova/nova/servicegroup/api.py:65
2014-09-15 08:00:51.435 INFO nova.openstack.common.periodic_task [-]
Skipping periodic task _periodic_update_dns because its interval is negative
2014-09-15 08:00:52.104 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:00:52.178 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:00:52.186 INFO nova.virt.driver [-] Loading compute driver
'vmwareapi.VMwareVCDriver'
2014-09-15 08:01:21.188 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.188 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.825 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:21.826 DEBUG stevedore.extension [-] found extension
EntryPoint.parse('file = nova.image.download.file') from (pid=32477)
_load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156
2014-09-15 08:01:26.208 INFO oslo.messaging._drivers.impl_rabbit [-]
Connecting to AMQP server on 10.10.2.2:5672
2014-09-15 08:01:26.235 INFO oslo.messaging._drivers.impl_rabbit [-]
Connected to AMQP server on 10.10.2.2:5672
/usr/lib/python2.6/site-packages/amqp/channel.py:616: VDeprecationWarning:
The auto_delete flag for exchanges has been deprecated and will be removed
from py-amqp v1.5.0.
  warn(VDeprecationWarning(EXCHANGE_AUTODELETE_DEPRECATED))
2014-09-15 08:01:26.244 CRITICAL nova
[req-282f0493-f7d1-4215-bba2-4cf390efc6ac None None] TypeError: __init__()
got an unexpected keyword argument 'namedtuple_as_object'

2014-09-15 08:01:26.244 TRACE nova Traceback (most recent call last):
2014-09-15 08:01:26.244 TRACE nova   File "/usr/bin/nova-compute", line 10,
in <module>
2014-09-15 08:01:26.244 TRACE nova     sys.exit(main())
2014-09-15 08:01:26.244 TRACE nova   File
"/opt/stack/nova/nova/cmd/compute.py", line 72, in main
2014-09-15 08:01:26.244 TRACE nova     db_allowed=CONF.conductor.use_local)
2014-09-15 08:01:26.244 TRACE nova   File
"/opt/stack/nova/nova/service.py", line 274, in create
2014-09-15 08:01:26.244 TRACE nova     db_allowed=db_allowed)
2014-09-15 08:01:26.244 TRACE nova   File
"/opt/stack/nova/nova/service.py", line 156, in __init__
2014-09-15 08:01:26.244 TRACE nova
self.conductor_api.wait_until_ready(context.get_admin_context())
2014-09-15 08:01:26.244 TRACE nova   File
"/opt/stack/nova/nova/conductor/api.py", line 354, in wait_until_ready
2014-09-15 08:01:26.244 TRACE nova     timeout=timeout)
2014-09-15 08:01:26.244 TRACE nova   File
"/opt/stack/nova/nova/baserpc.py", line 62, in ping
2014-09-15 08:01:26.244 TRACE nova     return cctxt.call(context, 'ping',
arg=arg_p)
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 152,
in call
2014-09-15 08:01:26.244 TRACE nova     retry=self.retry)
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in
_send
2014-09-15 08:01:26.244 TRACE nova     timeout=timeout, retry=retry)
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 408, in send
2014-09-15 08:01:26.244 TRACE nova     retry=retry)
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 376, in _send
2014-09-15 08:01:26.244 TRACE nova     msg = rpc_common.serialize_msg(msg)
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/common.py", line
302, in serialize_msg
2014-09-15 08:01:26.244 TRACE nova     _MESSAGE_KEY:
jsonutils.dumps(raw_msg)}
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib/python2.6/site-packages/oslo/messaging/openstack/common/jsonutils.py",
line 172, in dumps
2014-09-15 08:01:26.244 TRACE nova     return json.dumps(value,
default=default, **kwargs)
2014-09-15 08:01:26.244 TRACE nova   File
"/usr/lib64/python2.6/site-packages/simplejson/__init__.py", line 237, in
dumps
2014-09-15 08:01:26.244 TRACE nova     **kw).encode(obj)
2014-09-15 08:01:26.244 TRACE nova TypeError: __init__() got an unexpected keyword argument 'namedtuple_as_object'
2014-09-15 08:01:26.244 TRACE nova
n-cpu failed to start
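Reading the traceback, the crash happens while oslo.messaging serializes the
conductor ping: jsonutils.dumps() passes a namedtuple_as_object keyword
through to simplejson, and the simplejson installed on this host rejects it.
Since that keyword only exists in newer simplejson releases, I suspect a
version mismatch with the python-simplejson RPM that CentOS 6.5 ships. Here
is what I am checking next (the upgrade line is an untested guess at a fix,
not something I have confirmed):

$ python -c 'import simplejson; print simplejson.__version__'
$ sudo pip install -U simplejson   # guess: replace the older distro simplejson

Is that the right direction, or is this something stack.sh should have
handled?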



Here is my local.conf:
=================


$ cat local.conf
[[local|localrc]]

# Credentials
DATABASE_PASSWORD=devstack
ADMIN_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
RABBIT_PASSWORD=devstack

# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon

## Neutron - Load Balancing
ENABLED_SERVICES+=,q-lbaas

## Neutron - VPN as a Service
ENABLED_SERVICES+=,q-vpn

## Neutron - Firewall as a Service
ENABLED_SERVICES+=,q-fwaas

# VLAN configuration
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True

# GRE tunnel configuration
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True


VIRT_DRIVER=vsphere
VMWAREAPI_IP=192.168.1.9
VMWAREAPI_USER=root
VMWAREAPI_PASSWORD=root@123
VMWAREAPI_CLUSTER=openstack

# Images
# Use this image when creating test instances
IMAGE_URLS+=",
http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img"
# Use this image when working with Orchestration (Heat)
IMAGE_URLS+=",
http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F17-x86_64-cfntools.qcow2
"

# Branches
KEYSTONE_BRANCH=stable/icehouse
NOVA_BRANCH=stable/icehouse
NEUTRON_BRANCH=stable/icehouse
SWIFT_BRANCH=stable/icehouse
GLANCE_BRANCH=stable/icehouse
CINDER_BRANCH=stable/icehouse
HEAT_BRANCH=stable/icehouse
TROVE_BRANCH=stable/icehouse
HORIZON_BRANCH=stable/icehouse

# Swift Configuration
SWIFT_REPLICAS=1
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5

# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs


Here is the /etc/nova/nova.conf generated by the Devstack script:
=============================================

[DEFAULT]
vif_plugging_timeout = 300
vif_plugging_is_fatal = True
service_neutron_metadata_proxy = True
linuxnet_interface_driver =
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
neutron_url = http://10.10.2.2:9696
neutron_region_name = RegionOne
neutron_admin_tenant_name = service
neutron_auth_strategy = keystone
neutron_admin_auth_url = http://10.10.2.2:35357/v2.0
neutron_admin_password = devstack
neutron_admin_username = neutron
network_api_class = nova.network.neutronv2.api.API
compute_driver = vmwareapi.VMwareVCDriver
glance_api_servers = 10.10.2.2:9292
rabbit_password = devstack
rabbit_hosts = 10.10.2.2
rpc_backend = nova.openstack.common.rpc.impl_kombu
ec2_dmz_host = 10.10.2.2
vncserver_proxyclient_address = 127.0.0.1
vncserver_listen = 127.0.0.1
vnc_enabled = true
xvpvncproxy_base_url = http://10.10.2.2:6081/console
novncproxy_base_url = http://10.10.2.2:6080/vnc_auto.html
logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s
^[[01;35m%(instance)s^[[00m
logging_debug_format_suffix = ^[[00;33mfrom (pid=%(process)d) %(funcName)s
%(pathname)s:%(lineno)d^[[00m

logging_default_format_string = %(asctime)s.%(msecs)03d
%(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s]
^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_context_format_string = %(asctime)s.%(msecs)03d
%(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s
^[[00;36m%(user_name)s %(project_name)s%(color)s]
^[[01;35m%(instance)s%(color)s%(message)s^[[00m
force_config_drive = always
instances_path = /opt/stack/data/nova/instances
lock_path = /opt/stack/data/nova
state_path = /opt/stack/data/nova
volume_api_class = nova.volume.cinder.API
enabled_apis = ec2,osapi_compute,metadata
bindir = /usr/bin
instance_name_template = instance-%08x
sql_connection = mysql://root:devstack@127.0.0.1/nova?charset=utf8
metadata_workers = 4
ec2_workers = 4
osapi_compute_workers = 4
my_ip = 10.10.2.2
osapi_compute_extension =
nova.api.openstack.compute.contrib.standard_extensions
s3_port = 3333
s3_host = 10.10.2.2
default_floating_pool = public
fixed_range =
force_dhcp_release = True
dhcpbridge_flagfile = /etc/nova/nova.conf
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
rootwrap_config = /etc/nova/rootwrap.conf
api_paste_config = /etc/nova/api-paste.ini
allow_resize_to_same_host = True
auth_strategy = keystone
debug = True
verbose = True

[conductor]
workers = 4

[osapi_v3]
enabled = True

[keystone_authtoken]
signing_dir = /var/cache/nova
admin_password = devstack
admin_user = nova
cafile =
admin_tenant_name = service
auth_protocol = http
auth_port = 35357
auth_host = 10.10.2.2

[spice]
enabled = false
html5proxy_base_url = http://10.10.2.2:6082/spice_auto.html

[vmware]
integration_bridge = br-int
cluster_name = openstack
host_password = root@123
host_username = root
host_ip = 192.168.1.9

[keymgr]
fixed_key = 674122EBF84A4F33918DC0DB432D6C20FBD17F8F49A4840F41CD7AC3DE7CAEEE
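In case package versions matter here, these are the commands I will use to
capture them when I follow up:

$ rpm -q python-simplejson
$ pip freeze | egrep -i 'simplejson|oslo|amqp|kombu|anyjson'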


Thanks for your time.