[Openstack] Cannot start "nova-api" service

John Griffith john.griffith at solidfire.com
Tue Nov 13 02:47:08 UTC 2012


On Mon, Nov 12, 2012 at 7:34 PM, Ahmed Al-Mehdi <ahmed at coraid.com> wrote:

>
>
> From: John Griffith <john.griffith at solidfire.com>
> Date: Monday, November 12, 2012 7:17 PM
> To: Jian Hua Geng <gengjh at cn.ibm.com>
> Cc: Ahmed Al-Mehdi <ahmed at coraid.com>, "openstack-bounces+gengjh=cn.ibm.com at lists.launchpad.net" <openstack-bounces+gengjh=cn.ibm.com at lists.launchpad.net>, "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>
>
> Subject: Re: [Openstack] Cannot start "nova-api" service
>
>
>
> On Mon, Nov 12, 2012 at 6:57 PM, Jian Hua Geng <gengjh at cn.ibm.com> wrote:
>
>> By default, both cinder and nova-api listen on the same port, 8776
>> (this should be a bug, I think).  If you are running cinder and
>> nova-api on the same machine, you can change the default value in
>> cinder.conf, e.g. osapi_volume_listen_port = 8777.
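>>
>> For example, a minimal sketch (assuming your cinder.conf keeps its
>> options in the standard [DEFAULT] section):
>>
>>     [DEFAULT]
>>     osapi_volume_listen_port = 8777
>>
>> Restart cinder-api afterwards so the new port takes effect.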
>>
>> --------------------------------------------------
>> Best regards,
>> David Geng
>>
>> --------------------------------------------------
>>
>>
>>
>> From: Ahmed Al-Mehdi <ahmed at coraid.com>
>> Sent by: openstack-bounces+gengjh=cn.ibm.com at lists.launchpad.net
>> Date: 11/13/2012 09:32 AM
>> To: Vishvananda Ishaya <vishvananda at gmail.com>
>> Cc: "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>
>> Subject: Re: [Openstack] Cannot start "nova-api" service
>>
>> Hello,
>>
>> Can someone please help me with a nova-api issue?  After installing all
>> the nova services, everything seems to be running fine except nova-api.
>> I even rebooted my controller node, with no luck: after the reboot all
>> services are running except nova-api.  When I start nova-api manually,
>> it crashes with the following error: "*error: [Errno 98] Address already
>> in use*".  I installed nova-volume earlier during the install process,
>> but later installed cinder and made the necessary modifications (as far
>> as I can tell) to nova.conf to use cinder for block storage.  Should I
>> uninstall nova-volume?
>>
>> 2012-11-12 14:46:24 INFO keystone.middleware.auth_token [-] Starting
>> keystone auth_token middleware
>> 2012-11-12 14:46:24 INFO keystone.middleware.auth_token [-] Using
>> /var/lib/nova/keystone-signing as cache directory for signing certificate
>> 2012-11-12 14:46:24 CRITICAL nova [-] [Errno 98] Address already in use
>> 2012-11-12 14:46:24 TRACE nova Traceback (most recent call last):
>> 2012-11-12 14:46:24 TRACE nova   File "/usr/bin/nova-api", line 50, in
>> <module>
>> 2012-11-12 14:46:24 TRACE nova     server = service.WSGIService(api)
>> 2012-11-12 14:46:24 TRACE nova   File
>> "/usr/lib/python2.7/dist-packages/nova/service.py", line 584, in __init__
>> 2012-11-12 14:46:24 TRACE nova     port=self.port)
>> 2012-11-12 14:46:24 TRACE nova   File
>> "/usr/lib/python2.7/dist-packages/nova/wsgi.py", line 72, in __init__
>> 2012-11-12 14:46:24 TRACE nova     self._socket = eventlet.listen((host,
>> port), backlog=backlog)
>> 2012-11-12 14:46:24 TRACE nova   File
>> "/usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, in
>> listen
>> 2012-11-12 14:46:24 TRACE nova     sock.bind(addr)
>> 2012-11-12 14:46:24 TRACE nova   File "/usr/lib/python2.7/socket.py",
>> line 224, in meth
>> 2012-11-12 14:46:24 TRACE nova     return getattr(self._sock,name)(*args)
>> 2012-11-12 14:46:24 TRACE nova error: [Errno 98] Address already in use
>> 2012-11-12 14:46:24 TRACE nova
>> 2012-11-12 14:46:24 INFO nova.service [-] Parent process has died
>> unexpectedly, exiting
>> 2012-11-12 14:46:24 INFO nova.service [-] Parent process has died
>> unexpectedly, exiting
>> 2012-11-12 14:46:24 INFO nova.wsgi [-] Stopping WSGI server.
>> 2012-11-12 14:46:24 INFO nova.wsgi [-] Stopping WSGI server.
>>
>> I would highly appreciate any pointers to understanding or resolving
>> this issue.
>>
>> Regards,
>> Ahmed.
>>
>>
>> From: Ahmed Al-Mehdi <ahmed at coraid.com>
>> Date: Friday, November 9, 2012 12:45 AM
>> To: Vishvananda Ishaya <vishvananda at gmail.com>
>> Cc: "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>
>> Subject: Re: [Openstack] Cannot start "nova-api" service
>>
>>
>>
>>    From: Vishvananda Ishaya <vishvananda at gmail.com>
>>    Date: Thursday, November 8, 2012 8:18 PM
>>    To: Ahmed Al-Mehdi <ahmed at coraid.com>
>>    Cc: "openstack at lists.launchpad.net" <openstack at lists.launchpad.net>
>>    Subject: Re: [Openstack] Cannot start "nova-api" service
>>
>>       On Nov 8, 2012, at 7:01 PM, Ahmed Al-Mehdi <ahmed at coraid.com> wrote:
>>       Vish,
>>
>>          I am running cinder-api.   The following two lines are present
>>          in nova.conf.
>>
>>          volume_api_class=nova.volume.cinder.API
>>          enabled_apis=ec2,osapi_compute,metadata
>>
>>          Do I need to re-sync the db, or add any additional lines to
>>          nova.conf?
>>
>>       No, that is it.  Are you sure that a) you don't have another
>>       nova-api, nova-metadata, or nova-api-os-compute process running,
>>       and b) your nova.conf is being read properly?
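>>
>>       One way to check both, as a sketch (assumes the net-tools netstat
>>       is installed; the pattern matches the 877x API ports):
>>
>>           ps aux | egrep 'nova-(api|metadata)'
>>           sudo netstat -tlnp | grep ':877'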
>>
>>       Vish
>>
>>    As far as I can tell, no other nova-api, nova-metadata, or
>>    nova-api-os-compute process is running.  Is there another way to
>>    confirm besides running "ps aux"?  And how can I tell if nova.conf is
>>    being read properly?
>>
>>    root at bodega:~# ps aux | grep nova
>>    nova       914  0.0  0.0  37952  1312 ?        Ss   16:01   0:00 su
>>    -s /bin/sh -c exec nova-novncproxy --config-file=/etc/nova/nova.conf nova
>>    nova       916  0.0  0.2 122976 24108 ?        S    16:01   0:01
>>    /usr/bin/python /usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf
>>    nova      1235  0.0  0.0  37952  1312 ?        Ss   16:01   0:00 su
>>    -s /bin/sh -c exec nova-cert --config-file=/etc/nova/nova.conf nova
>>    nova      1243  0.0  0.0  37952  1308 ?        Ss   16:01   0:00 su
>>    -s /bin/sh -c exec nova-consoleauth --config-file=/etc/nova/nova.conf nova
>>    nova      1244  0.2  0.6 122996 51232 ?        S    16:01   1:12
>>    /usr/bin/python /usr/bin/nova-cert --config-file=/etc/nova/nova.conf
>>    nova      1249  0.2  0.6 122992 51252 ?        S    16:01   1:13
>>    /usr/bin/python /usr/bin/nova-consoleauth --config-file=/etc/nova/nova.conf
>>    nova      1252  0.0  0.0  37952  1312 ?        Ss   16:01   0:00 su
>>    -s /bin/sh -c exec nova-network --config-file=/etc/nova/nova.conf nova
>>    nova      1255  0.0  0.0  37952  1308 ?        Ss   16:01   0:00 su
>>    -s /bin/sh -c exec nova-scheduler --config-file=/etc/nova/nova.conf nova
>>    nova      1259  0.3  0.6 124964 53100 ?        S    16:01   1:16
>>    /usr/bin/python /usr/bin/nova-network --config-file=/etc/nova/nova.conf
>>    nova      1260  0.3  0.7 151856 59068 ?        S    16:01   1:16
>>    /usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf
>>    root      3509  0.0  0.0   9388   920 pts/3    S+   22:55   0:00 grep
>>    --color=auto nova
>>    root at bodega:~#
>>    root at bodega:~#
>>    root at bodega:~# ls -l /etc/nova/
>>    total 32
>>    -rw-r----- 1 nova nova 3588 Sep 25 17:48 api-paste.ini
>>    -rw-r-xr-x 1 nova nova 1329 Oct 20 19:16 logging.conf
>>    -rw-r----- 1 nova nova 2203 Nov  8 18:34 nova.conf
>>    -rw-r----- 1 root root  434 Nov  5 10:44 nova.conf.orig.ahmed
>>    -rw-r----- 1 nova nova 5181 Sep 25 17:48 policy.json
>>    -rw-r--r-- 1 root root  304 Sep 25 17:48 rootwrap.conf
>>    drwxr-xr-x 2 root root 4096 Nov  5 10:36 rootwrap.d
>>    root at bodega:~#
>>
>>
>>    Can you help me understand the following error message in the log file:
>>
>>    *2012-11-08 23:31:27 CRITICAL nova [-] [Errno 98] Address already in
>>    use*
>>
>>    By address, are we talking about a TCP port number?  If so, what is
>>    the port number?
>>
>>    Thank you,
>>    Ahmed.
>>
> Ahmed,
>
> Seems to me like you still have nova-volume configured.  BTW, as
> mentioned earlier, nova-volume and cinder-volume using the same port is
> NOT a bug; it's by design.  Also, there seems to be some confusion here:
> osapi_compute does NOT use 8776, it uses 8774 by default.
>
> Something that was pointed out earlier today is that the install
> document lists everything under the [keystone_authtoken] ini heading.
> If you copied your nova.conf exactly like the doc, then your entry
> enabled_apis=ec2,osapi_compute,metadata is going to be ignored.  See
> the sketch below.
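>
> As an illustrative sketch (not your actual file): nova.conf is an ini
> file, so every option is attributed to the most recent section header
> above it.  A layout like
>
>     [keystone_authtoken]
>     auth_host = 127.0.0.1
>     enabled_apis = ec2,osapi_compute,metadata
>
> leaves enabled_apis inside [keystone_authtoken], where nova never looks
> for it, so the compute defaults win.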
>
> Would you please provide your nova.conf and cinder.conf files (link to a
> pastebin perhaps), and we can verify this.
>
> John
>
>
> Hi John,
>
> When you say I still have nova-volume configured, do you mean it is
> still running on my controller node?  I don't think nova-volume is
> running (based on the output of "ps aux | grep nova"), but I could be
> wrong.  However, nova-volume is still configured in keystone.  I copied
> the nova.conf and cinder.conf files to pastebin -
> http://pastebin.com/xtpVKzs0.  Thank you very much for your help.
>
> root at bodega:~/ahmed# keystone endpoint-list
>
> +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+
> |                id                |   region  |                    publicurl                    |                   internalurl                   |                  adminurl                  |
> +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+
> | 1753b8533e474cf0934cd2eb1d23be45 | RegionOne | http://10.176.20.158:8888/v1/AUTH_%(tenant_id)s | http://10.176.20.158:8888/v1/AUTH_%(tenant_id)s | http://10.176.20.158:8888/v1               |
> | 35c19563fce34c04a20cc82952b096b5 | RegionOne | http://10.176.20.158:8773/services/Cloud        | http://10.176.20.158:8773/services/Cloud        | http://10.176.20.158:8773/services/Admin   |
> | 36f5a5c3021a485db6b900aee9d7520c | RegionOne | http://10.176.20.158:9292/v1                    | http://10.176.20.158:9292/v1                    | http://10.176.20.158:9292/v1               |
> | 7fdf2e50d29a454897f3c5d395a8326f | RegionOne | http://10.176.20.158:8774/v2/%(tenant_id)s      | http://10.176.20.158:8774/v2/%(tenant_id)s      | http://10.176.20.158:8774/v2/%(tenant_id)s |
> | 9b1ade95c694401cb61362daf281713b | RegionOne | http://10.176.20.158:5000/v2.0                  | http://10.176.20.158:5000/v2.0                  | http://10.176.20.158:35357/v2.0            |
> | d728f31d9745467aaf53eeeba633ffe4 | RegionOne | http://10.176.20.158:8776/v1/%(tenant_id)s      | http://10.176.20.158:8776/v1/%(tenant_id)s      | http://10.176.20.158:8776/v1/%(tenant_id)s |
> | e298efba7f0148819c400a26e7f6f448 | RegionOne | http://10.176.20.158:8776/v1/%(tenant_id)s      | http://10.176.20.158:8776/v1/%(tenant_id)s      | http://10.176.20.158:8776/v1/%(tenant_id)s |
> +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+
> root at bodega:~/ahmed# keystone service-list
>
> +----------------------------------+----------+--------------+---------------------------+
> |                id                |   name   |     type     |        description        |
> +----------------------------------+----------+--------------+---------------------------+
> | 083662fed26d490b88172a3aa638107a |  volume  |    volume    |    Nova Volume Service    |
> | 3224949d951a4fb3b45adb5778caebfe |  cinder  |    volume    |   Cinder Volume Service   |
> | 55f097212ab948e5a2bf13e47ac1be9c |   ec2    |     ec2      |  EC2 Compatibility Layer  |
> | 611a8d8380de4671863c2cd59a4d5bd8 |  glance  |    image     |    Glance Image Service   |
> | 90eedca2364b4a5bba477be31738c052 | keystone |   identity   | Keystone Identity Service |
> | a6552ffaa4904ec09ef399d71dd5e18f |  swift   | object-store |   Object Storage Service  |
> | f0f5c38f832f4584b93c562d1d756fa3 |   nova   |   compute    |    Nova Compute Service   |
> +----------------------------------+----------+--------------+---------------------------+
> root at bodega:~/ahmed#
>
>
> Thank you,
> Ahmed.
>
>
Ahmed,

This entry in your nova.conf is the issue: [keystone_authtoken]

Remove that line and restart, and you should be all set.  What's
happening in your case is that everything after this entry that is NOT
keystone_authtoken related is being ignored, and the defaults are picked
up instead.
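
A minimal sketch of the corrected layout (illustrative only; your other
options omitted):

    [DEFAULT]
    volume_api_class=nova.volume.cinder.API
    enabled_apis=ec2,osapi_compute,metadata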

John