[Openstack-operators] Cannot launch instances on Ocata.

Andy Wojnarek andy.wojnarek at theatsgroup.com
Thu May 18 01:02:15 UTC 2017


I just realized netstat is only showing 8778 listening on tcp6. 

root@gvicopnstk01:/etc/nova# netstat -an | grep 8778
tcp6       0      0 :::8778                 :::*                    LISTEN
tcp6       0      0 192.168.241.114:8778    192.168.241.115:58948   FIN_WAIT2
tcp6       0      0 192.168.241.114:8778    192.168.241.115:58946   TIME_WAIT

How does this get started up? All of my nova services are running and apache2 is running. Does the port get opened by the WSGI script that Apache calls?
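
My guess (unverified; based on how the Ubuntu packages usually wire this up, so treat the paths as assumptions) is that the nova-placement-api package ships an apache2 site, so the 8778 listener comes from Apache/mod_wsgi rather than from a standalone nova service. Something like this should confirm where the listener is defined and where the application is mounted:

root@gvicopnstk01:~# apache2ctl -S 2>/dev/null | grep 8778
root@gvicopnstk01:~# grep -rn "Listen 8778\|WSGIScriptAlias" /etc/apache2/sites-enabled/

If the WSGIScriptAlias mounts the application at / rather than at /placement, requests for /placement/... would 404 inside the application itself, which would line up with the endpoint confusion and the 404s in the logs quoted below.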

Thanks,
Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
Galileo Performance Explorer Blog <http://galileosuite.com/blog/> Offers Deep Insights for Server/Storage Systems


On 5/17/17, 6:35 PM, "Andy Wojnarek" <andy.wojnarek at theatsgroup.com> wrote:

    Thanks!
    
    I see my endpoints had controller:8778/placement, and that appears to be wrong… I think they must have updated the Ocata installation guide accordingly, because I see it written correctly in the guide now. I also see other people hitting the same issue as me, where the placement API returns a 404.
    
    I added the new endpoints and restarted, but I’m still getting 404s. So now I’ll troubleshoot and figure out why I’m getting 404s in general.
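    
    One way to narrow the 404s down is to hit the API directly with a token and see where the application is actually rooted (the hostname below just matches my endpoints; adjust as needed):
    
    root@gvicopnstk01:~# TOKEN=$(openstack token issue -f value -c id)
    root@gvicopnstk01:~# curl -s -H "X-Auth-Token: $TOKEN" http://gvicopnstk01:8778/
    root@gvicopnstk01:~# curl -s -H "X-Auth-Token: $TOKEN" http://gvicopnstk01:8778/placement/
    
    Whichever URL returns a version document instead of a 404 is where the WSGI application is mounted, and the catalog endpoints have to match it. Notably, the requestlog entries below show the placement application itself answering with 404s, which is what I’d expect if it were mounted at / and treated the /placement prefix as part of the resource path.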
    
    
    nova-placement-api.log
    2017-05-17 18:29:57.341 4941 DEBUG nova.api.openstack.placement.requestlog [req-ef132927-a597-4d6c-9255-b1558d919bda 30ba9e287aff4fe5b806c327901192dd 15f08d64f0ce4dab95448b40a45ff8dd - default default] Starting request: 192.168.241.115 "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66" __call__ /usr/lib/python2.7/dist-packages/nova/api/openstack/placement/requestlog.py:38
    2017-05-17 18:29:57.342 4941 INFO nova.api.openstack.placement.requestlog [req-ef132927-a597-4d6c-9255-b1558d919bda 30ba9e287aff4fe5b806c327901192dd 15f08d64f0ce4dab95448b40a45ff8dd - default default] 192.168.241.115 "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66" status: 404 len: 52 microversion: 1.0
    2017-05-17 18:29:57.348 4944 DEBUG nova.api.openstack.placement.requestlog [req-18a8653e-9a30-4015-a66d-c783e44b0310 30ba9e287aff4fe5b806c327901192dd 15f08d64f0ce4dab95448b40a45ff8dd - default default] Starting request: 192.168.241.115 "POST /placement/resource_providers" __call__ /usr/lib/python2.7/dist-packages/nova/api/openstack/placement/requestlog.py:38
    2017-05-17 18:29:57.349 4944 INFO nova.api.openstack.placement.requestlog [req-18a8653e-9a30-4015-a66d-c783e44b0310 30ba9e287aff4fe5b806c327901192dd 15f08d64f0ce4dab95448b40a45ff8dd - default default] 192.168.241.115 "POST /placement/resource_providers" status: 404 len: 52 microversion: 1.0
    2017-05-17 18:29:57.389 4942 DEBUG nova.api.openstack.placement.requestlog [req-2cc2fdae-af5d-499e-93cd-449503218458 30ba9e287aff4fe5b806c327901192dd 15f08d64f0ce4dab95448b40a45ff8dd - default default] Starting request: 192.168.241.115 "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66/allocations" __call__ /usr/lib/python2.7/dist-packages/nova/api/openstack/placement/requestlog.py:38
    2017-05-17 18:29:57.391 4942 INFO nova.api.openstack.placement.requestlog [req-2cc2fdae-af5d-499e-93cd-449503218458 30ba9e287aff4fe5b806c327901192dd 15f08d64f0ce4dab95448b40a45ff8dd - default default] 192.168.241.115 "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66/allocations" status: 404 len: 52 microversion: 1.0
    
    
    nova_placement_access.log
    192.168.241.115 - - [17/May/2017:18:32:01 -0400] "POST /placement/resource_providers HTTP/1.1" 404 367 "-" "nova-compute keystoneauth1/2.18.0 python-requests/2.12.4 CPython/2.7.12"
    192.168.241.115 - - [17/May/2017:18:32:01 -0400] "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66/allocations HTTP/1.1" 404 367 "-" "nova-compute keystoneauth1/2.18.0 python-requests/2.12.4 CPython/2.7.12"
    192.168.241.115 - - [17/May/2017:18:33:02 -0400] "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66 HTTP/1.1" 404 368 "-" "nova-compute keystoneauth1/2.18.0 python-requests/2.12.4 CPython/2.7.12"
    192.168.241.115 - - [17/May/2017:18:33:02 -0400] "POST /placement/resource_providers HTTP/1.1" 404 367 "-" "nova-compute keystoneauth1/2.18.0 python-requests/2.12.4 CPython/2.7.12"
    192.168.241.115 - - [17/May/2017:18:33:02 -0400] "GET /placement/resource_providers/f4df986c-1a2c-4e0f-827e-9867f5b16b66/allocations HTTP/1.1" 404 367 "-" "nova-compute keystoneauth1/2.18.0 python-requests/2.12.4 CPython/2.7.12"
    
    root@gvicopnstk01:/var/log/apache2# openstack endpoint list | grep -i 8778
    | 4103a80eceb84e2cbdd1f75e1a34321c | RegionOne | placement    | placement    | True    | internal  | http://gvicopnstk01:8778/placement |
    | 46df3838adbe4af3955cd0dc5a97e11c | RegionOne | placement    | placement    | True    | public    | http://gvicopnstk01:8778/placement |
    | 5331ff425b384951b503e2dc07e38913 | RegionOne | placement    | placement    | True    | internal  | http://gvicopnstk01:8778           |
    | a83c1c9c85eb489aa3dd687aa91381e8 | RegionOne | placement    | placement    | True    | public    | http://gvicopnstk01:8778           |
    | a899bde2010641a192887c9b924de10a | RegionOne | placement    | placement    | True    | admin     | http://gvicopnstk01:8778           |
    | d5dddc0923c14cda918a502a587e6320 | RegionOne | placement    | placement    | True    | admin     | http://gvicopnstk01:8778/placement |
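    
    Since both the old and the new endpoint sets are still in the catalog, nova-compute may well keep picking up the stale ones. Deleting whichever set does not match the actual mount point should leave a single consistent set; for example, if the /placement variants turn out to be the wrong ones (IDs taken from the list above):
    
    root@gvicopnstk01:~# openstack endpoint delete 4103a80eceb84e2cbdd1f75e1a34321c
    root@gvicopnstk01:~# openstack endpoint delete 46df3838adbe4af3955cd0dc5a97e11c
    root@gvicopnstk01:~# openstack endpoint delete d5dddc0923c14cda918a502a587e6320
    
    followed by a restart of nova-compute on gvicopnstk02 so it re-reads the catalog.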
    
    
    Thanks,
    Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
    mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
    Galileo Performance Explorer Blog <http://galileosuite.com/blog/> Offers Deep Insights for Server/Storage Systems
    
    
    On 5/17/17, 5:32 PM, "Erik McCormick" <emccormick at cirrusseven.com> wrote:
    
        I'm just spit-balling because I haven't used Ocata yet, but your
        placement API may be up and still returning 404s, which suggests it's
        misconfigured somehow.
        
        Perhaps there are some useful nuggets in here:
        https://ask.openstack.org/en/question/102256/how-to-configure-placement-service-for-compute-node-on-ocata/
        
        or here:
        https://docs.openstack.org/developer/nova/placement.html
        
        I'm assuming that service spits out its own log. If so, is there anything in it?
        
        -Erik
        
        On Wed, May 17, 2017 at 5:23 PM, Andy Wojnarek
        <andy.wojnarek at theatsgroup.com> wrote:
        > I’m seeing the following on the controller:
        >
        >
        > 2017-05-17 17:20:12.049 2212 ERROR nova.scheduler.client.report [req-2953a824-f607-4b9d-86bf-f0d585fba787 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to retrieve filtered list of resource providers from placement API for filters {'resources': 'DISK_GB:1,MEMORY_MB:512,VCPU:1'}. Got 404: 404 Not Found
        >
        > The resource could not be found.
        >
        >    .
        >
        > I don’t see any scheduler log on the compute:
        >
        > root@gvicopnstk02:/var/log/nova# ls -ltr
        > total 58584
        > -rw-r--r-- 1 nova nova  1353724 May 14 06:25 nova-compute.log.4.gz
        > -rw-rw-r-- 1 nova nova       20 May 14 06:25 nova-manage.log.4.gz
        > -rw-r--r-- 1 nova nova  1349014 May 15 06:25 nova-compute.log.3.gz
        > -rw-rw-r-- 1 nova nova       20 May 15 06:25 nova-manage.log.3.gz
        > -rw-rw-r-- 1 nova nova       20 May 16 06:25 nova-manage.log.2.gz
        > -rw-r--r-- 1 nova nova  1350172 May 16 06:25 nova-compute.log.2.gz
        > -rw-r--r-- 1 nova nova 38318600 May 17 06:25 nova-compute.log.1
        > -rw-rw-r-- 1 nova nova        0 May 17 06:25 nova-manage.log.1
        > -rw-rw-r-- 1 nova nova        0 May 17 06:25 nova-manage.log
        > -rw-r--r-- 1 nova nova 17588608 May 17 17:21 nova-compute.log
        >
        >
        > 2017-05-17 17:20:46.483 1528 ERROR nova.scheduler.client.report [req-19cd6ce4-cb9c-4b7a-8cdb-0d3643f38701 - - - - -] Failed to create resource provider record in placement API for UUID f4df986c-1a2c-4e0f-827e-9867f5b16b66. Got 404: 404 Not Found
        >
        > The resource could not be found.
        >
        >
        > So it looks like the placement API isn’t working?
        >
        >
        > Placement looks up and running:
        >
        >
        > root@gvicopnstk01:/var/log/nova# openstack service list
        > +----------------------------------+-----------+-----------+
        > | ID                               | Name      | Type      |
        > +----------------------------------+-----------+-----------+
        > | 018d4b8b185b4137be4a2fee14b361ee | glance    | image     |
        > | 39d57b81f57140f9936bcc0a6f8ac244 | keystone  | identity  |
        > | 626d6cf1c9c842a39283b5595e597af0 | placement | placement |
        > | 6b90234efded4ed9b4344e8eb14f422b | neutron   | network   |
        > | ebbcff558b904f21818a656bd177f51b | nova      | compute   |
        >
        >
        >
        >
        >
        > root@gvicopnstk01:/var/log/nova# openstack endpoint list | grep -i placement
        > | 4103a80eceb84e2cbdd1f75e1a34321c | RegionOne | placement    | placement    | True    | internal  | http://gvicopnstk01:8778/placement |
        > | 46df3838adbe4af3955cd0dc5a97e11c | RegionOne | placement    | placement    | True    | public    | http://gvicopnstk01:8778/placement |
        > | d5dddc0923c14cda918a502a587e6320 | RegionOne | placement    | placement    | True    | admin     | http://gvicopnstk01:8778/placement |
        >
        >
        > root@gvicopnstk01:/var/log/nova# netstat -an | grep -i 8778
        > tcp6       0      0 :::8778                 :::*                    LISTEN
        > tcp6       0      0 192.168.241.114:8778    192.168.241.115:50734   TIME_WAIT
        > tcp6       0      0 192.168.241.114:8778    192.168.241.115:50736   FIN_WAIT2
        >
        > This placement thing is new to Ocata, right?
        >
        > Thanks,
        > Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
        > mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
        > Galileo Performance Explorer Blog <http://galileosuite.com/blog/> Offers Deep Insights for Server/Storage Systems
        >
        >
        > On 5/17/17, 5:19 PM, "Erik McCormick" <emccormick at cirrusseven.com> wrote:
        >
        >     You'll want to check the nova-scheduler.log (controller) and the
        >     nova-compute.log (compute). You can look for your request ID and then
        >     go forward from there. Those should shed some more light on what the
        >     issue is.
        >
        >     -Erik
        >
        >     On Wed, May 17, 2017 at 5:09 PM, Andy Wojnarek
        >     <andy.wojnarek at theatsgroup.com> wrote:
        >     > Hi,
        >     >
        >     >
        >     >
        >     > I have a new OpenStack cloud running in our lab, but I am unable to launch
        >     > instances. This is Ocata, running on Ubuntu 16.04.2.
        >     >
        >     >
        >     >
        >     > Here are the errors I am getting when trying to launch an instance:
        >     >
        >     >
        >     >
        >     > On my controller node in log file /var/log/nova/nova-conductor.log
        >     >
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to schedule instances
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most recent call last):
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 866, in schedule_and_build_instances
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 597, in _schedule_instances
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 371, in wrapped
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return func(*args, **kwargs)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 32, in select_destinations
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 129, in select_destinations
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     retry=self.retry)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     timeout=timeout, retry=retry)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     retry=retry)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     raise result
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most recent call last):
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return func(*args, **kwargs)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 98, in select_destinations
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     raise exception.NoValidHost(reason=reason)
        >     > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
        >     >
        >     > 2017-05-17 16:48:33.686 2654 DEBUG oslo_db.sqlalchemy.engines [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
        >     >
        >     > 2017-05-17 16:48:36.013 2654 WARNING nova.scheduler.utils [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
        >     >
        >     >
        >     >
        >     >
        >     >
        >     > The hypervisor is up:
        >     >
        >     > root@gvicopnstk01:/var/log/nova# openstack hypervisor list
        >     >
        >     > +----+---------------------+-----------------+-----------------+-------+
        >     > | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
        >     > +----+---------------------+-----------------+-----------------+-------+
        >     > |  1 | gvicopnstk02        | QEMU            | 192.168.241.115 | up    |
        >     > +----+---------------------+-----------------+-----------------+-------+
        >     >
        >     >
        >     >
        >     > Services are up:
        >     >
        >     > root@gvicopnstk01:/var/log/nova# openstack compute service list
        >     >
        >     > +----+------------------+--------------+----------+---------+-------+----------------------------+
        >     > | ID | Binary           | Host         | Zone     | Status  | State | Updated At                 |
        >     > +----+------------------+--------------+----------+---------+-------+----------------------------+
        >     > |  6 | nova-consoleauth | gvicopnstk01 | internal | enabled | up    | 2017-05-17T21:07:00.000000 |
        >     > |  7 | nova-scheduler   | gvicopnstk01 | internal | enabled | up    | 2017-05-17T21:07:00.000000 |
        >     > |  9 | nova-conductor   | gvicopnstk01 | internal | enabled | up    | 2017-05-17T21:07:00.000000 |
        >     > | 24 | nova-compute     | gvicopnstk02 | nova     | enabled | up    | 2017-05-17T21:07:07.000000 |
        >     > +----+------------------+--------------+----------+---------+-------+----------------------------+
        >     >
        >     >
        >     > I absolutely cannot figure this out. It’s acting like there are no valid
        >     > compute nodes available, but all the OpenStack commands report that
        >     > everything is up and running.
        >     >
        >     >
        >     >
        >     > Thanks,
        >     >
        >     > Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
        >     >
        >     > mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
        >     >
        >     > Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage
        >     > Systems
        >     >
        >     >
        >     >
        >
        >
        >
        
    
    
    _______________________________________________
    OpenStack-operators mailing list
    OpenStack-operators at lists.openstack.org
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
    



