[Openstack-operators] Cannot launch instances on Ocata.

Andy Wojnarek andy.wojnarek at theatsgroup.com
Wed May 17 21:23:22 UTC 2017


I’m seeing the following on the controller:


2017-05-17 17:20:12.049 2212 ERROR nova.scheduler.client.report [req-2953a824-f607-4b9d-86bf-f0d585fba787 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to retrieve filtered list of resource providers from placement API for filters {'resources': 'DISK_GB:1,MEMORY_MB:512,VCPU:1'}. Got 404: 404 Not Found

The resource could not be found.

   .

I don’t see any scheduler log on the compute:

root@gvicopnstk02:/var/log/nova# ls -ltr
total 58584
-rw-r--r-- 1 nova nova  1353724 May 14 06:25 nova-compute.log.4.gz
-rw-rw-r-- 1 nova nova       20 May 14 06:25 nova-manage.log.4.gz
-rw-r--r-- 1 nova nova  1349014 May 15 06:25 nova-compute.log.3.gz
-rw-rw-r-- 1 nova nova       20 May 15 06:25 nova-manage.log.3.gz
-rw-rw-r-- 1 nova nova       20 May 16 06:25 nova-manage.log.2.gz
-rw-r--r-- 1 nova nova  1350172 May 16 06:25 nova-compute.log.2.gz
-rw-r--r-- 1 nova nova 38318600 May 17 06:25 nova-compute.log.1
-rw-rw-r-- 1 nova nova        0 May 17 06:25 nova-manage.log.1
-rw-rw-r-- 1 nova nova        0 May 17 06:25 nova-manage.log
-rw-r--r-- 1 nova nova 17588608 May 17 17:21 nova-compute.log


2017-05-17 17:20:46.483 1528 ERROR nova.scheduler.client.report [req-19cd6ce4-cb9c-4b7a-8cdb-0d3643f38701 - - - - -] Failed to create resource provider record in placement API for UUID f4df986c-1a2c-4e0f-827e-9867f5b16b66. Got 404: 404 Not Found

The resource could not be found.


So it looks like the placement API isn’t working?


Placement looks to be up and running:


root@gvicopnstk01:/var/log/nova# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 018d4b8b185b4137be4a2fee14b361ee | glance    | image     |
| 39d57b81f57140f9936bcc0a6f8ac244 | keystone  | identity  |
| 626d6cf1c9c842a39283b5595e597af0 | placement | placement |
| 6b90234efded4ed9b4344e8eb14f422b | neutron   | network   |
| ebbcff558b904f21818a656bd177f51b | nova      | compute   |
+----------------------------------+-----------+-----------+





root@gvicopnstk01:/var/log/nova# openstack endpoint list | grep -i placement
| 4103a80eceb84e2cbdd1f75e1a34321c | RegionOne | placement    | placement    | True    | internal  | http://gvicopnstk01:8778/placement |
| 46df3838adbe4af3955cd0dc5a97e11c | RegionOne | placement    | placement    | True    | public    | http://gvicopnstk01:8778/placement |
| d5dddc0923c14cda918a502a587e6320 | RegionOne | placement    | placement    | True    | admin     | http://gvicopnstk01:8778/placement |


root@gvicopnstk01:/var/log/nova# netstat -an | grep -i 8778
tcp6       0      0 :::8778                 :::*                    LISTEN
tcp6       0      0 192.168.241.114:8778    192.168.241.115:50734   TIME_WAIT
tcp6       0      0 192.168.241.114:8778    192.168.241.115:50736   FIN_WAIT2
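One way to narrow this down might be to hit the placement root directly instead of going through the service catalog. A placement service that is actually mounted where the endpoint says it is should answer GET on its root with a JSON version document (or a 401 without a token); a bare 404 there would suggest the WSGI application is not being served at /placement. A minimal probe sketch, assuming the http://gvicopnstk01:8778/placement URL from the endpoint list above is reachable:

```python
import urllib.request
import urllib.error

def probe(url, token=None):
    """GET a URL and return (status_code, body) without raising on 4xx/5xx."""
    req = urllib.request.Request(url)
    if token:
        # Placement normally wants a Keystone token; even without one, a
        # correctly mounted service should answer 401, not 404.
        req.add_header("X-Auth-Token", token)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as e:
        # 4xx/5xx still carry a useful body (e.g. webob's
        # "The resource could not be found.").
        return e.code, e.read().decode()

# Intended use against the endpoint from `openstack endpoint list`, e.g.:
#   probe("http://gvicopnstk01:8778/placement")
# 404 here means nothing is served at that path; 200/401 means it is mounted.
```

A curl -i against the same URL would show the same thing; the point is to separate "the endpoint URL is wrong or unmounted" from "nova is misconfigured to find it".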

This placement thing is new in Ocata, right?

Thanks,
Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
Galileo Performance Explorer Blog <http://galileosuite.com/blog/> Offers Deep Insights for Server/Storage Systems


On 5/17/17, 5:19 PM, "Erik McCormick" <emccormick at cirrusseven.com> wrote:

    You'll want to check the nova-scheduler.log (controller) and the
    nova-compute.log (compute). You can look for your request ID and then
    go forward from there. Those should shed some more light on what the
    issue is.
    
    -Erik
    
    On Wed, May 17, 2017 at 5:09 PM, Andy Wojnarek
    <andy.wojnarek at theatsgroup.com> wrote:
    > Hi,
    >
    >
    >
    > I have a new OpenStack cloud running in our lab, but I am unable to launch
    > instances. This is Ocata running on Ubuntu 16.04.2.
    >
    >
    >
    > Here are the errors I am getting when trying to launch an instance:
    >
    >
    >
    > On my controller node in log file /var/log/nova/nova-conductor.log
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    > [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db
    > 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to schedule instances
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most
    > recent call last):
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 866, in
    > schedule_and_build_instances
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    > request_specs[0].to_legacy_filter_properties_dict())
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 597, in
    > _schedule_instances
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     hosts =
    > self.scheduler_client.select_destinations(context, spec_obj)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 371, in
    > wrapped
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return
    > func(*args, **kwargs)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line
    > 51, in select_destinations
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return
    > self.queryclient.select_destinations(context, spec_obj)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line
    > 37, in __run_method
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return
    > getattr(self.instance, __name)(*args, **kwargs)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 32,
    > in select_destinations
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return
    > self.scheduler_rpcapi.select_destinations(context, spec_obj)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 129, in
    > select_destinations
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return
    > cctxt.call(ctxt, 'select_destinations', **msg_args)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169,
    > in call
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    > retry=self.retry)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in
    > _send
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    > timeout=timeout, retry=retry)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
    > line 458, in send
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     retry=retry)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
    > line 449, in _send
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     raise result
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    > NoValidHost_Remote: No valid host was found. There are not enough hosts
    > available.
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most
    > recent call last):
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218,
    > in inner
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return
    > func(*args, **kwargs)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 98, in
    > select_destinations
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     dests =
    > self.driver.select_destinations(ctxt, spec_obj)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
    > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line
    > 79, in select_destinations
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     raise
    > exception.NoValidHost(reason=reason)
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager NoValidHost: No
    > valid host was found. There are not enough hosts available.
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    >
    > 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
    >
    > 2017-05-17 16:48:33.686 2654 DEBUG oslo_db.sqlalchemy.engines
    > [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db
    > 4fd0307bf4b74c5a8718b180c24c7cff - - -] MySQL server mode set to
    > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
    > _check_effective_sql_mode
    > /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
    >
    > 2017-05-17 16:48:36.013 2654 WARNING nova.scheduler.utils
    > [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db
    > 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to
    > compute_task_build_instances: No valid host was found. There are not enough
    > hosts available.
    >
    >
    >
    >
    >
    > The hypervisor is up:
    >
    > root@gvicopnstk01:/var/log/nova# openstack hypervisor list
    >
    > +----+---------------------+-----------------+-----------------+-------+
    >
    > | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
    >
    > +----+---------------------+-----------------+-----------------+-------+
    >
    > |  1 | gvicopnstk02        | QEMU            | 192.168.241.115 | up    |
    >
    > +----+---------------------+-----------------+-----------------+-------+
    >
    >
    > Services are up:
    >
    > root@gvicopnstk01:/var/log/nova# openstack compute service list
    >
    > +----+------------------+--------------+----------+---------+-------+----------------------------+
    >
    > | ID | Binary           | Host         | Zone     | Status  | State |
    > Updated At                 |
    >
    > +----+------------------+--------------+----------+---------+-------+----------------------------+
    >
    > |  6 | nova-consoleauth | gvicopnstk01 | internal | enabled | up    |
    > 2017-05-17T21:07:00.000000 |
    >
    > |  7 | nova-scheduler   | gvicopnstk01 | internal | enabled | up    |
    > 2017-05-17T21:07:00.000000 |
    >
    > |  9 | nova-conductor   | gvicopnstk01 | internal | enabled | up    |
    > 2017-05-17T21:07:00.000000 |
    >
    > | 24 | nova-compute     | gvicopnstk02 | nova     | enabled | up    |
    > 2017-05-17T21:07:07.000000 |
    >
    > +----+------------------+--------------+----------+---------+-------+----------------------------+
    >
    >
    > I absolutely cannot figure this out. It's acting like there are no valid
    > compute nodes available, but all the OpenStack commands report that
    > everything is up and running.
    >
    >
    >
    > Thanks,
    >
    > Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
    >
    > mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
    >
    > Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage
    > Systems
    >
    >
    > _______________________________________________
    > OpenStack-operators mailing list
    > OpenStack-operators at lists.openstack.org
    > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
    >
    



