[Openstack-operators] Cannot launch instances on Ocata.

Erik McCormick emccormick at cirrusseven.com
Wed May 17 21:19:39 UTC 2017


You'll want to check nova-scheduler.log (on the controller) and
nova-compute.log (on the compute node). Look for your request ID and work
forward from there. Those logs should shed some more light on what the
issue is.
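
For example, something like this should pull out the relevant entries
(using the request ID from your conductor traceback; the paths assume the
stock Ubuntu package layout you already have):

  grep req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 /var/log/nova/nova-scheduler.log   # on the controller
  grep req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 /var/log/nova/nova-compute.log     # on the compute node

The scheduler lines for that request should show which filter, if any,
removed your host from consideration.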

-Erik

On Wed, May 17, 2017 at 5:09 PM, Andy Wojnarek
<andy.wojnarek at theatsgroup.com> wrote:
> Hi,
>
> I have a new OpenStack cloud running in our lab, but I am unable to launch
> instances. This is Ocata running on Ubuntu 16.04.2.
>
> Here are the errors I am getting when trying to launch an instance.
>
> On my controller node, in log file /var/log/nova/nova-conductor.log:
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to schedule instances
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most recent call last):
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 866, in schedule_and_build_instances
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 597, in _schedule_instances
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 371, in wrapped
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return func(*args, **kwargs)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 32, in select_destinations
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 129, in select_destinations
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     retry=self.retry)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     timeout=timeout, retry=retry)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     retry=retry)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     raise result
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most recent call last):
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     return func(*args, **kwargs)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 98, in select_destinations
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager     raise exception.NoValidHost(reason=reason)
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
>
> 2017-05-17 16:48:33.686 2654 DEBUG oslo_db.sqlalchemy.engines [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
>
> 2017-05-17 16:48:36.013 2654 WARNING nova.scheduler.utils [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
>
> The hypervisor is up:
>
> root at gvicopnstk01:/var/log/nova# openstack hypervisor list
> +----+---------------------+-----------------+-----------------+-------+
> | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
> +----+---------------------+-----------------+-----------------+-------+
> |  1 | gvicopnstk02        | QEMU            | 192.168.241.115 | up    |
> +----+---------------------+-----------------+-----------------+-------+
>
> Services are up:
>
> root at gvicopnstk01:/var/log/nova# openstack compute service list
> +----+------------------+--------------+----------+---------+-------+----------------------------+
> | ID | Binary           | Host         | Zone     | Status  | State | Updated At                 |
> +----+------------------+--------------+----------+---------+-------+----------------------------+
> |  6 | nova-consoleauth | gvicopnstk01 | internal | enabled | up    | 2017-05-17T21:07:00.000000 |
> |  7 | nova-scheduler   | gvicopnstk01 | internal | enabled | up    | 2017-05-17T21:07:00.000000 |
> |  9 | nova-conductor   | gvicopnstk01 | internal | enabled | up    | 2017-05-17T21:07:00.000000 |
> | 24 | nova-compute     | gvicopnstk02 | nova     | enabled | up    | 2017-05-17T21:07:07.000000 |
> +----+------------------+--------------+----------+---------+-------+----------------------------+
>
> I absolutely cannot figure this out. It's acting like there are no valid
> compute nodes available, but all of the OpenStack commands report that
> everything is up and running.
>
> Thanks,
>
> Andrew Wojnarek |  Sr. Systems Engineer    | ATS Group, LLC
>
> mobile 717.856.6901 | andy.wojnarek at TheATSGroup.com
>
> Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage
> Systems
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


