Issue installing OpenStack in CentOS VMs - cannot launch instances

Eugen Block eblock at nde.ag
Wed Nov 16 08:03:33 UTC 2022


Hi,

these lines indicate that the compute node has been discovered successfully:

[root@controller0 ~]# /bin/sh -c "nova-manage cell_v2 list_hosts"
+-----------+--------------------------------------+---------------------+
| Cell Name | Cell UUID                            | Hostname            |
+-----------+--------------------------------------+---------------------+
| cell1     | 79648906-0ea8-4672-8c5f-73d5998a7b73 | compute0.os.lab.com |
+-----------+--------------------------------------+---------------------+

The "No valid host was found" message can mean many things; you could  
turn on debug logs for nova and see what exactly it complains about.
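For example, a minimal sketch of how to do that, assuming the default config path from the install guide:

```ini
# /etc/nova/nova.conf -- on both the controller and the compute node
[DEFAULT]
debug = true
```

Then restart the nova services (openstack-nova-scheduler and openstack-nova-conductor on the controller, openstack-nova-compute on the compute node) and retry the boot; the scheduler log should then show which filter rejected the host. Since your scheduler log mentions "Got no allocation candidates", it could also be worth checking whether the compute node has registered a resource provider with Placement, e.g. with "openstack resource provider list" (requires the osc-placement plugin).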

Zitat von André Ferreira <andreocferreira at gmail.com>:

> Hello,
>
> I'm trying to set up an OpenStack cluster by following the instructions on
> https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-yoga
>
> I'm using two CentOS 8 VMs:
> - VM1: controller node
> - VM2: compute node
>
> After installing all the minimum services, I've tried to create a server
> instance but it's failing.
>
> From the logs, it looks like nova is not able to find a compute node on
> which to launch the instance:
>
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> [req-c1aa668e-73db-4799-baa4-d782ec5986e9 e6529a38880d4efcb55308277aeabb88
> 6a857bb3fb7f47849ff5a11d97968344 - default default] Failed to schedule
> instances: No valid host was found.
> Traceback (most recent call last):
>
> File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line
> 241, in inner
> return func(*args, **kwargs)
>
> File "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line
> 209, in select_destinations
> raise exception.NoValidHost(reason="")
>
> nova.exception.NoValidHost: No valid host was found.
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager Traceback (most
> recent call last):
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1549, in
> schedule_and_build_instances
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager instance_uuids,
> return_alternates=True)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 910, in
> _schedule_instances
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> return_alternates=return_alternates)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42,
> in select_destinations
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager instance_uuids,
> return_objects, return_alternates)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in
> select_destinations
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return
> cctxt.call(ctxt, 'select_destinations', **msg_args)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 192,
> in call
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager retry=self.retry,
> transport_options=self.transport_options)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128,
> in _send
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> transport_options=transport_options)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 691, in send
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> transport_options=transport_options)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 681, in _send
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager raise result
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> nova.exception_Remote.NoValidHost_Remote: No valid host was found.
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager Traceback (most
> recent call last):
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 241,
> in inner
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return
> func(*args, **kwargs)
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File
> "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line 209, in
> select_destinations
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager raise
> exception.NoValidHost(reason="")
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager
> nova.exception.NoValidHost: No valid host was found
>
>
> From nova-scheduler:
> 2022-11-15 16:54:11.797 3978 INFO nova.scheduler.manager
> [req-c1aa668e-73db-4799-baa4-d782ec5986e9 e6529a38880d4efcb55308277aeabb88
> 6a857bb3fb7f47849ff5a11d97968344 - default default] Got no allocation
> candidates from the Placement API. This could be due to insufficient
> resources or a temporary occurrence as compute nodes start up.
> 2022-11-15 16:54:23.160 3978 DEBUG oslo_service.periodic_task
> [req-d477e747-537b-48b2-8913-ef84447d5a21 - - - - -] Running periodic task
> SchedulerManager._discover_hosts_in_cells run_periodic_tasks /usr/li
> 2022-11-15 16:54:23.169 3978 DEBUG oslo_concurrency.lockutils
> [req-4dbefd76-0b81-47d9-b6cf-382f43e3505e - - - - -] Lock
> "79648906-0ea8-4672-8c5f-73d5998a7b73" acquired by
> "nova.context.set_target_cell.<loc
> .000s inner
> /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:390
> 2022-11-15 16:54:23.170 3978 DEBUG oslo_concurrency.lockutils
> [req-4dbefd76-0b81-47d9-b6cf-382f43e3505e - - - - -] Lock
> "79648906-0ea8-4672-8c5f-73d5998a7b73" "released" by
> "nova.context.set_target_cell.<l
> .001s inner
> /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
> 2022-11-15 16:54:59.097 3977 DEBUG oslo_service.periodic_task
> [req-bb4ca11b-2089-414c-b8d5-6c45aa58c1bf - - - - -] Running periodic task
> SchedulerManager._discover_hosts_in_cells run_periodic_tasks /usr/li
> 2022-11-15 16:54:59.112 3977 DEBUG oslo_concurrency.lockutils
> [req-cf62a2b4-6721-4f98-a97d-7b51e58a34b3 - - - - -] Lock
> "79648906-0ea8-4672-8c5f-73d5998a7b73" acquired by
> "nova.context.set_target_cell.<loc
> .000s inner
> /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:390
> 2022-11-15 16:54:59.113 3977 DEBUG oslo_concurrency.lockutils
> [req-cf62a2b4-6721-4f98-a97d-7b51e58a34b3 - - - - -] Lock
> "79648906-0ea8-4672-8c5f-73d5998a7b73" "released" by
> "nova.context.set_target_cell.<l
> .001s inner
> /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
>
>
> The problem seems to be that the compute node is not placed in the cell,
> and the list of hypervisors is also empty. I've searched online but can't
> find a way to fix this.
>
> (admin-rc) [andrefe@controller0 ~]$ openstack compute service list
> +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
> | ID                                   | Binary         | Host                   | Zone     | Status  | State | Updated At                 |
> +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
> | fa781cbb-732c-43db-8c9f-4c31bd73bbd2 | nova-scheduler | controller0.os.lab.com | internal | enabled | up    | 2022-11-15T15:59:40.000000 |
> | 106e9574-5ae3-45e2-a7c2-09ce3624a1d6 | nova-conductor | controller0.os.lab.com | internal | enabled | up    | 2022-11-15T15:59:40.000000 |
> | 28072820-609a-4066-abc8-affea51c3600 | nova-compute   | compute0.os.lab.com    | nova     | enabled | up    | 2022-11-15T15:59:41.000000 |
> +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
>
> [root@controller0 ~]# /bin/sh -c "nova-manage cell_v2 list_cells" nova
> +-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
> | Name  | UUID                                 | Transport URL                             | Database Connection                              | Disabled |
> +-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
> | cell0 | 00000000-0000-0000-0000-000000000000 | none:/                                    | mysql+pymysql://nova:****@controller0/nova_cell0 | False    |
> | cell1 | 79648906-0ea8-4672-8c5f-73d5998a7b73 | rabbit://openstack:****@controller0:5672/ | mysql+pymysql://nova:****@controller0/nova       | False    |
> +-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
>
> [root@controller0 ~]# /bin/sh -c "nova-manage cell_v2 list_hosts"
> +-----------+--------------------------------------+---------------------+
> | Cell Name | Cell UUID                            | Hostname            |
> +-----------+--------------------------------------+---------------------+
> | cell1     | 79648906-0ea8-4672-8c5f-73d5998a7b73 | compute0.os.lab.com |
> +-----------+--------------------------------------+---------------------+
>
> [root@controller0 ~]# /bin/sh -c "nova-manage cell_v2 discover_hosts
> --verbose" nova
> Found 2 cell mappings.
> Skipping cell0 since it does not contain hosts.
> Getting computes from cell 'cell1': 79648906-0ea8-4672-8c5f-73d5998a7b73
> Found 0 unmapped computes in cell: 79648906-0ea8-4672-8c5f-73d5998a7b73
>
> (admin-rc) [andrefe@controller0 ~]$ nova hypervisor-list
> +----+---------------------+-------+--------+
> | ID | Hypervisor hostname | State | Status |
> +----+---------------------+-------+--------+
> +----+---------------------+-------+--------+
>
> Any idea on how I can fix this and add the compute to the cell?
>
> Thanks.

More information about the openstack-discuss mailing list