[openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

Prashant Shetty prashantshetty1985 at gmail.com
Wed Feb 22 06:33:12 UTC 2017


I would appreciate some help with this issue.

Thanks,
Prashant

On Tue, Feb 21, 2017 at 9:08 PM, Prashant Shetty <
prashantshetty1985 at gmail.com> wrote:

> Hi Mark,
>
> Thanks for your reply.
>
> I tried "nova-manage cell_v2 discover_hosts", but it returned nothing and
> I still have the same issue on the node.
>
> The problem seems to be the way devstack gets configured.
> As the code below suggests, the cell is only created on a node where both
> n-api and n-cpu run. In my case the compute node runs only n-cpu and the
> controller runs only n-api, so with this code no cell gets created on
> either the controller or the compute node (a rough manual equivalent is
> sketched after the snippet).
>
> We would not have this problem in an all-in-one-node setup.
> --
> # Do this late because it requires compute hosts to have started
> if is_service_enabled n-api; then
>     if is_service_enabled n-cpu; then
>         create_cell
>     else
>         # Some CI systems like Hyper-V build the control plane on
>         # Linux, and join in non Linux Computes after setup. This
>         # allows them to delay the processing until after their whole
>         # environment is up.
>         echo_summary "SKIPPING Cell setup because n-cpu is not enabled.
> You will have to do this manually before you have a working environment."
>     fi
> fi
> ---
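>
> For reference, here is a rough, untested sketch of what I think the manual
> equivalent of devstack's cell setup would look like on the controller. The
> transport URL and passwords below are placeholders, not values from my
> setup, and the real devstack create_cell function may do this differently:
> --
> # hypothetical manual cell_v2 bootstrap on the controller node (placeholders!)
> nova-manage cell_v2 map_cell0 \
>     --database_connection 'mysql+pymysql://root:<password>@127.0.0.1/nova_cell0?charset=utf8'
> nova-manage cell_v2 create_cell --name cell1 \
>     --transport-url 'rabbit://<user>:<password>@<rabbit-host>:5672/' \
>     --database_connection 'mysql+pymysql://root:<password>@127.0.0.1/nova?charset=utf8'
> nova-manage --config-file /etc/nova/nova.conf db sync   # sync schemas now that mappings exist
> nova-manage cell_v2 discover_hosts                      # map the already-registered computes
> ---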
>
> vmware at cntr11:~$ nova-manage cell_v2 discover_hosts
> vmware at cntr11:~$ nova service-list
> +----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
> | Id | Binary           | Host          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
> +----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
> | 3  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:13.000000 | -               |
> | 5  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:15.000000 | -               |
> | 6  | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-21T15:34:11.000000 | -               |
> | 7  | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-21T15:34:14.000000 | -               |
> | 8  | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-21T15:34:16.000000 | -               |
> | 9  | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-21T15:34:07.000000 | -               |
> | 10 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-21T15:34:13.000000 | -               |
> | 11 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-21T15:34:14.000000 | -               |
> | 12 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-21T15:34:09.000000 | -               |
> +----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
> vmware at cntr11:~$
> vmware at cntr11:~$ nova-manage cell_v2 list_cells
> +------+------+
> | Name | UUID |
> +------+------+
> +------+------+
> vmware at cntr11:~$
>
>
> Thanks,
> Prashant
>
> On Tue, Feb 21, 2017 at 1:02 AM, Matt Riedemann <mriedemos at gmail.com>
> wrote:
>
>> On 2/20/2017 10:31 AM, Prashant Shetty wrote:
>>
>>> Thanks, Jay, for the response. Sorry, I missed copying the right error earlier.
>>>
>>> Here is the log:
>>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
>>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>> 2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping found for cell0 while trying to record scheduling failure. Setup is incomplete.
>>>
>>> I tried the command you mentioned, and I still see the same error in the conductor log.
>>>
>>> As part of stack.sh on the controller I see the commands below were
>>> executed for the "cell" setup. Shouldn't devstack take care of this part
>>> during the initial bringup, or am I missing a parameter in localrc for it?
>>>
>>> NOTE: I have not explicitly enabled n-cell in localrc
>>>
>>> 2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
>>> +lib/nova:init_nova:683                    recreate_database nova
>>> +lib/database:recreate_database:112        local db=nova
>>> +lib/database:recreate_database:113        recreate_database_mysql nova
>>> +lib/databases/mysql:recreate_database_mysql:56  local db=nova
>>> +lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
>>> +lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
>>> +lib/nova:init_nova:684                    recreate_database nova_cell0
>>> +lib/database:recreate_database:112        local db=nova_cell0
>>> +lib/database:recreate_database:113        recreate_database_mysql nova_cell0
>>> +lib/databases/mysql:recreate_database_mysql:56  local db=nova_cell0
>>> +lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova_cell0;'
>>> +lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova_cell0 CHARACTER SET utf8;'
>>> +lib/nova:init_nova:689                    /usr/local/bin/nova-manage --config-file /etc/nova/nova.conf db sync
>>> WARNING: cell0 mapping not found - not syncing cell0.
>>> 2017-02-20 14:11:50.846 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 215 -> 216...
>>> 2017-02-20 14:11:54.279 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
>>> 2017-02-20 14:11:54.280 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 216 -> 217...
>>> 2017-02-20 14:11:54.288 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
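>>>
>>> If I read that log correctly, the nova_cell0 database gets created but is
>>> never mapped, which is why "db sync" prints the cell0 warning. My guess
>>> (untested, reusing the credentials shown in the log above) is that
>>> something like this would need to run on the controller first:
>>>
>>> nova-manage cell_v2 map_cell0 \
>>>     --database_connection 'mysql+pymysql://root:vmware@127.0.0.1/nova_cell0?charset=utf8'
>>> nova-manage --config-file /etc/nova/nova.conf db sync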
>>>
>>>
>>>
>>> Thanks,
>>> Prashant
>>>
>>> On Mon, Feb 20, 2017 at 8:21 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>>>
>>>     On 02/20/2017 09:33 AM, Prashant Shetty wrote:
>>>
>>>         Team,
>>>
>>>         I have a multi-node devstack setup with a single controller and
>>>         multiple computes, all running stable/ocata.
>>>
>>>         On compute:
>>>         ENABLED_SERVICES=n-cpu,neutron,placement-api
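>>>
>>>         On the controller the nova control-plane services are enabled
>>>         instead, roughly like this (an illustration, not my literal
>>>         localrc):
>>>         enable_service n-api n-cond n-sch n-cauth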
>>>
>>>         Both the KVM and ESXi computes came up fine:
>>>         vmware at cntr11:~$ nova hypervisor-list
>>>
>>>           warnings.warn(msg)
>>>         +----+----------------------------------------------------+-------+---------+
>>>         | ID | Hypervisor hostname                                | State | Status  |
>>>         +----+----------------------------------------------------+-------+---------+
>>>         | 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
>>>         | 7  | kvm-1                                              | up    | enabled |
>>>         +----+----------------------------------------------------+-------+---------+
>>>         vmware at cntr11:~$
>>>
>>>         All services seem to run fine. When I try to launch an instance I
>>>         see the errors below in the nova-conductor logs and the instance
>>>         stays stuck in the "scheduling" state forever.
>>>         I don't have any n-cell related config on the controller. Could
>>>         someone help me identify why nova-conductor is complaining about
>>>         cells?
>>>
>>>         2017-02-20 14:24:06.128 WARNING oslo_config.cfg [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
>>>         2017-02-20 14:24:06.211 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to schedule instances
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 866, in schedule_and_build_instances
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 597, in _schedule_instances
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 129, in select_destinations
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=self.retry)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     timeout=timeout, retry=retry)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=retry)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise result
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise exception.NoValidHost(reason=reason)
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>>         2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>>         2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-
>>>
>>>
>>>     I don't see anything above that is complaining about "No cell
>>>     mapping found for cell0"? Perhaps you pasted the wrong snippet from
>>>     the logs.
>>>
>>>     Regardless, I think you simply need to run nova-manage cell_v2
>>>     simple_cell_setup. This is a required step in Ocata deployments. You
>>>     can read about this here:
>>>
>>>     https://docs.openstack.org/developer/nova/man/nova-manage.html
>>>
>>>     and the release notes here:
>>>
>>>     https://docs.openstack.org/releasenotes/nova/ocata.html
>>>
>>>     and more information about cells here:
>>>
>>>     https://docs.openstack.org/developer/nova/cells.html
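>>>
>>>     Concretely, on your control node that would be something along the
>>>     lines of (assuming nova.conf already has the RabbitMQ transport_url
>>>     set; otherwise pass --transport-url explicitly):
>>>
>>>     nova-manage cell_v2 simple_cell_setup
>>>
>>>     which should map cell0, create a cell for the existing compute hosts
>>>     and map any existing instances in one step.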
>>>
>>>     Best,
>>>     -jay
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>> You're doing multinode. You need to run this after the subnode n-cpu is
>> running:
>>
>> nova-manage cell_v2 discover_hosts
>>
>> Run ^ from the master (control) node where the API database is located.
>> We do the same in devstack-gate for multinode jobs:
>>
>> https://github.com/openstack-infra/devstack-gate/blob/f5dccd60c20b08be6f0b053265e26a491307946e/devstack-vm-gate.sh#L717
>>
>> Single-node devstack will take care of discovering the n-cpu compute host
>> as part of the stack.sh run, but the multinode case is special in that you
>> need to explicitly discover the subnode n-cpu after it's running. Devstack
>> is not topology aware so this is something you have to handle in an
>> orchestrator (like d-g) outside of the devstack run.
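>>
>> As a rough illustration (the hostname is a placeholder, adjust to your
>> topology), the orchestration step can be as small as:
>>
>> # on the control node, after stack.sh has finished on every subnode
>> until nova service-list --binary nova-compute | grep -q <subnode-hostname>; do
>>     sleep 10    # wait for the subnode nova-compute to report in
>> done
>> nova-manage cell_v2 discover_hosts --verbose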
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>
>

