<div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">Appreciate some help on this issue.<br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">Thanks,<br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">Prashant<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 21, 2017 at 9:08 PM, Prashant Shetty <span dir="ltr"><<a href="mailto:prashantshetty1985@gmail.com" target="_blank">prashantshetty1985@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">Hi Mark,<br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">Thanks for your reply. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">I tried "nova-manage cell_v2 discover_hosts" and it returned nothing and still I have same issue on the node.<br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">Problem seems be the way devstack is getting configured,<br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">As code suggest below we create cell0 on node where n-api and n-cpu runs. In my case compute is running only n-cpu and controller is running n-api service, due to this code there are no cell created in controller or compute.<br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">We will not have this problem in all-in-one-node setup.<br></div><div class="gmail_default" style="font-family:verdana,sans-serif;font-size:small;color:rgb(0,0,0)">--<br># Do this late because it requires compute hosts to have started<br>if is_service_enabled n-api; then<br> if is_service_enabled n-cpu; then<br> create_cell<br> else<br> # Some CI systems like Hyper-V build the control plane on<br> # Linux, and join in non Linux Computes after setup. This<br> # allows them to delay the processing until after their whole<br> # environment is up.<br> echo_summary "SKIPPING Cell setup because n-cpu is not enabled. 

Thanks,
Prashant

On Tue, Feb 21, 2017 at 1:02 AM, Matt Riedemann <mriedemos@gmail.com> wrote:

On 2/20/2017 10:31 AM, Prashant Shetty wrote:
Thanks, Jay, for the response. Sorry, I missed copying the right error.

Here is the log:

2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping found for cell0 while trying to record scheduling failure. Setup is incomplete.

I tried the command you mentioned, and I still see the same error on the conductor.

As part of stack.sh on the controller I see the commands below were executed related to "cell". Shouldn't devstack take care of this during the initial bring-up, or am I missing a localrc parameter for it?

NOTE: I have not explicitly enabled n-cell in localrc.
2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
+lib/nova:init_nova:683 recreate_database nova
+lib/database:recreate_database:112 local db=nova
+lib/database:recreate_database:113 recreate_database_mysql nova
+lib/databases/mysql:recreate_database_mysql:56 local db=nova
+lib/databases/mysql:recreate_database_mysql:57 mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
+lib/databases/mysql:recreate_database_mysql:58 mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
+lib/nova:init_nova:684 recreate_database nova_cell0
+lib/database:recreate_database:112 local db=nova_cell0
+lib/database:recreate_database:113 recreate_database_mysql nova_cell0
+lib/databases/mysql:recreate_database_mysql:56 local db=nova_cell0
+lib/databases/mysql:recreate_database_mysql:57 mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova_cell0;'
+lib/databases/mysql:recreate_database_mysql:58 mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova_cell0 CHARACTER SET utf8;'
+lib/nova:init_nova:689 /usr/local/bin/nova-manage --config-file /etc/nova/nova.conf db sync
WARNING: cell0 mapping not found - not syncing cell0.
2017-02-20 14:11:50.846 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 215 -> 216...
2017-02-20 14:11:54.279 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
2017-02-20 14:11:54.280 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 216 -> 217...
2017-02-20 14:11:54.288 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done

Thanks,
Prashant

On Mon, Feb 20, 2017 at 8:21 PM, Jay Pipes <jaypipes@gmail.com> wrote:

On 02/20/2017 09:33 AM, Prashant Shetty wrote:

Team,

I have a multi-node devstack setup with a single controller and multiple computes running stable/ocata.

On compute:
ENABLED_SERVICES=n-cpu,neutron,placement-api

Both the KVM and ESXi computes came up fine:

vmware@cntr11:~$ nova hypervisor-list

warnings.warn(msg)
+----+----------------------------------------------------+-------+---------+
| ID | Hypervisor hostname                                | State | Status  |
+----+----------------------------------------------------+-------+---------+
| 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
| 7  | kvm-1                                              | up    | enabled |
+----+----------------------------------------------------+-------+---------+
vmware@cntr11:~$

All services seem to be running fine. When I try to launch an instance I see the errors below in the nova-conductor logs, and the instance is stuck in the "scheduling" state forever. I don't have any config related to n-cell on the controller. Could someone help me identify why nova-conductor is complaining about cells?

2017-02-20 14:24:06.128 WARNING oslo_config.cfg [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
2017-02-20 14:24:06.211 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to schedule instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 866, in schedule_and_build_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 597, in _schedule_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 129, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=self.retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     timeout=timeout, retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise result
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise exception.NoValidHost(reason=reason)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-

I don't see anything above that is complaining about "No cell mapping found for cell0". Perhaps you pasted the wrong snippet from the logs.

Regardless, I think you simply need to run nova-manage cell_v2 simple_cell_setup. This is a required step in Ocata deployments. You can read about this here:

https://docs.openstack.org/developer/nova/man/nova-manage.html

and the release notes here:

https://docs.openstack.org/releasenotes/nova/ocata.html

and more information about cells here:

https://docs.openstack.org/developer/nova/cells.html
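For example, run on the controller, a minimal sketch (the transport URL here is an assumption matching devstack's usual RabbitMQ setup; simple_cell_setup can also pick it up from nova.conf):

vmware@cntr11:~$ nova-manage cell_v2 simple_cell_setup --transport-url 'rabbit://stackrabbit:vmware@127.0.0.1:5672/'

This maps cell0, creates the main cell, and maps any compute hosts it can find, in one step.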

Best,
-jay

You're doing multinode. You need to run this after the subnode n-cpu is running:

nova-manage cell_v2 discover_hosts

Run ^ from the master (control) node where the API database is located. We do the same in devstack-gate for multinode jobs:

https://github.com/openstack-infra/devstack-gate/blob/f5dccd60c20b08be6f0b053265e26a491307946e/devstack-vm-gate.sh#L717

Single-node devstack will take care of discovering the n-cpu compute host as part of the stack.sh run, but the multinode case is special in that you need to explicitly discover the subnode n-cpu after it's running. Devstack is not topology aware, so this is something you have to handle in an orchestrator (like d-g) outside of the devstack run.
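As an alternative to re-running discover_hosts by hand every time a compute is added, Ocata also added a scheduler periodic task for this. A sketch of the nova.conf change on the control node (the interval value is arbitrary):

[scheduler]
# Run compute host discovery every 300 seconds; -1 (the default)
# disables the periodic task.
discover_hosts_in_cells_interval = 300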

--

Thanks,

Matt Riedemann

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev