[Openstack-operators] [openstack] Unable to launch an instance

Peter Kirby peter.kirby at objectstream.com
Tue Feb 28 18:04:43 UTC 2017


The last time I saw this error, the neutron agent on my compute host wasn't
running. If you run "neutron agent-list", is the neutron agent on your
compute node up and running?
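
If the agent list is long, a dead agent is easy to miss. A small sketch for flagging one from saved output (the sample rows below are fabricated for illustration; on a live cloud you would pipe the real "neutron agent-list" output through the same awk instead):

```shell
# Fabricated sample of two "neutron agent-list" rows; the "alive" column
# shows ":-)" for a live agent and "xxx" for a dead one.
sample_output='| 1 | Open vSwitch agent | compute1 | xxx |
| 2 | DHCP agent         | network1 | :-) |'

# Print the host of every agent whose alive column reads "xxx".
dead_hosts=$(printf '%s\n' "$sample_output" | awk -F'|' '$5 ~ /xxx/ {gsub(/ /,"",$4); print $4}')
echo "dead agents on: $dead_hosts"
```

Any host that shows up here is a strong candidate for the cause of a "No valid host was found" failure.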


On Tue, Feb 28, 2017 at 11:49 AM, Amit Kumar <ebiibe82 at gmail.com> wrote:

> Hi All,
>
> I have installed OpenStack Newton using OpenStack-Ansible. While creating
> an instance, it fails with the following error:
>
> Message: No valid host was found. There are not enough hosts available.
> Code: 500
> Details:
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/conductor/manager.py", line 496, in build_instances
>     context, request_spec, filter_properties)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/conductor/manager.py", line 567, in _schedule_instances
>     hosts = self.scheduler_client.select_destinations(context, spec_obj)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/scheduler/utils.py", line 370, in wrapped
>     return func(*args, **kwargs)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations
>     return self.queryclient.select_destinations(context, spec_obj)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>     return getattr(self.instance, __name)(*args, **kwargs)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 32, in select_destinations
>     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 126, in select_destinations
>     return cctxt.call(ctxt, 'select_destinations', **msg_args)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call
>     retry=self.retry)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/oslo_messaging/transport.py", line 97, in _send
>     timeout=timeout, retry=retry)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 464, in send
>     retry=retry)
>   File "/openstack/venvs/nova-14.0.8/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 455, in _send
>     raise result
>
> I executed the following commands before launching an instance.
>
>    1. openstack image create --disk-format qcow2 --container-format bare --public --file ./cirros-0.3.4-x86_64-disk.img cirros0.3.4-image
>    2. openstack flavor create --public m1.extra_tiny --id auto --ram 2048 --disk 0 --vcpus 1 --rxtx-factor 1
>    3. openstack network create net1
>    4. openstack subnet create subnet1 --network net1 --subnet-range 192.168.2.0/24
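
As a quick sanity check against the resource numbers the compute node reports in its log (7877 MB physical RAM with 2048 MB used), the m1.extra_tiny flavor created above should fit comfortably. A sketch of the arithmetic:

```shell
# Numbers taken from the resource-tracker lines in nova-compute.log
# quoted in this thread.
phys_ram_mb=7877
used_ram_mb=2048
flavor_ram_mb=2048   # --ram 2048 from the flavor created above

free_ram_mb=$((phys_ram_mb - used_ram_mb))
if [ "$flavor_ram_mb" -le "$free_ram_mb" ]; then
  echo "flavor fits: ${free_ram_mb} MB free >= ${flavor_ram_mb} MB requested"
else
  echo "flavor does not fit"
fi
```

So plain RAM exhaustion is an unlikely cause of NoValidHost here.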
>
> /var/log/nova/nova-compute.log shows entries like this:
>
> 2017-02-28 23:11:44.610 1142 INFO nova.compute.resource_tracker [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Auditing locally available compute resources for node compute1
> 2017-02-28 23:11:44.721 1142 WARNING nova.scheduler.client.report [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] No authentication information found for placement API. Placement is optional in Newton, but required in Ocata. Please enable the placement service before upgrading.
> 2017-02-28 23:11:44.721 1142 WARNING nova.scheduler.client.report [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Unable to refresh my resource provider record
> 2017-02-28 23:11:44.800 1142 INFO nova.compute.resource_tracker [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Total usable vcpus: 8, total allocated vcpus: 0
> 2017-02-28 23:11:44.800 1142 INFO nova.compute.resource_tracker [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Final resource view: name=compute1 phys_ram=7877MB used_ram=2048MB phys_disk=908GB used_disk=2GB total_vcpus=8 used_vcpus=0 pci_stats=[]
> 2017-02-28 23:11:44.896 1142 WARNING nova.scheduler.client.report [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Unable to refresh my resource provider record
> 2017-02-28 23:11:44.896 1142 INFO nova.compute.resource_tracker [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Compute_service record updated for compute1:compute1
> 2017-02-28 23:12:30.611 1142 WARNING nova.virt.libvirt.imagecache [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Unknown base file: /var/lib/nova/instances/_base/3451088abe875a7f691a9e229d767aa128dc0da3
> 2017-02-28 23:12:30.612 1142 INFO nova.virt.libvirt.imagecache [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Removable base files: /var/lib/nova/instances/_base/3451088abe875a7f691a9e229d767aa128dc0da3
> 2017-02-28 23:12:30.612 1142 INFO nova.virt.libvirt.imagecache [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Base or swap file too young to remove: /var/lib/nova/instances/_base/3451088abe875a7f691a9e229d767aa128dc0da3
> 2017-02-28 23:12:45.609 1142 INFO nova.compute.resource_tracker [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Auditing locally available compute resources for node compute1
> 2017-02-28 23:12:45.750 1142 WARNING nova.scheduler.client.report [req-ad2fc0ae-e98f-45c5-89cc-3211a69a37ec - - - - -] Unable to refresh my resource provider record
> 2
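
The "No authentication information found for placement API" warning above is only advisory in Newton, so it should not by itself cause NoValidHost, but the companion "Unable to refresh my resource provider record" noise goes away once placement credentials are configured, and placement becomes mandatory in Ocata. A sketch of the relevant nova.conf section (every value below is a placeholder for illustration, not taken from this deployment; your keystone URL, project, and password will differ):

```ini
# /etc/nova/nova.conf on the compute node -- placement client credentials
# (illustrative placeholder values)
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_domain_name = Default
project_name = service
user_domain_name = Default
username = placement
password = PLACEMENT_PASS
```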
>
> nova service-list shows the following:
> root@infra1-utility-container-07316276:~# nova service-list
> +----+------------------+------------------------------------------+----------+---------+-------+----------------------------+-----------------+
> | Id | Binary           | Host                                     | Zone     | Status  | State | Updated_at                 | Disabled Reason |
> +----+------------------+------------------------------------------+----------+---------+-------+----------------------------+-----------------+
> | 1  | nova-scheduler   | infra1-nova-scheduler-container-34614124 | internal | enabled | up    | 2017-02-28T17:46:43.000000 | -               |
> | 4  | nova-conductor   | infra1-nova-conductor-container-e7a47165 | internal | enabled | up    | 2017-02-28T17:46:41.000000 | -               |
> | 8  | nova-cert        | infra1-nova-cert-container-5e9b6e14      | internal | enabled | up    | 2017-02-28T17:46:39.000000 | -               |
> | 9  | nova-consoleauth | infra1-nova-console-container-0f851e59   | internal | enabled | up    | 2017-02-28T17:46:42.000000 | -               |
> | 10 | nova-compute     | compute1                                 | nova     | enabled | up    | 2017-02-28T17:46:41.000000 | -               |
> +----+------------------+------------------------------------------+----------+---------+-------+----------------------------+-----------------+
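
Since nova-compute is up and enabled in the list above, the scheduler log on the controller is the next place to look: it records which filter rejected the host. A sketch of pulling the failing filter name out of such a line (the sample line is fabricated for illustration; on the real deployment you would grep /var/log/nova/nova-scheduler.log inside the scheduler container instead):

```shell
# Fabricated example of the kind of line nova-scheduler emits when a
# filter eliminates the last candidate host.
sample_log='2017-02-28 17:46:43.000 1234 INFO nova.filters [req-abc] Filter RamFilter returned 0 hosts'

# Extract the name of the filter that returned 0 hosts.
failed_filter=$(printf '%s\n' "$sample_log" | sed -n 's/.*Filter \([A-Za-z]*\) returned 0 hosts.*/\1/p')
echo "filter that eliminated all hosts: $failed_filter"
```

Whichever filter shows up (RamFilter, DiskFilter, ComputeFilter, ...) narrows the problem down considerably.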
>
> Apart from this, the compute and controller nodes are able to ping each other.
>
> Could you please suggest any pointers to understand and fix the problem?
>
> Thanks.
>
> Regards,
> Amit
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>