[Openstack] Openstack Ocata Error Message

Alex Evonosky alex.evonosky at gmail.com
Thu Jun 22 06:07:08 UTC 2017


Just for reference, this is resolved.  After researching this message, the
following helped to track it down:

On the controller, issue: *nova-status upgrade check*


This produced a warning about resources.

After tracking this down, I looked back at /etc/nova/nova.conf on the
controller and all compute nodes. Under [placement], the expected line

*os_region_name = RegionOne*

was there; however, the default config also contained:

*os_region_name = openstack*

Comment out os_region_name = openstack if you are using the default nova.conf.
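For reference, a minimal sketch of the corrected [placement] section. Only
the os_region_name fix comes from the thread above; the auth settings shown
are illustrative placeholders for a typical Ocata install, not values from
my actual config:

```ini
[placement]
# Region must match the region the placement endpoint is registered in.
os_region_name = RegionOne
# os_region_name = openstack   <-- comment out this conflicting default

# Typical placement auth settings (placeholder values -- adjust to your deployment):
auth_type = password
auth_url = http://controller:35357/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

After editing, restart the nova services (nova-api and nova-scheduler on the
controller, nova-compute on each compute node) so the change takes effect,
then re-run *nova-status upgrade check* to confirm the warning is gone.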




On Wed, Jun 21, 2017 at 2:51 PM, Alex Evonosky <alex.evonosky at gmail.com>
wrote:

> for some reason, it appears to be a filter issue:
>
> 2017-06-21 14:49:00.976 18775 INFO nova.filters
> [req-883f8bb9-bfce-42fc-897b-c13558c3593a 7e7176b79f94483c8b802a7004466e66
> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter RetryFilter returned 0
> hosts
> 2017-06-21 14:49:00.977 18775 INFO nova.filters
> [req-883f8bb9-bfce-42fc-897b-c13558c3593a 7e7176b79f94483c8b802a7004466e66
> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filtering removed all hosts for
> the request with instance ID '195dd456-043a-450d-9185-eefefc829bbe'.
> Filter results: ['RetryFilter: (start: 0, end: 0)']
>
>
>
> Both compute2 and compute3 nodes are using dedicated physical servers.
>
> Is this a common issue?  I am just trying to test Ocata and have yet to
> launch an instance.
>
> Thank you.
>
>
>
> On Tue, Jun 20, 2017 at 8:36 PM, Alex Evonosky <alex.evonosky at gmail.com>
> wrote:
>
>> Chris-
>>
>> I enabled debugging and also brought up my compute node1 (which I had
>> admin'd down earlier):
>>
>>
>> 2017-06-20 20:33:35.438 18169 DEBUG oslo_messaging._drivers.amqpdriver
>> [-] received message msg_id: dd5f438571494a8499980c12a2a90116 reply to
>> reply_137c1eb50cf64fceb71cecc336b4773d __call__
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/
>> amqpdriver.py:194
>> 2017-06-20 20:33:36.986 18169 DEBUG oslo_concurrency.lockutils
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
>> u'openstack-compute1')" acquired by "nova.scheduler.host_manager._locked_update"
>> :: waited 0.000s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:273
>> 2017-06-20 20:33:37.004 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state from compute
>> node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "AMD",
>> "model": "cpu64-rhel6", "arch": "x86_64", "features": ["pge", "avx",
>> "clflush", "sep", "syscall", "sse4a", "msr", "xsave", "cmov", "nx", "pat",
>> "lm", "tsc", "3dnowprefetch", "fpu", "fxsr", "sse4.1", "pae", "sse4.2",
>> "pclmuldq", "cmp_legacy", "vme", "mmx", "osxsave", "cx8", "mce",
>> "fxsr_opt", "cr8legacy", "ht", "pse", "pni", "abm", "popcnt", "mca",
>> "apic", "sse", "mmxext", "lahf_lm", "rdtscp", "aes", "sse2", "hypervisor",
>> "misalignsse", "ssse3", "de", "cx16", "pse36", "mtrr", "x2apic"],
>> "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets":
>> 1}}',created_at=2017-06-21T00:18:37Z,current_workload=0,dele
>> ted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_
>> available_least=6,free_disk_gb=12,free_ram_mb=2495,host='
>> openstack-compute1',host_ip=10.10.10.8,hypervisor_hostname
>> ='openstack-compute1',hypervisor_type='QEMU',hypervisor_
>> version=2008000,id=9,local_gb=12,local_gb_used=5,memory_mb=
>> 3007,memory_mb_used=626,metrics='[]',numa_topology='{"nova_object.version":
>> "1.2", "nova_object.changes": ["cells"], "nova_object.name":
>> "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version":
>> "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset",
>> "mempages", "pinned_cpus", "memory", "siblings", "id"], "nova_object.name":
>> "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0,
>> "cpuset": [0, 1], "pinned_cpus": [], "siblings": [], "memory": 3007,
>> "mempages": [{"nova_object.version": "1.1", "nova_object.changes":
>> ["total", "used", "reserved", "size_kb"], "nova_object.name":
>> "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 769991,
>> "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"},
>> {"nova_object.version": "1.1", "nova_object.changes": ["total", "used",
>> "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology",
>> "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb":
>> 2048}, "nova_object.namespace": "nova"}], "id": 0},
>> "nova_object.namespace": "nova"}]}, "nova_object.namespace":
>> "nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_
>> ratio=1.5,running_vms=0,service_id=None,stats={},
>> supported_hv_specs=[HVSpec,HVSpec],updated_at=2017-06-21T00:
>> 32:52Z,uuid=9fd1b365-5ff9-4f75-a771-777fbe7a54ad,vcpus=2,vcpus_used=0)
>> _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:168
>> 2017-06-20 20:33:37.209 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with
>> aggregates: [] _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:171
>> 2017-06-20 20:33:37.217 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with service
>> dict: {'binary': u'nova-compute', 'deleted': False, 'created_at':
>> datetime.datetime(2017, 5, 17, 3, 26, 12, tzinfo=<iso8601.Utc>),
>> 'updated_at': datetime.datetime(2017, 6, 21, 0, 33, 34,
>> tzinfo=<iso8601.Utc>), 'report_count': 96355, 'topic': u'compute', 'host':
>> u'openstack-compute1', 'version': 16, 'disabled': False, 'forced_down':
>> False, 'last_seen_up': datetime.datetime(2017, 6, 21, 0, 33, 34,
>> tzinfo=<iso8601.Utc>), 'deleted_at': None, 'disabled_reason': None, 'id':
>> 7} _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:174
>> 2017-06-20 20:33:37.218 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with
>> instances: {} _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:177
>> 2017-06-20 20:33:37.219 18169 DEBUG oslo_concurrency.lockutils
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
>> u'openstack-compute1')" released by "nova.scheduler.host_manager._locked_update"
>> :: held 0.232s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:285
>> 2017-06-20 20:33:37.219 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Starting with 1 host(s)
>> get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:70
>> 2017-06-20 20:33:37.238 18169 DEBUG nova.scheduler.filters.retry_filter
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Re-scheduling is disabled
>> host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/
>> retry_filter.py:34
>> 2017-06-20 20:33:37.240 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter RetryFilter returned 1
>> host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.268 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter AvailabilityZoneFilter
>> returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.270 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter RamFilter returned 1
>> host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.297 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter DiskFilter returned 1
>> host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.348 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter ComputeFilter returned 1
>> host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.362 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter ComputeCapabilitiesFilter
>> returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.363 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter ImagePropertiesFilter
>> returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.365 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter
>> ServerGroupAntiAffinityFilter returned 1 host(s) get_filtered_objects
>> /usr/lib/python2.7/dist-packages/nova/filters.py:104
>> 2017-06-20 20:33:37.366 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter ServerGroupAffinityFilter
>> returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packag
>> es/nova/filters.py:104
>> 2017-06-20 20:33:37.367 18169 DEBUG nova.scheduler.filter_scheduler
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filtered [(openstack-compute1,
>> openstack-compute1) ram: 2495MB disk: 6144MB io_ops: 0 instances: 0]
>> _schedule /usr/lib/python2.7/dist-packages/nova/scheduler/filter_
>> scheduler.py:115
>> 2017-06-20 20:33:37.371 18169 DEBUG nova.scheduler.filter_scheduler
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Weighed [WeighedHost [host:
>> (openstack-compute1, openstack-compute1) ram: 2495MB disk: 6144MB io_ops: 0
>> instances: 0, weight: 0.0]] _schedule /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/filter_scheduler.py:120
>> 2017-06-20 20:33:37.382 18169 DEBUG nova.scheduler.filter_scheduler
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Selected host: WeighedHost
>> [host: (openstack-compute1, openstack-compute1) ram: 2495MB disk: 6144MB
>> io_ops: 0 instances: 0, weight: 0.0] _schedule
>> /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:127
>> 2017-06-20 20:33:37.406 18169 DEBUG oslo_concurrency.lockutils
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
>> u'openstack-compute1')" acquired by "nova.scheduler.host_manager._locked"
>> :: waited 0.000s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:273
>> 2017-06-20 20:33:37.710 18169 DEBUG nova.virt.hardware
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Require both a host and instance
>> NUMA topology to fit instance on host. numa_fit_instance_to_host
>> /usr/lib/python2.7/dist-packages/nova/virt/hardware.py:1328
>> 2017-06-20 20:33:37.914 18169 DEBUG oslo_concurrency.lockutils
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
>> u'openstack-compute1')" released by "nova.scheduler.host_manager._locked"
>> :: held 0.508s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:285
>> 2017-06-20 20:33:37.920 18169 DEBUG oslo_messaging._drivers.amqpdriver
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] sending reply msg_id:
>> dd5f438571494a8499980c12a2a90116 reply queue:
>> reply_137c1eb50cf64fceb71cecc336b4773d time elapsed: 2.47975096499s
>> _send_reply /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/
>> amqpdriver.py:73
>> 2017-06-20 20:33:44.425 18169 DEBUG oslo_messaging._drivers.amqpdriver
>> [-] received message with unique_id: 4e3b01aca6fb4f2c87cb686cfb6237a9
>> __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/
>> amqpdriver.py:196
>> 2017-06-20 20:33:44.434 18169 DEBUG oslo_concurrency.lockutils
>> [req-cfa0fd37-a757-45b6-a684-2528c1c303f5 - - - - -] Lock
>> "host_instance" acquired by "nova.scheduler.host_manager.sync_instance_info"
>> :: waited 0.000s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:273
>> 2017-06-20 20:33:44.434 18169 INFO nova.scheduler.host_manager
>> [req-cfa0fd37-a757-45b6-a684-2528c1c303f5 - - - - -] Successfully synced
>> instances from host 'openstack-compute3'.
>> 2017-06-20 20:33:44.435 18169 DEBUG oslo_concurrency.lockutils
>> [req-cfa0fd37-a757-45b6-a684-2528c1c303f5 - - - - -] Lock
>> "host_instance" released by "nova.scheduler.host_manager.sync_instance_info"
>> :: held 0.001s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:285
>> 2017-06-20 20:33:55.617 18169 DEBUG oslo_messaging._drivers.amqpdriver
>> [-] received message msg_id: a5eeb496378d494cad21cf35e0f0642e reply to
>> reply_00bca84adb354a91bfa3c31c0e70a288 __call__
>> /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/
>> amqpdriver.py:194
>> 2017-06-20 20:33:55.763 18169 DEBUG oslo_concurrency.lockutils
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
>> u'openstack-compute1')" acquired by "nova.scheduler.host_manager._locked_update"
>> :: waited 0.000s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:273
>> 2017-06-20 20:33:55.763 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state from compute
>> node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "AMD",
>> "model": "cpu64-rhel6", "arch": "x86_64", "features": ["pge", "avx",
>> "clflush", "sep", "syscall", "sse4a", "msr", "xsave", "cmov", "nx", "pat",
>> "lm", "tsc", "3dnowprefetch", "fpu", "fxsr", "sse4.1", "pae", "sse4.2",
>> "pclmuldq", "cmp_legacy", "vme", "mmx", "osxsave", "cx8", "mce",
>> "fxsr_opt", "cr8legacy", "ht", "pse", "pni", "abm", "popcnt", "mca",
>> "apic", "sse", "mmxext", "lahf_lm", "rdtscp", "aes", "sse2", "hypervisor",
>> "misalignsse", "ssse3", "de", "cx16", "pse36", "mtrr", "x2apic"],
>> "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets":
>> 1}}',created_at=2017-06-21T00:18:37Z,current_workload=0,dele
>> ted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_
>> available_least=6,free_disk_gb=12,free_ram_mb=2495,host='
>> openstack-compute1',host_ip=10.10.10.8,hypervisor_hostname
>> ='openstack-compute1',hypervisor_type='QEMU',hypervisor_
>> version=2008000,id=9,local_gb=12,local_gb_used=0,memory_mb=
>> 3007,memory_mb_used=512,metrics='[]',numa_topology='{"nova_object.version":
>> "1.2", "nova_object.changes": ["cells"], "nova_object.name":
>> "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version":
>> "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset",
>> "mempages", "pinned_cpus", "memory", "siblings", "id"], "nova_object.name":
>> "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0,
>> "cpuset": [0, 1], "pinned_cpus": [], "siblings": [], "memory": 3007,
>> "mempages": [{"nova_object.version": "1.1", "nova_object.changes":
>> ["total", "used", "reserved", "size_kb"], "nova_object.name":
>> "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 769991,
>> "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"},
>> {"nova_object.version": "1.1", "nova_object.changes": ["total", "used",
>> "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology",
>> "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb":
>> 2048}, "nova_object.namespace": "nova"}], "id": 0},
>> "nova_object.namespace": "nova"}]}, "nova_object.namespace":
>> "nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_
>> ratio=1.5,running_vms=0,service_id=None,stats={},
>> supported_hv_specs=[HVSpec,HVSpec],updated_at=2017-06-21T00:
>> 33:55Z,uuid=9fd1b365-5ff9-4f75-a771-777fbe7a54ad,vcpus=2,vcpus_used=0)
>> _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:168
>> 2017-06-20 20:33:55.764 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with
>> aggregates: [] _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:171
>> 2017-06-20 20:33:55.765 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with service
>> dict: {'binary': u'nova-compute', 'deleted': False, 'created_at':
>> datetime.datetime(2017, 5, 17, 3, 26, 12, tzinfo=<iso8601.Utc>),
>> 'updated_at': datetime.datetime(2017, 6, 21, 0, 33, 54,
>> tzinfo=<iso8601.Utc>), 'report_count': 96357, 'topic': u'compute', 'host':
>> u'openstack-compute1', 'version': 16, 'disabled': False, 'forced_down':
>> False, 'last_seen_up': datetime.datetime(2017, 6, 21, 0, 33, 54,
>> tzinfo=<iso8601.Utc>), 'deleted_at': None, 'disabled_reason': None, 'id':
>> 7} _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:174
>> 2017-06-20 20:33:55.765 18169 DEBUG nova.scheduler.host_manager
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Update host state with
>> instances: {} _locked_update /usr/lib/python2.7/dist-packag
>> es/nova/scheduler/host_manager.py:177
>> 2017-06-20 20:33:55.766 18169 DEBUG oslo_concurrency.lockutils
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Lock "(u'openstack-compute1',
>> u'openstack-compute1')" released by "nova.scheduler.host_manager._locked_update"
>> :: held 0.003s inner /usr/lib/python2.7/dist-packag
>> es/oslo_concurrency/lockutils.py:285
>> 2017-06-20 20:33:55.766 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Starting with 1 host(s)
>> get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:70
>> 2017-06-20 20:33:55.767 18169 INFO nova.scheduler.filters.retry_filter
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Host [u'openstack-compute1',
>> u'openstack-compute1'] fails.  Previously tried hosts:
>> [[u'openstack-compute1', u'openstack-compute1']]
>> 2017-06-20 20:33:55.767 18169 INFO nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filter RetryFilter returned 0
>> hosts
>> 2017-06-20 20:33:55.767 18169 DEBUG nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filtering removed all hosts for
>> the request with instance ID 'acf31677-8a2a-4dfe-adf2-2c8c48ba9dcf'.
>> Filter results: [('RetryFilter', None)] get_filtered_objects
>> /usr/lib/python2.7/dist-packages/nova/filters.py:129
>> 2017-06-20 20:33:55.768 18169 INFO nova.filters
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Filtering removed all hosts for
>> the request with instance ID 'acf31677-8a2a-4dfe-adf2-2c8c48ba9dcf'.
>> Filter results: ['RetryFilter: (start: 1, end: 0)']
>> 2017-06-20 20:33:55.768 18169 DEBUG nova.scheduler.filter_scheduler
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] There are 0 hosts available but
>> 1 instances requested to build. select_destinations
>> /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:76
>> 2017-06-20 20:33:55.768 18169 DEBUG oslo_messaging.rpc.server
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] Expected exception during
>> message handling () _process_incoming /usr/lib/python2.7/dist-packag
>> es/oslo_messaging/rpc/server.py:158
>> 2017-06-20 20:33:55.769 18169 DEBUG oslo_messaging._drivers.amqpdriver
>> [req-ac2c8b22-0284-46b1-a90a-1126fae4e550 7e7176b79f94483c8b802a7004466e66
>> 664dc5e6023140eca0faeb2d0ecc31c2 - - -] sending reply msg_id:
>> a5eeb496378d494cad21cf35e0f0642e reply queue:
>> reply_00bca84adb354a91bfa3c31c0e70a288 time elapsed: 0.150797337003s
>> _send_reply /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/
>> amqpdriver.py:73
>> 2017-06-20 20:33:59.573 18169 DEBUG oslo_service.periodic_task
>> [req-e3fbda68-20b5-4371-b594-a055bf78c8da - - - - -] Running periodic
>> task SchedulerManager._expire_reservations run_periodic_tasks
>> /usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
>>
>>
>> I only notice *compute1* in the logs; I never see my other two nodes
>> here, even though all three show up as hypervisors:
>>
>>
>> root at openstack-controller1://etc/nova# openstack hypervisor list
>> +----+---------------------+-----------------+--------------+-------+
>> | ID | Hypervisor Hostname | Hypervisor Type | Host IP      | State |
>> +----+---------------------+-----------------+--------------+-------+
>> |  7 | openstack-compute2  | QEMU            | 10.10.10.122 | up    |
>> |  8 | openstack-compute3  | QEMU            | 10.10.10.123 | up    |
>> |  9 | openstack-compute1  | QEMU            | 10.10.10.8   | up    |
>> +----+---------------------+-----------------+--------------+-------+
>>
>>
>> On Tue, Jun 20, 2017 at 4:57 PM, Alex Evonosky <alex.evonosky at gmail.com>
>> wrote:
>>
>>> Chris
>>>
>>> I have not enabled debug on scheduler but I will tonight.  thank you for
>>> the feedback.
>>>
>>>
>>>
>>>
>>>
>>> Sent from my Samsung S7 Edge
>>>
>>>
>>> On Jun 20, 2017 4:22 PM, "Chris Friesen" <chris.friesen at windriver.com>
>>> wrote:
>>>
>>>> On 06/20/2017 01:45 PM, Alex Evonosky wrote:
>>>>
>>>>> Openstackers-
>>>>>
>>>>> I am getting the familiar *No hosts found* error when launching an
>>>>> instance.  After researching, I found many reports like this going
>>>>> back to at least 2015.  However, the solutions that were presented did
>>>>> not really seem to help in my case, so I am checking whether my error
>>>>> may be a more common one that could be fixed.
>>>>>
>>>>
>>>> <snip>
>>>>
>>>> some from nova-scheduler:
>>>>>
>>>>> 2017-06-20 15:18:14.879 11720 INFO nova.filters
>>>>> [req-128bca26-06da-49de-9d14-ad1ae967d084
>>>>> 7e7176b79f94483c8b802a7004466e66
>>>>> 5f8b2c83921b4b3eb74e448667b267b1 -
>>>>> - -] Filter RetryFilter returned 0 hosts
>>>>> 2017-06-20 15:18:14.888 11720 INFO nova.filters
>>>>> [req-128bca26-06da-49de-9d14-ad1ae967d084
>>>>> 7e7176b79f94483c8b802a7004466e66
>>>>> 5f8b2c83921b4b3eb74e448667b267b1 -
>>>>> - -] Filtering removed all hosts for the request with instance ID
>>>>> '1a461902-4b93-40e5-9a95-76bb9ccbae63'. Filter results:
>>>>> ['RetryFilter: (start:
>>>>> 0, end: 0)']
>>>>> 2017-06-20 15:19:10.930 11720 INFO nova.scheduler.host_manager
>>>>> [req-003eec5d-441a-45af-9784-0d857a9d111a - - - - -] Successfully
>>>>> synced
>>>>> instances from host 'o
>>>>> penstack-compute2'.
>>>>> 2017-06-20 15:20:15.113 11720 INFO nova.scheduler.host_manager
>>>>> [req-b1c4044c-6973-4f28-94bc-5b40c957ff48 - - - - -] Successfully
>>>>> synced
>>>>> instances from host 'o
>>>>> penstack-compute3'.
>>>>>
>>>>
>>>> If you haven't already, enable debug logs on nova-scheduler.  You
>>>> should be able to see which filter is failing and hopefully why.
>>>>
>>>> In your example, look for nova-scheduler logs with
>>>> "req-c4d5e734-ba41-4fe8-9397-4b4165f4a133" in them since that is the
>>>> failed request.  The timestamp should be around 15:30:56.
>>>>
>>>> Chris
>>>>
>>>>
>>>> _______________________________________________
>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to     : openstack at lists.openstack.org
>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>
>>
>
