[Openstack] Ceilometer meters missing in Mitaka with MongoDB

Tracy Comstock Roesler tracycomstock at overstock.com
Thu Mar 16 19:30:03 UTC 2017


I spent a good two weeks trying to get Ceilometer configured with Gnocchi before giving up. Now I'm trying to get it working with MongoDB, but `ceilometer meter-list` is not showing all of my instance meters; it only ever shows the same two resource IDs. I don't know how those two resource IDs got there in the first place, or why the rest aren't making it in. I've tried reinstalling MongoDB from scratch, but I run into the same problem. I followed the instructions in the OpenStack Mitaka RDO installation guide <https://docs.openstack.org/mitaka/install-guide-rdo/ceilometer-install.html>.

Can somebody help me figure out what’s going on? I do see the appropriate meters in `ceilometer sample-list`.
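
As far as I understand it (and I may be wrong here), meter-list is built from the resource collection in Mongo while sample-list reads the meter collection, so the two can disagree if the resource records aren't being written. A quick sanity check directly against the backend, assuming the default collection names of the MongoDB storage driver:

# Connect to the ceilometer database on a controller
mongo --host openstack01 -u ceilometer -p SECRET ceilometer
> db.meter.count()               // raw samples, which sample-list reads
> db.resource.count()            // resource records, which meter-list reads
> db.resource.distinct("_id")    // the resource IDs that meter-list can show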

Here’s my ceilometer.conf on my controllers:

[DEFAULT]
auth_strategy = keystone
rpc_backend = rabbit
[api]
[central]
[collector]
[compute]
[coordination]
[cors]
[cors.subdomain]
[database]
metering_time_to_live = 432000
connection = mongodb://ceilometer:SECRET@openstack01:27017,openstack02:27017,openstack03:27017/ceilometer
[dispatcher_file]
[dispatcher_gnocchi]
[event]
[exchange_control]
[hardware]
[ipmi]
[keystone_authtoken]
auth_uri = http://openstack:5000
auth_url = http://openstack:35357
memcached_servers = openstack01:11211,openstack02:11211,openstack03:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = SECRET
[matchmaker_redis]
[meter]
[notification]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_hosts = openstack01:5672,openstack02:5672,openstack03:5672
rabbit_userid = openstack
rabbit_password = SECRET
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_ha_queues = true
rabbit_durable_queues = true
[oslo_policy]
[polling]
[publisher]
[publisher_notifier]
[rgw_admin_credentials]
[service_credentials]
region_name = RegionOne
interface = internalURL
auth_type = password
auth_url = http://openstack:5000/v3
project_name = service
project_domain_name = default
username = ceilometer
user_domain_name = default
password = SECRET
[service_types]
[storage]
[vmware]
[xenapi]
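
With this config the notification agent and collector are what should be writing samples into Mongo, so (assuming the usual RDO service names and log locations) checking them seems like the obvious first step:

# On each controller: are the agents that write into MongoDB alive?
systemctl status openstack-ceilometer-notification openstack-ceilometer-collector
# Watch for MongoDB connection or dispatch errors while samples arrive
tail -f /var/log/ceilometer/collector.log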

Here are the relevant portions of my pipeline.yaml:

---
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - meter_sink
    - name: cpu_source
      interval: 60
      meters:
          - "cpu"
      sinks:
          - cpu_sink
          - cpu_delta_sink
    - name: disk_source
      interval: 60
      meters:
          - "disk.read.bytes"
          - "disk.read.requests"
          - "disk.write.bytes"
          - "disk.write.requests"
          - "disk.device.read.bytes"
          - "disk.device.read.requests"
          - "disk.device.write.bytes"
          - "disk.device.write.requests"


Here’s the relevant portion of nova.conf on my compute nodes:

[DEFAULT]
my_ip = <%= @ipaddress_bond0_100 %>
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
enabled_apis = osapi_compute,metadata
instance_usage_audit_period = hour
auth_strategy = keystone
instance_usage_audit = True
cpu_allocation_ratio = 2.0
ram_allocation_ratio = 1.0
allow_resize_to_same_host = True
scheduler_default_filters = DifferentHostFilter,AggregateCoreFilter,AggregateInstanceExtraSpecsFilter,CoreFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
use_cow_images = False
firewall_driver = nova.virt.firewall.NoopFirewallDriver
use_neutron = True
rpc_backend = rabbit

[keystone_authtoken]
auth_uri = http://openstack:5000
auth_url = http://openstack:35357
memcached_servers = openstack01:11211,openstack02:11211,openstack03:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = SECRET

[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_hosts = openstack01:5672,openstack02:5672,openstack03:5672
rabbit_userid = openstack
rabbit_password = SECRET
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_durable_queues = true
rabbit_ha_queues = true
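
For completeness: nova-compute has to be restarted for the notification settings above to take effect, and the Rabbit side can be checked for notification traffic (exact queue names may vary):

# On a compute node, after changing the notification settings
systemctl restart openstack-nova-compute
# On a Rabbit node: are the notification queues there and being drained?
rabbitmqctl list_queues name messages | grep notification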


Here’s the only relevant information I’m seeing in the ceilometer-compute logs:

2017-03-16 13:09:17.951 129278 INFO ceilometer.agent.manager [req-0a4d3cac-b09c-469b-964e-a852cfa04238 admin - - - -] Polling pollster cpu_util in the context of meter_source
2017-03-16 13:09:17.951 129278 DEBUG ceilometer.compute.pollsters.cpu [req-0a4d3cac-b09c-469b-964e-a852cfa04238 admin - - - -] Checking CPU util for instance c1da0f61-7a6e-4754-9eb3-47c65493d11c get_samples /usr/lib/python2.7/site-packages/ceilometer/compute/pollsters/cpu.py:70
2017-03-16 13:09:17.951 129278 DEBUG ceilometer.compute.pollsters.cpu [req-0a4d3cac-b09c-469b-964e-a852cfa04238 admin - - - -] Obtaining CPU Util is not implemented for LibvirtInspector get_samples /usr/lib/python2.7/site-packages/ceilometer/compute/pollsters/cpu.py:90
2017-03-16 13:09:17.951 129278 DEBUG ceilometer.compute.pollsters.cpu [req-0a4d3cac-b09c-469b-964e-a852cfa04238 admin - - - -] Checking CPU util for instance a2d6a015-aa82-4540-b7af-0063a52b1727 get_samples /usr/lib/python2.7/site-packages/ceilometer/compute/pollsters/cpu.py:70
2017-03-16 13:09:17.952 129278 DEBUG ceilometer.compute.pollsters.cpu [req-0a4d3cac-b09c-469b-964e-a852cfa04238 admin - - - -] Obtaining CPU Util is not implemented for LibvirtInspector get_samples /usr/lib/python2.7/site-packages/ceilometer/compute/pollsters/cpu.py:90
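
To cross-check one of the instances from those log lines, the client can be queried for a specific resource ID; samples showing up here while meter-list stays empty would match what I'm seeing:

ceilometer sample-list -q resource_id=c1da0f61-7a6e-4754-9eb3-47c65493d11c -l 5
ceilometer meter-list -q resource_id=c1da0f61-7a6e-4754-9eb3-47c65493d11c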

I’m using the following libvirt packages:

[root] # rpm -qa | grep libvirt
libvirt-daemon-driver-network-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-secret-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-kvm-2.0.0-10.el7_3.4.x86_64
libvirt-python-2.0.0-2.el7.x86_64
libvirt-daemon-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-interface-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-storage-2.0.0-10.el7_3.4.x86_64
libvirt-client-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.4.x86_64