Getting CPU Utilization null value

Sean Mooney smooney at redhat.com
Thu Jan 14 15:45:35 UTC 2021


On Thu, 2021-01-14 at 13:49 +0000, Stephen Finucane wrote:
> On Thu, 2021-01-14 at 18:23 +0500, ahsan ashraf wrote:
> > Hi Team,
> > I have been using the OpenStack APIs and have the following issue:
> > * API used: /servers/{server_id}/diagnostics
> > * API version: v2.1
> > Problem: Couldn't get the CPU utilization value for my server; it shows null
> > even though I have applied stress on the CPU.
> This feature is specific to certain virt drivers. Only the XenAPI virt driver
> supported reporting this metric and this driver was removed from nova in the
> Victoria release, meaning there are no longer any in-tree drivers with support
> for this. The libvirt driver, which I suspect you're using here, reports ID and
> time but not utilization.
So technically we have the ability to plug in metric monitors for this.

We have a CPU monitor available:
https://github.com/openstack/nova/blob/master/nova/compute/monitors/cpu/virt_driver.py
but that is based on the host CPU utilisation, which is why we don't hook it up to this API.
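For completeness, enabling that host-level monitor is just configuration; roughly like this
(option names as I remember them from the nova docs, so double-check them before use):

[DEFAULT]
# load the host-level CPU monitor (publishes cpu.* metrics)
compute_monitors = cpu.virt_driver

[filter_scheduler]
# shortened for the example; keep your existing filters and add MetricsFilter
enabled_filters = ComputeFilter,MetricsFilter

[metrics]
# MetricsWeigher setting: a negative weight prefers hosts with lower cpu.percent
weight_setting = cpu.percent=-1.0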

There was also a NUMA memory bandwidth one proposed at one point:
https://review.opendev.org/c/openstack/nova/+/270344

These monitors can be used with the metrics filter, and since this is a stevedore entry
point you can write your own (a rough sketch follows).
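Something like this, with the caveat that the MonitorBase/populate_metrics interface and
the MonitorMetric object are recalled from the in-tree virt_driver monitor, so treat the
exact names as assumptions and verify them against the tree:

# example_monitor.py -- hypothetical out-of-tree monitor plugin.
# Interface names recalled from nova/compute/monitors/base.py and
# nova/compute/monitors/cpu/virt_driver.py; verify against master.
from oslo_utils import timeutils

from nova.compute.monitors import base
from nova import objects
from nova.objects import fields


class ExampleCPUMonitor(base.CPUMonitorBase):
    """Publish a host cpu.percent metric from a custom data source."""

    def get_metric_names(self):
        return set([fields.MonitorMetricType.CPU_PERCENT])

    def populate_metrics(self, metric_list):
        metric = objects.MonitorMetric()
        metric.name = fields.MonitorMetricType.CPU_PERCENT
        metric.value = self._read_cpu_percent()
        metric.timestamp = timeutils.utcnow()
        metric.source = 'example'
        metric_list.objects.append(metric)

    def _read_cpu_percent(self):
        # Placeholder: read from /proc, libvirt, an agent, etc.
        return 0

The plugin would then be registered under the nova.compute.monitors.cpu entry point
namespace in setup.cfg (e.g. example = example_monitor:ExampleCPUMonitor) and enabled
with compute_monitors = cpu.example; again, the names here are illustrative.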
Since that data is not hooked up to the diagnostics endpoint, we don't have that info in
the API response. I believe we can get a per-instance view from libvirt too (see the
sketch below), so the libvirt driver could provide some of this info, but it was never
implemented. There is a performance overhead to collecting that info, however, so we
support disabling the PMU in the guest to reduce it; that is normally only important for
realtime instances. When the PMU is disabled there is no way for libvirt to get this
info from qemu, as far as I know.
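The per-instance view I mean is roughly the following, using plain libvirt-python
outside of nova; the domain name is illustrative, and a utilisation figure has to be
derived by sampling the cumulative CPU time twice:

# Derive per-instance CPU utilisation from libvirt by sampling cpu_time.
import time

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # illustrative domain name


def total_cpu_time_ns(domain):
    # getCPUStats(True) returns a single aggregate record with
    # cpu_time in nanoseconds summed across all vCPUs.
    return domain.getCPUStats(True)[0]['cpu_time']


INTERVAL = 5.0
t1 = total_cpu_time_ns(dom)
time.sleep(INTERVAL)
t2 = total_cpu_time_ns(dom)

ncpus = dom.info()[3]  # dom.info() -> [state, maxMem, mem, nrVirtCpu, cpuTime]
utilisation = (t2 - t1) / (INTERVAL * 1e9 * ncpus)
print('utilisation: %.1f%%' % (utilisation * 100))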

In any case I agree with Stephen that this is not expected to work for libvirt currently.

> 
> Hope this helps,
> Stephen
> 
> > API Result:
> > {
> >     "state": "running",
> >     "driver": "libvirt",
> >     "hypervisor": "kvm",
> >     "hypervisor_os": "linux",
> >     "uptime": 10020419,
> >     "config_drive": false,
> >     "num_cpus": 2,
> >     "num_nics": 1,
> >     "num_disks": 2,
> >     "disk_details": [
> >         {"read_bytes": 983884800, "read_requests": 26229,
> >          "write_bytes": 7373907456, "write_requests": 574537,
> >          "errors_count": -1},
> >         {"read_bytes": 3215872, "read_requests": 147,
> >          "write_bytes": 0, "write_requests": 0, "errors_count": -1}
> >     ],
> >     "cpu_details": [
> >         {"id": 0, "time": 12424380000000, "utilisation": null},
> >         {"id": 1, "time": 12775460000000, "utilisation": null}
> >     ],
> >     "nic_details": [
> >         {"mac_address": "fa:16:3e:0e:c7:f2",
> >          "rx_octets": 943004980, "rx_errors": 0, "rx_drop": 0,
> >          "rx_packets": 4464445, "rx_rate": null,
> >          "tx_octets": 785254710, "tx_errors": 0, "tx_drop": 0,
> >          "tx_packets": 4696786, "tx_rate": null}
> >     ],
> >     "memory_details": {"maximum": 2, "used": 2}
> > }
> > 
> > Regards,
> > Muhammad Ahsan
> 