[ops] [nova] How to get CPUtime and wallclocktime consumed by a project (without using ceilometer)?

Massimo Sgaravatto massimo.sgaravatto at gmail.com
Wed Dec 12 15:32:53 UTC 2018


If I am not wrong, another problem with the "openstack usage show" command
is that it doesn't take into account instances that are in the shadow
tables (i.e. deleted instances moved to the shadow tables using the
'nova-manage db archive_deleted_rows' command).
Or maybe the problem is with 'nova-manage db archive_deleted_rows' itself,
which doesn't allow specifying a time range (i.e. archiving only instances
deleted more than x days ago) :-)
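
As a workaround I am thinking of reading the shadow tables directly.
An untested sketch (the DB URL is a placeholder, and I am assuming the
default shadow_instances table with launched_at, terminated_at and
vcpus populated for the rows of interest):

    import sqlalchemy

    # placeholder credentials/host: point this at the nova database
    engine = sqlalchemy.create_engine(
        'mysql+pymysql://nova:secret@dbhost/nova')
    query = sqlalchemy.text("""
        SELECT project_id,
               SUM(TIMESTAMPDIFF(SECOND, launched_at, terminated_at)
                   * vcpus) / 3600.0 AS cpu_hours
          FROM shadow_instances
         WHERE launched_at IS NOT NULL
           AND terminated_at IS NOT NULL
         GROUP BY project_id
    """)
    with engine.connect() as conn:
        # wallclock hours * vcpus per project, archived instances only
        for project_id, cpu_hours in conn.execute(query):
            print(project_id, cpu_hours)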


I had a quick look at the collectd virt plugin. It would be great if it
also allowed using the OpenStack project id when reporting stats
(since I am interested in producing per-project accounting reports) ...
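
In the meantime I guess the mapping can be done on our side, since nova
sets the libvirt domain uuid to the instance uuid. A rough sketch using
openstacksdk (the cloud name is a placeholder from clouds.yaml):

    import openstack

    conn = openstack.connect(cloud='mycloud')

    def project_for_domain(domain_uuid):
        # resolve a libvirt domain uuid (== nova instance uuid)
        # to the owning OpenStack project id
        server = conn.compute.get_server(domain_uuid)
        return server.project_id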

Thanks again

Cheers, Massimo

On Thu, Nov 29, 2018 at 11:57 AM Massimo Sgaravatto <
massimo.sgaravatto at gmail.com> wrote:

> Thanks a lot for the useful information !
>
> Cheers, Massimo
>
> On Wed, Nov 28, 2018 at 10:20 PM Sean Mooney <smooney at redhat.com> wrote:
>
>> On Wed, 2018-11-28 at 12:14 -0500, Jay Pipes wrote:
>> > On 11/28/2018 10:38 AM, Massimo Sgaravatto wrote:
>> > > Hi
>> > >
>> > > I was wondering if nova allows getting the CPUtime and
>> > > wallclocktime consumed by a project in a certain time period,
>> > > without using ceilometer.
>> > >
>> > > Among the data returned by the command "openstack usage show"
>> > > there is also a "CPU Hours" value but, if I am not wrong, this is
>> > > actually the WallClockTime. Did I get it right?
>> >
>> > It's neither. It's the calculated time that the VM has been "up"
>> > multiplied by the number of vCPUs the VM consumes.
>> >
>> > It's basically worthless as anything more than a simplistic
>> > indicator of rough resource consumption.
>> >
>> > You can see how the calculation is done by following the fairly
>> > convoluted code in the os-simple-tenant-usage API:
>> >
>> > This calculates the "hours used":
>> >
>> >
>> > https://github.com/openstack/nova/blob/62245235bc15da6abcdfd3df1c24bd856d69fbb4/nova/api/openstack/compute/simple_tenant_usage.py#L51-L82
>> >
>> > And here is where that is multiplied by the VM's vCPUs:
>> >
>> >
>> > https://github.com/openstack/nova/blob/62245235bc15da6abcdfd3df1c24bd856d69fbb4/nova/api/openstack/compute/simple_tenant_usage.py#L213
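>> >
>> > In rough pseudocode (a simplified paraphrase of the linked code, not
>> > the exact implementation), the logic is roughly:
>> >
>> >     def hours_for(instance, period_start, period_stop):
>> >         # clamp the instance's lifetime to the requested period
>> >         begin = max(instance.launched_at, period_start)
>> >         end = min(instance.terminated_at or period_stop, period_stop)
>> >         if end <= begin:
>> >             return 0.0
>> >         return (end - begin).total_seconds() / 3600.0
>> >
>> >     # "CPU Hours" is then simply:
>> >     # cpu_hours = hours_for(instance, start, stop) * instance.vcpus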
>> >
>> > > If so, is it also possible to get the CPUTime?
>> >
>> > If you are referring to getting the amount of time a *physical host
>> > CPU* has spent performing tasks for a particular VM, the closest you
>> > can get to this would be the "server diagnostics" API:
>> >
>> >
>> > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/views/server_diagnostics.py
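>> >
>> > For example, a minimal untested sketch using python-novaclient,
>> > assuming you already have an authenticated keystoneauth session:
>> >
>> >     from novaclient import client as nova_client
>> >
>> >     # microversion 2.48 standardized the diagnostics payload
>> >     nova = nova_client.Client('2.48', session=keystone_session)
>> >     _, diag = nova.servers.diagnostics(server_uuid)
>> >     # with >= 2.48 the body carries a 'cpu_details' list whose
>> >     # entries report per-vCPU 'time' in nanoseconds
>> >     for cpu in diag.get('cpu_details', []):
>> >         print(cpu)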
>> >
>> > That said, the server diagnostics API is a very thin shim over a
>> > virt-driver-specific interface:
>> >
>> >
>> > https://github.com/openstack/nova/blob/62245235bc15da6abcdfd3df1c24bd856d69fbb4/nova/compute/manager.py#L4698
>> >
>> > The libvirt driver's get_server_diagnostics() (host-specific) and
>> > get_instance_diagnostics() (VM-specific) implementations are here:
>> >
>> >
>> > https://github.com/openstack/nova/blob/62245235bc15da6abcdfd3df1c24bd856d69fbb4/nova/virt/libvirt/driver.py#L8543-L8678
>> >
>> > You might want to look at that code and implement a simple
>> > collectd/statsd/fluentd/telegraf collector to grab those stats directly
>> > from the libvirt daemon on the compute nodes themselves.
>>
>> If you go the collectd route, the libvirt plugin was modified about two
>> years ago to be able to use the domain uuid instead of the domain
>> instance name when reporting stats, and since we set the domain uuid to
>> the nova uuid it is really easy to map back to the instance later. So I
>> think the collectd libvirt plugin may already be able to give you this
>> info, and you can then just export it to influxdb or gnocchi for later
>> use. The other monitoring solutions can probably do the same, but nova
>> does not really track vm cpu usage that closely.
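>>
>> as a quick illustration, an untested sketch of reading the raw
>> per-domain cpu time directly via the libvirt python bindings on a
>> compute node:
>>
>>     import libvirt
>>
>>     conn = libvirt.open('qemu:///system')
>>     for dom in conn.listAllDomains():
>>         # dom.info() -> [state, maxMem, memory, nrVirtCpu, cpuTime];
>>         # cpuTime is cumulative guest cpu time in nanoseconds, and the
>>         # domain uuid matches the nova instance uuid
>>         state, maxmem, mem, ncpu, cputime_ns = dom.info()
>>         print(dom.UUIDString(), cputime_ns / 1e9, 'seconds')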
>> >
>> > Best,
>> > -jay
>> >
>>
>>