[telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics

Lingxian Kong anlin.kong at gmail.com
Thu Aug 1 00:01:30 UTC 2019


Hi Bernd,

There were a lot of people asked the same question before, unfortunately, I
don't know the answer either(we are still using an old version of
Ceilometer). The original cpu_util support has been removed from Ceilometer
in favor of Gnocchi, but AFAIK, there is no doc in Gnocchi mentioned how to
achieve the same thing and no clear answer from the Gnocchi maintainers.

It'd be much appreciated if you could find the answer in the end, or there
will be someone who has the already solved the issue.

Best regards,
Lingxian Kong
Catalyst Cloud


On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch <berndbausch at gmail.com> wrote:

> The message at the end of this email is some three months old. I have the
> same problem. The question is: *How to use the new rate metrics in
> Gnocchi. *I am using a Stein Devstack for my tests.
>
> For example, I need the CPU rate, formerly named *cpu_util*. I created a
> new archive policy that uses *rate:mean* aggregation and has a 1 minute
> granularity:
>
> $ gnocchi archive-policy show ceilometer-medium-rate
>
> +---------------------+------------------------------------------------------------------+
> | Field               |
> Value                                                            |
>
> +---------------------+------------------------------------------------------------------+
> | aggregation_methods | rate:mean,
> mean                                                  |
> | back_window         |
> 0                                                                |
> | definition          | - points: 10080, granularity: 0:01:00, timespan: 7
> days, 0:00:00 |
> | name                |
> ceilometer-medium-rate                                           |
>
> +---------------------+------------------------------------------------------------------+
>
> I added the new policy to the publishers in *pipeline.yaml*:
>
> $ tail -n5 /etc/ceilometer/pipeline.yaml
> sinks:
>     - name: meter_sink
>       publishers:
>           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift
>           *-
> gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift*
>
> After restarting all of Ceilometer, my hope was that the CPU rate would
> magically appear in the metric list. But no: All metrics are linked to
> archive policy *medium*, and looking at the details of an instance, I
> don't detect anything rate-related:
>
> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11
>
> +-----------------------+---------------------------------------------------------------------+
> | Field                 |
> Value                                                               |
>
> +-----------------------+---------------------------------------------------------------------+
> ...
> | metrics               | compute.instance.booting.time:
> 76fac1f5-962e-4ff2-8790-1f497c99c17d |
> |                       | cpu:
> af930d9a-a218-4230-b729-fee7e3796944                           |
> |                       | disk.ephemeral.size:
> 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           |
> |                       | disk.root.size:
> 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                |
> |                       | memory.resident:
> 09efd98d-c848-4379-ad89-f46ec526c183               |
> |                       | memory.swap.in:
> 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                |
> |                       | memory.swap.out:
> 4d012697-1d89-4794-af29-61c01c925bb4               |
> |                       | memory.usage:
> 93eab625-0def-4780-9310-eceff46aab7b                  |
> |                       | memory:
> ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        |
> |                       | vcpus:
> e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         |
> | original_resource_id  |
> ae3659d6-8998-44ae-a494-5248adbebe11                                |
> ...
>
> | type                  |
> instance                                                            |
> | user_id               |
> a9c935f52e5540fc9befae7f91b4b3ae                                    |
>
> +-----------------------+---------------------------------------------------------------------+
>
> Obviously, I am missing something. Where is the missing link? What do I
> have to do to get CPU usage rates? Do I have to create metrics? Do I have
> to ask Ceilometer to create metrics? How?
>
> Right now, no instructions seem to exist at all. If that is correct, I
> would be happy to write documentation once I understand how it works.
>
> Thanks a lot.
>
> Bernd
> On 5/10/2019 3:49 PM, info at dantalion.nl wrote:
>
> Hello,
>
> I am working on Watcher and we are currently changing how metrics are
> retrieved from different datasources such as Monasca or Gnocchi. Because
> of this major overhaul I would like to validate that everything is
> working correctly.
>
> Almost all of the optimization strategies in Watcher require the cpu
> utilization of an instance as metric but with newer versions of
> Ceilometer this has become unavailable.
>
> On IRC I received the information that Gnocchi could be used to
> configure an aggregate and this aggregate would then report cpu
> utilization, however, I have been unable to find documentation on how to
> achieve this.
>
> I was also notified that cpu_util is something that could be computed
> from other metrics. When readinghttps://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute
> the documentation seems to agree on this as it states that cpu_util is
> measured by using a 'rate of change' transformer. But I have not been
> able to find how this can be computed.
>
> I was hoping someone could spare the time to provide documentation or
> information on how this currently is best achieved.
>
> Kind Regards,
> Corne Lukken (Dantali0n)
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-discuss/attachments/20190801/d1698490/attachment.html>


More information about the openstack-discuss mailing list