[openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.

Won wjstk16 at gmail.com
Fri Nov 30 07:40:18 UTC 2018


Hi,

I checked that both of the methods you propose work well.
After I added the 'should_delete_outdated_entities' function to InstanceDriver,
the old instance was cleared after about 10 minutes.
I also added the two lines you suggested to nova-cpu.conf, and now the vitrage
collector receives the notifications correctly.

Thank you for your help.

Best regards,
Won

On Thu, Nov 22, 2018 at 9:35 PM, Ifat Afek <ifatafekn at gmail.com> wrote:

> Hi,
>
> A deleted instance should be removed from Vitrage in one of two ways:
> 1. By reacting to a notification from Nova
> 2. If no notification is received, then after a while the instance vertex
> in Vitrage is considered "outdated" and is deleted
>
> Regarding #1, it is clear from your logs that you don't get notifications
> from Nova on the second compute.
> Do you have, on one of your nodes, a nova-cpu.conf in addition to nova.conf?
> If so, please make the same change in that file:
>
> notification_topics = notifications,vitrage_notifications
>
> notification_driver = messagingv2
>
> And please make sure to restart the nova-compute service on that node.
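>
> By the way, depending on your Nova version these two option names may be the
> old deprecated spellings. If so, the same settings, assuming a standard
> oslo.messaging configuration, go under the [oslo_messaging_notifications]
> section instead:
>
>     [oslo_messaging_notifications]
>     driver = messagingv2
>     topics = notifications,vitrage_notifications
>
> Either form should work, as long as nova-compute is restarted afterwards.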
>
> Regarding #2, as a second-best solution, the instances should be deleted
> from the graph after not being updated for a while.
> I realized that we have a bug in this area and I will push a fix to gerrit
> later today. In the meantime, you can add the following function to the
> InstanceDriver class:
>
>     @staticmethod
>     def should_delete_outdated_entities():
>         # Allow deletion of instance vertices that have not
>         # been updated for a while.
>         return True
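>
> To illustrate what this flag controls (a simplified sketch, not the actual
> Vitrage code; the function and field names below are made up):
>
>     # Simplified sketch -- not the real Vitrage implementation.
>     import time
>
>     OUTDATED_SECONDS = 600  # example threshold
>
>     def drop_outdated(vertices, driver, now=None):
>         # If the datasource did not opt in, keep everything.
>         if not driver.should_delete_outdated_entities():
>             return vertices
>         now = time.time() if now is None else now
>         # Drop vertices that get_all has not refreshed recently.
>         return [v for v in vertices
>                 if now - v['updated_at'] <= OUTDATED_SECONDS]
>
> With the override returning True, an instance vertex that stops appearing in
> get_all results becomes eligible for deletion after a while.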
>
> Let me know if it solved your problem,
> Ifat
>
>
> On Wed, Nov 21, 2018 at 1:50 PM Won <wjstk16 at gmail.com> wrote:
>
>> I attached four log files.
>> I collected the logs from about 17:14 to 17:42. I created an instance named
>> 'deltesting3' at 17:17. Seven minutes later, at 17:24, the entity graph showed
>> deltesting3, and the vitrage collector and graph log entries appeared.
>> When I create an instance on the ubuntu server, it appears immediately in the
>> entity graph and logs, but when I create an instance on computer1 (the multi
>> node), it appears about 5~10 minutes later.
>> I deleted the 'deltesting3' instance around 17:26.
>>
>>
>>> After ~20 minutes, there was only Apigateway. Does that make sense? Did you
>>> delete the instances on ubuntu, in addition to deltesting?
>>>
>>
>> I only deleted 'deltesting'. After that, only the logs from 'apigateway'
>> and 'kube-master' were collected, but the other instances were working well. I
>> don't know why only those two instances appear in the log.
>> In the Nov 19 log, 'apigateway' and 'kube-master' were collected continuously
>> at short intervals, while the other instances were only collected at long
>> intervals.
>>
>> In any case, I would expect to see the instances deleted from the graph
>>> at this stage, since they were not returned by get_all.
>>> Can you please send me the log of vitrage-graph at the same time (Nov
>>> 15, 16:35-17:10)?
>>>
>>
>> Information about 'deltesting3', which has already been deleted, continues to
>> be collected in vitrage-graph.service.
>>

