[OpenStack-Infra] Options for logstash of ansible tasks

Ian Wienand iwienand at redhat.com
Wed Mar 28 06:57:11 UTC 2018


On 03/28/2018 11:30 AM, James E. Blair wrote:
> As soon as I say that, it makes me think that the solution to this
> really should be in the log processor.  Whether it's a grok filter, or
> just us parsing the lines looking for task start/stop -- that's where we
> can associate the extra data with every line from a task.  We can even
> generate a uuid right there in the log processor.

I'd agree the logstash level is probably where to do this.  How to
achieve that ...

In trying to bootstrap myself on the internals of this, one thing I've
found is that the multi-line filter [1] has been deprecated in favour
of the multiline codec plugin [2].
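
For reference, the codec version hangs off the input definition rather
than living in the filter chain; a minimal sketch (the port and pattern
here are only illustrative, not what we actually run) would be
something like:

  input {
    tcp {
      port => 9999
      codec => multiline {
        # join indented continuation lines onto the preceding line
        pattern => "^\s"
        what => "previous"
      }
    }
  }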

We make extensive use of this deprecated filter [3].  It's not clear
how we would go about migrating away from it.  The input arrives via
the "json_lines" codec [4] as essentially a JSON dict carrying a tag,
and we apply different multi-line matches depending on that tag.
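
To illustrate, the current filters are shaped roughly like this (the
tag name and pattern are invented for the example); the tag on each
incoming event selects which multi-line match applies:

  filter {
    if "console" in [tags] {
      multiline {
        # lines not starting with a timestamp are continuations
        # of the previous line
        pattern => "^%{TIMESTAMP_ISO8601}"
        negate => true
        what => "previous"
      }
    }
  }

Since a codec attaches to a single input and runs before the filter
chain, there is no obvious place to hang per-tag conditionals like
that, which is why the migration path isn't clear to me.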

From what I can tell, the work of dealing with multiple lines has
actually largely been moved into filebeat [5], which is analogous to
our logstash-workers (it feeds the files into logstash).
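
From the examples in [5], the joining happens on the shipper side
before anything reaches logstash; a minimal sketch (the path and
pattern are invented) looks like:

  filebeat.prospectors:
    - type: log
      paths:
        - /var/log/job-output.txt
      # treat indented lines as continuations of the previous line
      multiline.pattern: '^\s'
      multiline.negate: false
      multiline.match: after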

Ergo, do we have to add multi-line support to our log processor
pipeline, so that events sent into logstash are already bundled
together?

-i

[1] https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html
[2] https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
[3] https://git.openstack.org/cgit/openstack-infra/logstash-filters/tree/filters/openstack-filters.conf
[4] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/logstash/input.conf.erb
[5] https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html


