[OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

Whaley, Graham graham.whaley at intel.com
Fri Oct 19 10:36:21 UTC 2018


> Paul [0] and I [1] had both responded a while back with some thoughts on next
> steps to make this possible, but those only went to the infra mailing list (I didn't
> check if you were subscribed). Hopefully the steps there make sense and we can
> move forward with something along those lines?

Ah, mea culpa! - I should have made it clearer I was not subscribed to the list, and pushed myself onto the CC. Apologies for that.

Let's tackle what looks like the easier one first:

> --- copied from the list ---
> Yah, i think it would be great if we could use the JJB / grafyaml
> workflow here where we store the view / dashboard in yaml then write a
> job to push them into the application.  Then we don't need to deal with
> public authentication.

If I understand then, we'd set up a kata git repo (or somewhere in our existing kata CI repo) to store the grafyaml files that define our views. Upon merges, a JJB-defined Zuul job would take those, convert them to JSON and inject them into the grafana index of the elastic DB. That sounds nice to me, and I think should be workable. I like that flow.
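For illustration, a grafyaml dashboard definition along those lines might look something like the sketch below. The dashboard title, panel layout and the query field are all hypothetical here, just to show the shape of the YAML we'd be storing in the repo:

```yaml
# Hypothetical grafyaml sketch for a Kata metrics dashboard.
# Panel names and the query field are placeholders, not real metrics.
dashboard:
  title: Kata Metrics
  rows:
    - title: Boot times
      height: 250px
      panels:
        - title: Average boot time (s)
          type: graph
          span: 12
          datasource: elasticsearch
          targets:
            - query: 'test-name:"boot-times"'
```

The Zuul job would then run grafyaml over files like this on merge and push the resulting JSON into grafana, so nobody needs write credentials to the dashboard UI itself.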

For the data injection itself then:
> --- copied from the list ---
> Currently all of our jobs submit jobs to the logstash processing system via the ansible role here [0]. That is then fed through our logstash configuration [1]. The first step in this is probably to update the logstash configuration and update the existing Kata jobs in zuul to submit jobs to index that information as appropriate?

Right, I think the logstash config will need updating to understand there is metrics JSON being injected, and to pass it on. We'll have to see how we get the filter for the JSON into that setup, and whether we need (or can have) the filebeat inject a specific tag, for instance.
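As a rough sketch of what that filter change might look like, assuming we can get filebeat to tag the events (the tag name and field layout below are my assumptions, not the current config):

```
# Hypothetical logstash filter: if filebeat tagged the event as
# Kata metrics, parse the JSON payload out of the message field
# into its own subtree so it gets indexed as structured data.
filter {
  if "kata-metrics" in [tags] {
    json {
      source => "message"
      target => "metrics"
    }
  }
}
```

Events without the tag would fall through untouched, so the existing CI log processing shouldn't notice the difference.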

I don't think the Zuul Ansible role will be applicable - the metrics run on bare metal machines running Jenkins, and export their JSON results via a filebeat socket. My theory was we'd then add the socket input to the logstash server to receive from that filebeat - as in my gist at https://gist.github.com/grahamwhaley/aa730e6bbd6a8ceab82129042b186467
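On the sending side, the filebeat config could be roughly as below. The paths, host and certificate locations are placeholders (the gist above has my actual test setup); the tag matches the hypothetical one I mentioned for the logstash filter:

```yaml
# Hypothetical filebeat.yml fragment on the Jenkins metrics machine.
# Paths, hostname and CA file are placeholders.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/kata-metrics/*.json
    tags: ["kata-metrics"]

output.logstash:
  hosts: ["logstash.example.org:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
```

This would pair with a `beats { port => 5044 ssl => true }` input block on the logstash server to receive the events.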

One crux here is that the metrics have to run on a machine with guaranteed performance (so not a shared/virtual cloud instance), and hence currently run under Jenkins and not on the OSF/Zuul CI infra.

Let me know if you see any issues with that Jenkins/filebeat/socket/JSON flow.

I need to deploy a new machine to process master branch merges and generate the data (the machine we currently have processes PRs at submission time, not at merge, which is not the data we want to track long term). I'll let you know when I have that up and running. If we wanted to move on this earlier, I could inject data into a test index from my local test setup - all it would need, I believe, is the valid keys for the filebeat->logstash connection.

> Clark
Thanks!
  Graham (now on copy ;-)

