[OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

Clark Boylan cboylan at sapwetik.org
Tue Dec 4 20:27:28 UTC 2018


On Mon, Dec 3, 2018, at 9:12 AM, Whaley, Graham wrote:
>

snip

> 
> I spoke with some of the other Kata folks - we agreed I'd try to move 
> the Kata metrics CI into Zuul utilizing the packet.net hardware, and 
> we'll see how that pans out. I think that will help both sides 
> understand the current state of kata/zuul so we can move things 
> forward there.
> 
> Wrt the packet.net slaves, I believe we can do that using some of the 
> packet.net/zuul integration work done by John Studarus - John and I 
> had some chats at the Summit in Berlin.
> https://opensource.com/article/18/10/building-zuul-cicd-cloud
> 
> I'll read up on Zuul and work out how to PR the additional 
> ansible/yaml items to the infra repos to add the metrics build/runs (I 
> see the repos and code, and a metrics run is very similar to a normal 
> kata CI run - to begin with we can do those runs in the VM builders to 
> test out the flows before moving to the packet.net hardware).

I think running the jobs on VMs to get the workflow going first is a great idea.

> 
> [move this down to the end...]
> > 
> > No, we would inject the data through the existing test node -> Zuul 
> > -> Logstash -> Elasticsearch path.
> 
> This might be one bit we have to work out. The metrics runs generate 
> raw JSON results. The best method I found previously for direct JSON 
> injection into logstash, and thus elastic, was using the socket 
> filebeat. It is not clear in my head how that ties in with Zuul - 
> will it fit in with the infra?
> 

The current setup has the Zuul job list out the files we want logstash 
to process into elasticsearch, then submit gearman jobs to a tool that 
sits in front of logstash and feeds it. We did this because, way back 
when, the "officially" documented way to feed logstash was via Redis, 
and we found that model needed far more memory than this one. The 
other upside to this setup is that the Zuul job only submits requests 
for processing; the processing itself happens asynchronously, so we 
can report results back to Gerrit (or GitHub, etc.) without waiting 
for the elasticsearch side to finish.
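
To make that submission side concrete, here is a minimal sketch using 
the pure-Python gear library (which the infra scripts linked below are 
built on). The gearman function name and the payload fields are my own 
illustrative assumptions, not necessarily what our tooling actually 
sends:

    # A minimal sketch of the submission side using the pure-Python
    # "gear" library. The function name "push-log" and the payload
    # fields are illustrative assumptions.
    import json
    import gear

    client = gear.Client()
    client.addServer('logstash.example.org', 4730)  # hypothetical host
    client.waitForServer()

    payload = {
        'source_url': 'https://logs.example.org/build/metrics.json',
        'fields': {'build_name': 'kata-metrics'},
    }
    # Submit as a background job: the client does not wait for the
    # worker to finish, which is what lets Zuul report back to Gerrit
    # without blocking on elasticsearch.
    job = gear.Job(b'push-log', json.dumps(payload).encode('utf8'))
    client.submitJob(job, background=True)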

For your use case, the way I envision the json data getting fed into 
logstash is via this same mechanism. Your job would log the json data, 
Zuul would submit a gearman job request to have it processed, and that 
data would then be fetched and fed into logstash. Since this is metric 
data rather than log event data, we may need a slightly different 
gearman job that knows how to feed logstash the json specifically 
rather than log lines. Then, on the logstash side, a tcp input would 
ingest that data and send it down a different filter path from the log 
inputs.
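
To illustrate that last step, the sketch below shows one way the 
fetched metrics could be pushed into such a tcp input as 
newline-delimited JSON (which a json_lines codec on the input would 
decode). The port and the tag used to select the filter path are 
assumptions:

    # A sketch of feeding fetched metrics into a logstash tcp input as
    # newline-delimited JSON. Assumes logstash has a tcp input with the
    # json_lines codec; the port (9999) and the "kata-metrics" tag used
    # to select the filter path are made-up values.
    import json
    import socket

    def send_metrics(events, host='localhost', port=9999):
        sock = socket.create_connection((host, port))
        try:
            for event in events:
                # Tag the event so filters can route metric data down
                # a different path from the log inputs.
                event['tags'] = ['kata-metrics']
                sock.sendall(json.dumps(event).encode('utf8') + b'\n')
        finally:
            sock.close()

    send_metrics([{'test': 'boot-time', 'value': 1.23, 'units': 's'}])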

I'm happy to help with this as I've got far too much insider info on 
how this all works. If you'd like to look at the internals, the Zuul 
job side happens in [0], and a small script that runs as a service in 
front of logstash [1] feeds the logstashes. If we can get the outline 
of a job running on a VM that stashes the json data in its logs, that 
will give us a good start on figuring out how to plug it into this 
system.

[0] https://git.openstack.org/cgit/openstack-infra/project-config/tree/roles/submit-log-processor-jobs/library/submit_log_processor_jobs.py
[1] https://git.openstack.org/cgit/openstack-infra/puppet-log_processor/tree/files/log-gearman-worker.py
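
As a starting point for that outline, the metrics run could simply 
write its results as one JSON document per line into the job's log 
directory, roughly like this (the environment variable, file name, and 
result fields are only a guess at what the Kata metrics runs produce):

    # A sketch of the job side: write metrics results as one JSON
    # document per line into the job's log directory so they can later
    # be fetched and replayed into logstash. The ZUUL_LOG_DIR variable,
    # file name, and result fields are all hypothetical.
    import json
    import os

    LOG_DIR = os.environ.get('ZUUL_LOG_DIR', 'logs')

    def stash_results(results, name='metrics.json'):
        os.makedirs(LOG_DIR, exist_ok=True)
        with open(os.path.join(LOG_DIR, name), 'w') as f:
            for result in results:
                f.write(json.dumps(result) + '\n')

    stash_results([{'test': 'boot-time', 'value': 1.23, 'units': 's'}])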

Clark


