filebeat + ELK

I am trying to ingest Bro 2.5 JSON logs into an ELK stack, using Filebeat to push the logs. Is that even the best way to do this? I have found much outdated material on ingesting Bro logs into an ELK stack, very little that is current, and some of what looks current still targets older versions of the Elastic stack. If anyone has a modern Bro/ELK integration document they used to set up their environment, it would be greatly appreciated if you could share it. Thanks!

Erik

Do you specifically need to send it to Logstash, or do you just need it inserted into Elasticsearch?

Jon

Erik,

I’m doing this with Ubuntu and Pi devices. I’ll send you all of my notes outside of the main channel.

Patrick Kelley, CISSP, C|EH, ITIL
Principal Security Engineer
patrick.kelley@criticalpathsecurity.com

I guess you're referring to Security Onion; they have already done that and have a lot of Logstash config files.

Hats off to the SO folks and to Justin Henderson.

Sending details outside of the mailing list is not cool and against what the open source community stands for.

Anyway, we've had great success taking Bro JSON logs, shipping them over to RabbitMQ with syslog-ng (no parsing done on the syslog-ng side), and fetching them with MozDef workers (which are Python).

6k EPS, no sweat.
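For anyone who wants a picture of what such a worker can look like, here is a minimal sketch, not MozDef's actual code: it assumes a local RabbitMQ queue named bro_logs, a local Elasticsearch, and records that may carry a _stream field naming the originating log (all of those names are made up for illustration).

    import json

    import pika                              # pip install pika
    from elasticsearch import Elasticsearch  # pip install elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    def on_message(channel, method, properties, body):
        record = json.loads(body)
        # one index per log type so the mappings don't collide
        index = "bro-%s" % record.get("_stream", "unknown")
        # recent elasticsearch-py; older clients take body= instead of document=
        es.index(index=index, document=record)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="bro_logs", durable=True)
    channel.basic_consume(queue="bro_logs", on_message_callback=on_message)
    channel.start_consuming()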

By the way, Bro can log in JSON format, which can be ingested directly into Elasticsearch.
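If it helps, switching the standard logs to JSON in Bro 2.5 should just be a matter of loading the stock tuning script in local.bro (or setting the redef yourself); double-check the exact path against your install:

    # local.bro
    @load tuning/json-logs

    # or, equivalently:
    redef LogAscii::use_json = T;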

Michał,

It’s not cool to assume the role of defining how others give away their personal time and efforts. I’m sure you’ll manage just fine with my pushing unpolished notes to a particular person, as opposed to mass transmitting what could be a complete “goat rodeo” for everyone else.

The “community” worked just fine. A person had a need. They asked. It was filled.

Go have a coke and a smile.

-PK

I would use JSON to stdout with a Python script to insert it into Elasticsearch. I think it's the most efficient and stable method. The latest Elasticsearch needs a separate index for each of the different log types (recent versions dropped support for multiple mapping types in one index). There is a bro-pkg for JSON to stdout.

I just need to get it into ES. I am going to pump eve.json in as well. I have no experience with the ELK stack at all, other than some ES work from dealing with Moloch content going in there and configuring it appropriately. If I can just bypass everything and push eve.json and the Bro JSON logs directly in, that would be fantastic.

Thanks Jon!

No guarantees, but this [1] may be helpful. I've recently moved to pushing things to Kafka using this [2], which eventually feeds into ES via Apache Metron; that adds some other benefits but is meant for large-scale environments (i.e., it is definitely not lightweight).

1: https://github.com/bro/bro-plugins/tree/00d039442b97ba545e6020200d96a3cba9d9181b/elasticsearch
2: https://github.com/apache/metron-bro-plugin-kafka

Jon

So at work I was using Logstash on the Bro box, reading each file, parsing and enriching the data, then sending it to Elasticsearch. But that was taking too many resources away from Bro, so now I'm using Filebeat to send each log to a Logstash server, which parses, enriches, and sends to Elasticsearch.

At home I'm using syslog-ng to send Bro logs to Logstash.

The suggestion to use RabbitMQ is good as well.
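For reference, a stripped-down Logstash pipeline for that Filebeat -> Logstash -> Elasticsearch setup could look like the following; the port, hosts, and index name are placeholders, and it assumes Bro is already writing JSON so the json filter does the parsing:

    input {
      beats {
        port => 5044
      }
    }

    filter {
      json {
        source => "message"
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "bro-%{+YYYY.MM.dd}"
      }
    }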

Undoubtedly, go ahead with Filebeat straight into Elasticsearch and you should be good to go. ES will index the documents automatically since they are already JSON.
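A minimal filebeat.yml for that direct-to-ES approach might look something like the sketch below; the paths, hosts, and index name are assumptions, and the prospectors/inputs syntax varies a bit between Filebeat versions:

    filebeat.prospectors:
      - input_type: log
        paths:
          - /usr/local/bro/logs/current/*.log
          - /var/log/suricata/eve.json
        json.keys_under_root: true
        json.add_error_key: true

    output.elasticsearch:
      hosts: ["localhost:9200"]
      index: "bro-%{+yyyy.MM.dd}"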


No, no Logstash. Like this, with Bro writing JSON to stdout: the Python script takes data from stdin and writes it in real time into Elasticsearch. You need to add a _stream field so you know which type of log each record is.
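As a rough illustration (not the actual script), a stdin-to-Elasticsearch reader along those lines might look like this; the _stream handling and the bro-<type> index naming are assumptions based on the description above:

    import json
    import sys

    from elasticsearch import Elasticsearch          # pip install elasticsearch
    from elasticsearch.helpers import streaming_bulk

    es = Elasticsearch(["http://localhost:9200"])

    def actions():
        # one JSON record per line on stdin, piped from Bro
        for line in sys.stdin:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # _stream says which Bro log the record came from;
            # one index per log type keeps the mappings from colliding
            stream = record.get("_stream", "unknown")
            yield {"_index": "bro-%s" % stream, "_source": record}

    # streaming_bulk batches the inserts but consumes stdin as it arrives
    for ok, result in streaming_bulk(es, actions()):
        if not ok:
            sys.stderr.write("failed to index: %r\n" % (result,))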

On this subject: we've had issues with both Filebeat and Logstash reading logs (written to files) once events per second reach upwards of 3k. We are currently looking into using the Bro Kafka plugin. Has anyone else had issues with Logstash or Filebeat bottlenecking?

I've had some issues like you describe with Logstash, at about the same EPS. I moved away from Filebeat some time ago, for unrelated reasons.

Kafka has worked quite well. I recommend Apache Metron.

https://metron.apache.org/current-book/metron-sensors/bro-plugin-kafka/index.html

bro -N should output the following:

Apache::Kafka (dynamic, version 0.2)
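For reference, wiring the plugin up in local.bro looks roughly like this; the topic name and broker address are placeholders, and the plugin README has the full set of options:

    @load Bro/Kafka/logs-to-kafka.bro
    redef Kafka::logs_to_send = set(Conn::LOG, HTTP::LOG, DNS::LOG);
    redef Kafka::topic_name = "bro";
    redef Kafka::kafka_conf = table(
        ["metadata.broker.list"] = "localhost:9092"
    );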

I had Logstash reading the Bro HTTP log and then turned on DNS lookups in Logstash. It quickly got overloaded. I turned DNS off in Logstash and had no more issues of that sort. The Logstash geoip filter (with the GeoLite database) is able to keep up.

I am not using JSON files, btw.

David

One unresolved issue is that my HTTP data often has a uri field that does not get parsed correctly by Logstash. If the uri (or any field) contains " (quote) characters, it causes a _csvparsefailure in Logstash. I posed the problem on discuss.elastic.co but got no solution, just the suggestion to replace the quote characters with something else, and hints elsewhere that the problem might lie deep in the Ruby libraries Logstash uses. Search for "CSV Filter - Quote character causing _csvparsefailure" to see the interactions.
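For what it's worth, that replace-the-quotes suggestion can be expressed as a mutate ahead of the csv filter. This is only a sketch of the workaround, not a fix for the underlying quoting behavior:

    filter {
      # swap double quotes for single quotes before csv parsing,
      # so the CSV parser never sees a stray quote character
      mutate {
        gsub => ["message", "\"", "'"]
      }
      csv {
        # Bro's default logs are tab-separated; this is a literal tab
        separator => "	"
      }
    }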

David

I ended up using Logstash, rsyslog, ES, and Kibana. Next up: Yelp's ElastAlert! Thank you all for your assistance!
