I would like to use the logs-to-elasticsearch.bro script to send Bro logs to Elasticsearch.
My Bro version: 2.4.1
1. If I use this script, do I still need Logstash, or does Bro send its logs directly to Elasticsearch?
2. I followed the official documentation at https://www.bro.org/sphinx/components/bro-plugins/elasticsearch/README.html and added @load bro/ElasticSearch/logs-to-elasticsearch.bro to /usr/local/bro/share/bro/site/local.bro, but it did not work. Is any configuration needed beyond what the documentation describes?
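For reference, a minimal sketch of what local.bro might contain once the ElasticSearch writer plugin is installed. The @load path and the LogElasticSearch redef names below follow the plugin's documentation for this era of Bro, but they can differ between plugin versions, so double-check them against the README that ships with your build:

    # Load the helper script that ships with the ElasticSearch writer plugin.
    @load Bro/ElasticSearch/logs-to-elasticsearch

    # Point the writer at the Elasticsearch node. Defaults are typically
    # localhost:9200; adjust if Elasticsearch runs elsewhere.
    redef LogElasticSearch::server_host = "127.0.0.1";
    redef LogElasticSearch::server_port = 9200;

If the plugin is installed correctly, running bro -N should list it (as Bro::ElasticSearch); if it does not show up there, no amount of local.bro configuration will help.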
I know this discussion has been going on for a while and unfortunately I've been a bit behind the curve on keeping up with it closely. As someone who seems to have been coping with this problem for a while, what do you recommend? Would it be best if we could do nested json documents in the json output? i.e....
I am happy this came up, as I have been going through the same issues while testing Brownian vs. ELK with Bro filters.
If nesting is not supported in Bro's JSON output, it would be nice for it to at least be configurable, since tools like Splunk may already be parsing Bro's current default JSON output.
I use Bro with ELK in production and it works great. I have Bro write JSON, so all my logs are in JSON; Logstash then picks the logs up, and the good folks at Elastic have created a de_dot plugin for the dotted field names. It's not perfect, but with a few mutate filters it works fine for the time being. Kibana is a fine interface for building dashboards and querying the data.
Bro and ELK integration works great with a little tweaking. I'm happy to share some configs if you're interested; the Bro side is sketched below.
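A minimal sketch of that Bro-side configuration, assuming Bro 2.4, where the built-in ASCII writer can emit JSON directly. This goes in local.bro and switches every log over to one JSON object per line, which Logstash can then pick up from disk:

    # Have the default ASCII writer emit JSON instead of tab-separated values.
    redef LogAscii::use_json = T;

The de_dot and mutate work then happens entirely on the Logstash side.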
I actually ran into this again this morning. I patched the Elasticsearch writer and I'm still testing it [1]. It uses some code adapted from "g-clef", who built it into a Kafka output plugin.
I'm not sure about that mechanism, but I think it should be integrated more deeply into Bro, probably into the JSON formatter and then exposed in the writers as a configuration option. Alternatively, I could see a configuration option on the writers (that flows through to the JSON formatter) which produces structured output instead of the flattened output generated now.
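To make that concrete, a purely hypothetical knob along those lines might look like the following; structured_output does not exist in Bro 2.4 and is only meant to illustrate the shape of the option being discussed:

    # Hypothetical option, not part of Bro 2.4: when enabled, the JSON
    # formatter would emit nested objects such as
    #   {"id": {"orig_h": "10.0.0.1", "orig_p": 1234}}
    # instead of the flattened keys ("id.orig_h") produced today.
    redef LogAscii::structured_output = T;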
Seth, I know some people prefer different timestamp formats, so it might be best to parameterize that so it can be modified in script land using the existing Bro formatting libraries. I've found that TS_ISO8601 works extremely well with Elasticsearch's Joda parsing library.
Instead of making the change as you've specified, can you add ISO 8601 output as a config option, the way the ASCII logger does? You can even set the default to ISO 8601, but I think there is some value in having it configurable in the same way it is in other parts of Bro.
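For comparison, this is how the ASCII writer already exposes that choice in Bro 2.4, via a script-land redef; the suggestion is to give the Elasticsearch writer an equivalent option:

    # Together with LogAscii::use_json=T, write timestamps as ISO 8601
    # strings rather than epoch seconds, which Elasticsearch's Joda-based
    # date parsing handles directly.
    redef LogAscii::json_timestamps = JSON::TS_ISO8601;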
I also changed the ts field name to @timestamp, since that is now almost universally the standard field name for timestamps in Elasticsearch data (used by Fluentd, Logstash, and even Spark).
That's actually part of a larger change that I would like to address soon. The timestamps in each log right now are protocol-specific, and you definitely don't always want to use the ts field as the @timestamp field (to keep things concrete). We need to add a new field, included whenever JSON output is enabled (and I could even see a justification for adding it to the ASCII logs too), that represents when the log line was written. It's basically metadata about the log line.
I would also add that I use ELK almost exclusively for Bro logs, but I go through a Kafka output plugin. There's an easy Chef-automated setup for a simple test environment over at http://rocknsm.io/.
Disclaimer: I'm one of the authors of that open source project.
Hi, all. (sorry for missing this conversation yesterday)
I'm the author of that Bro Kafka logging plugin ( https://github.com/g-clef/KafkaLogger ). If folks have any questions about it or issues with it, please let me know. (Happy to fix bugs if people hit them.)
The way I'm using it right now is that Bro logs to a Kafka topic, and then Logstash pulls the events off Kafka for insertion into Elasticsearch. That's working quite well for me at the moment (several thousand events per second going to Kafka with no noticeable impact on Bro or Kafka).
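For anyone wanting to try it, here is a rough sketch of how a log stream gets pointed at an extra writer from script land. Log::add_filter and Log::Filter are standard Bro logging framework API, but the writer enum (WRITER_KAFKA below) and any broker/topic options are defined by the plugin itself, so treat those names as placeholders and check the plugin's README:

    event bro_init()
        {
        # Send conn.log through the Kafka writer in addition to the
        # default ASCII logs. Log::WRITER_KAFKA stands in for whatever
        # writer enum the plugin actually registers.
        local f: Log::Filter = [$name="kafka-conn", $writer=Log::WRITER_KAFKA];
        Log::add_filter(Conn::LOG, f);
        }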