I’m interested in dumping my Bro logs into an Elasticsearch instance and, based on what I’ve been able to learn so far, it seems I have two options:
- use the Elasticsearch writer (which the documentation says should not be used in production, as it doesn’t do any error checking)
- or use Logstash to read the Bro logs directly and ship them to Elasticsearch externally
The Logstash route seems better to me, since I should be able to massage the data into more “user friendly” fields that can be easily queried in Elasticsearch.
So my question is: based on your experience, which is the better option? And if you do use Logstash, can you share your Logstash config?
Thanks in advance,
…I just found a website with a tutorial on how to parse Bro logs with Logstash, AND it points to the config used in the Security Onion distro.
So I’d just like to know your thoughts on the Elasticsearch writer vs. Logstash.
I thought the ES writer had some issues that needed to be worked out around indexes or something. Seth?
Some things to think about:
- Logstash is easy, but all that ease comes at a performance cost.
a. If you go this way, you can probably make it easier still by having Bro write its logs as JSON for Logstash to send to Elasticsearch.
i. This will put you in an odd spot compared to other Bro deployments; not many people log JSON. If you do this, you’ll want to use jq as a replacement for bro-cut.
b. Make sure you look at Heka as an alternative.
- Some people have had success with the NSQ writer and using NSQ, but that is also not what most people would consider a “production” deployment.
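To illustrate the jq point above: with JSON logs, the field extraction that bro-cut does for TSV logs can be done with jq instead. A quick sketch (the field names are standard conn.log fields, but the sample record here is made up):

```shell
# Pull the originator IP out of a JSON-formatted conn.log record.
# The TSV/bro-cut equivalent would be: bro-cut id.orig_h < conn.log
printf '%s\n' '{"ts":1416253.5,"id.orig_h":"10.0.0.1","id.resp_h":"10.0.0.2","resp_bytes":42}' \
  | jq -r '.["id.orig_h"]'
```

The `.["key"]` form is needed because Bro field names contain dots; `-r` strips the JSON quoting so the output matches what bro-cut would print.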
If you do nothing else, please use a recent version of Elasticsearch. Older versions of Elasticsearch were MUCH worse on performance and lacked features that are very nice to have. You’ll want to look into tuning Elasticsearch as well. There are MANY articles out there on how to tune Elasticsearch for indexing large data volumes.
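For example, two common tweaks for heavy indexing are disabling the _all field and relaxing the refresh interval. A sketch against the Elasticsearch 1.x REST API (the template name, index pattern, and values here are illustrative, not recommendations for your environment):

```
curl -XPUT 'http://localhost:9200/_template/bro' -d '{
  "template": "bro-*",
  "settings": {
    "index.refresh_interval": "30s"
  },
  "mappings": {
    "_default_": {
      "_all": { "enabled": false }
    }
  }
}'
```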
Finally, keep in mind that how you keep Bro’s logs can vary a lot depending on the size of your environment and your tolerance for risk. If you can’t risk losing logs while Elasticsearch is down, you’ll want to look into a queuing system like Redis, NSQ, or RabbitMQ. Everyone seems to have their pet implementation of AMQP, so I’ll let you sort that one out. This conversation could really go on forever… feel free to hop on #bro on freenode if you want to chat.
I’ve used Bro and Logstash with good success. In one setup everything is on one machine; in the other, rsyslog gets the data to Logstash remotely. I tried going direct Bro -> Elasticsearch, but Logstash creates logstash-* indices and Bro creates bro-* indices, and Kibana had a hard time seeing both. I’m currently just piping conn.log, but here’s my Logstash entry:
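(For anyone reading along: a representative sketch of that kind of config, not the poster’s original. It assumes Bro’s default tab-separated conn.log and a Logstash 1.x file input; the column list follows the stock conn.log #fields header of the time, so check it against yours.)

```
input {
  file {
    path => "/usr/local/bro/logs/current/conn.log"
    type => "bro_conn"
  }
}
filter {
  # Skip Bro's #-prefixed header and comment lines.
  if [message] =~ /^#/ { drop { } }
  csv {
    separator => "	"   # literal tab
    columns => [ "ts", "uid", "id.orig_h", "id.orig_p", "id.resp_h", "id.resp_p",
                 "proto", "service", "duration", "orig_bytes", "resp_bytes",
                 "conn_state", "local_orig", "missed_bytes", "history",
                 "orig_pkts", "orig_ip_bytes", "resp_pkts", "resp_ip_bytes",
                 "tunnel_parents" ]
  }
}
output {
  elasticsearch { host => "localhost" }
}
```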
An interesting gotcha is that the above sees the byte counts as strings rather than numeric values, so I had to add a mutate filter to convert them:
mutate {
  convert => [ "resp_bytes", "integer" ]
  convert => [ "resp_ip_bytes", "integer" ]
  convert => [ "orig_bytes", "integer" ]
  convert => [ "orig_ip_bytes", "integer" ]
}
Hope that helps…feel free to ping me off list if you need any help.
I keep Bro logging to files just to have a local cache that’s easy for quick browsing/querying, and to offload Bro from writing to ES. Heka reads the Bro logs, transforms them with Lua scripts inside a sandbox, and can output to whatever you want. I think there’s an ES output for Heka too.
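From what I remember of Heka’s config, that pipeline gets wired up in hekad’s TOML file along these lines (section names and parameters should be checked against the Heka docs for your version; the paths and the Lua decoder filename here are hypothetical):

```
[BroConnInput]
type = "LogstreamerInput"
log_directory = "/usr/local/bro/logs/current"
file_match = 'conn\.log'
decoder = "BroConnDecoder"

[BroConnDecoder]
type = "SandboxDecoder"
filename = "lua_decoders/bro_conn.lua"

[ESJsonEncoder]

[ElasticSearchOutput]
message_matcher = "Logger == 'BroConnInput'"
server = "http://localhost:9200"
encoder = "ESJsonEncoder"
```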
I’ll go ahead and do the long response.
This has been an area of confusion for quite some time, and that’s been my fault to a great degree. I’ve been meaning to provide guidance on the topic and offer easy configurations, but it’s been difficult to create that. I’ll break down the current status of a number of different methods.
== Bro -> ES (with ES log writer) ==
This was the original output method and has been in Bro since 2.1 I think. It was written pretty quickly because we assumed that we would just be able to shove logs at ES as fast as we could and it could accept them. This method absolutely works on many small deployments and is very easy. Just load one script and away you go.
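If I remember the script name correctly, enabling it is a one-liner in local.bro (the exact path may differ by Bro version; check the policy/tuning directory in your install):

```
# local.bro
@load tuning/logs-to-elasticsearch
```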
The problem with this method is that on larger deployments the log messages get queued up in Bro as the main Bro thread shuttles them over to the thread that actually transfers the logs to ES. People typically think this is a memory leak, but it’s just that too many logs are being held in memory without getting a chance to be flushed, because ES is taking so long to respond. It’s not a fun result, and we’ve been stuck at this dilemma for quite some time.
== Bro -> NSQ -> Forwarding tool -> ES ==
This seems to be the most promising mechanism right now. We take advantage of the fact that NSQ spools to disk to deal with any memory overload issues, and it always accepts logs from Bro quickly, which keeps Bro’s log queues nicely flushed. There is a prototype of a forwarding tool, but it’s still pretty rough; I haven’t had time to get back to it to clean it up and write documentation. (It’s written in Go; if anyone’s interested in taking this on, get in touch with me!)
It looks like this method works well and can cope with ES becoming overloaded without causing anything to crash. There are still some larger questions that we need to answer relating to ES tuning because the default template that ES uses for Bro logs does a lot of stuff that we don’t need and causes a lot of unnecessary overhead. Vlad Grigorescu has done some work in this area, but in my opinion we still need to explore ways to automate this process.
== Bro -> JSON logs -> logstash -> ES ==
Some people are using this because it’s really easy to setup and somewhat resilient. At least Bro doesn’t get overwhelmed because it’s just writing logs to disk. I do recommend that people write JSON logs and try to avoid creating filters with logstash to parse the Bro logs. It’s just going to increase your work for this one small part of your overall architecture. The logstash config with JSON logs should be *very* short (I don’t have it offhand).
To get Bro to output JSON logs, it just takes putting this in local.bro (or some other loaded script)…
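The snippet is roughly this, assuming Bro 2.2 or later, where the ASCII writer grew JSON support (verify the option name against your version):

```
# local.bro -- have the ASCII writer emit JSON instead of TSV
redef LogAscii::use_json = T;
```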
I really don’t know about the performance of logstash with really high log volume, but I don’t have high hopes for it either.
I hope this helps with some of the background.
Yet another option: use nxlog on the Bro node. Have it forward the logs to Logstash as raw JSON and use the json_lines codec in Logstash to feed to Elasticsearch.
The reason I like this option is that it allows you to do complex processing locally rather than use a lot of complex grok filters in Logstash, which can be really slow. Nxlog can do type conversions, regex filters, and a lot more. It also keeps your Logstash config simple. I configure nxlog to use a separate TCP or UDP output using JSON formatting, then my Logstash config just looks like:
input {
  tcp {
    port => 5000
    type => "bro_dns"
    codec => json_lines
  }
}
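For reference, the nxlog side of this setup looks roughly like the following (module and directive names are from nxlog’s docs as I recall them; the hostname and file path are made-up examples, and this assumes Bro is already writing JSON logs):

```
# nxlog.conf (sketch)
<Extension json>
    Module  xm_json
</Extension>

<Input bro_dns>
    Module  im_file
    File    "/usr/local/bro/logs/current/dns.log"
    # Drop Bro's #-prefixed header/comment lines.
    Exec    if $raw_event =~ /^#/ drop();
</Input>

<Output logstash>
    Module  om_tcp
    Host    logstash.example.com
    Port    5000
</Output>

<Route bro_to_logstash>
    Path    bro_dns => logstash
</Route>
```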
I would love to have the native Elasticsearch writer fixed up and blessed for production use, though!
Everyone, since Seth’s answer was the most complete one, I’ll just reply to this one and talk about the other options you guys kindly pointed me to!