Integrating Bro IDS with Sguil or Snorby

Hi all,

  Sorry if this question sounds stupid, but I am very new to using Bro as an IDS. Is it possible to integrate Bro logs into Sguil, Snorby, or some similar front end?


I'm going to say no with the caveat that we will almost certainly have some sort of integration with those in the future.

If you only look at the output of Bro in those interfaces though, you'd currently be missing out on much of the benefit since Bro does extensive protocol logging. Are you running from our repository or a released version?


I have installed the released version, 1.5.3 ...

> Sorry if this question sounds stupid, but I am very new to using Bro as
> an IDS. Is it possible to integrate Bro logs into Sguil, Snorby, or some
> similar front end?

As Seth pointed out, Bro's not really the same kind of "alerting"
device that Snort is, so it doesn't fit the SIEM mold very well.
However, if you have a log management solution, you can forward your
Bro logs into it. If you look at the email I sent the list two days
ago regarding the Bro cluster quickstart, you'll see a link to my post where I show how to forward Bro logs
using rsyslog. A similar setup could be achieved with syslog-ng if
that's already on the box. Hopefully your log management solution
will let you explore Bro's outputs a bit better and provide some
alerting capabilities. Otherwise, don't ever be afraid to plow into
the logs with grep and sort.
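For reference, a minimal rsyslog sketch along the lines of my post might look something like the fragment below. The file path, tag, facility, and destination host are all assumptions here; adjust them to your own layout, and see the blog post for the authoritative config.

```
# rsyslog.conf fragment (legacy directive syntax of that era)
# Load the text-file input module
$ModLoad imfile

# Watch one Bro log file -- path and tag are assumptions
$InputFileName /usr/local/bro/logs/conn.log
$InputFileTag bro_conn:
$InputFileStateFile state-bro-conn
$InputFileFacility local5
$InputRunFileMonitor

# Forward everything on that facility to the log management host (TCP)
local5.* @@logserver.example.com:514
```

You would repeat the $InputFile* stanza for each Bro log you care about.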

Thanks Martin. I have done some Google searches about this, and I think the best option is to use an OSSEC agent, and then forward all logs from the OSSEC server to a Splunk server to collect statistics, etc.

Another option is to use rsyslog, as in your blog post, like this:

rsyslog -> ossec_agent -> ossec_server -> splunk
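On the OSSEC agent side, I think this would just be a matter of pointing ossec.conf at the Bro log files. A rough sketch, where the log path is an assumption:

```
<!-- ossec.conf fragment on the agent; the path is an assumption,
     adjust it to wherever your Bro instance writes its logs -->
<localfile>
  <log_format>syslog</log_format>
  <location>/usr/local/bro/logs/conn.log</location>
</localfile>
```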

What is your opinion?

I want to support Splunk as a direct output for Bro eventually; we already have some users who are very successfully using that model. With my "ext" scripts, people have been using the Splunk forwarder to send those logs directly to Splunk, which automatically does field extraction and correctly recognizes the epoch timestamps at the beginning of the lines for what they are.

We are planning on doing closer integration with OSSEC once we figure out what that means, but what does that four-step pipeline gain you over just using the Splunk forwarder directly?
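Using the forwarder directly is mostly just a monitor stanza in inputs.conf. A rough sketch, where the log directory, sourcetype name, and index are assumptions to adapt to your deployment:

```
# inputs.conf on the Splunk universal forwarder; the path,
# sourcetype, and index are assumptions -- point the monitor
# stanza at your Bro log directory
[monitor:///usr/local/bro/logs]
sourcetype = bro
index = main
```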

BTW, I don't really recommend people begin using my "ext" scripts at this point. We're going to be doing a new release very soon and all of the scripts have incorporated "lessons learned" from my experience in writing the ext scripts. It makes more sense to follow our in-development quickstart guide[1] and Martin's recent blog post[2].



Another option is to use the syslog output and pipe that into prelude-ids/prewikka for handling. I've done that with 1.5.x using its native syslog output. I've been experimenting with doing the same with the development version.

Prelude is nice because it's a fairly distributed model, and it encrypts traffic from sensors to manager/display. And its development bits are Python, so there's potential for much tighter integration.

Distributed is nice too, if you tend to move back and forth between using multiple specialized Bro clusters and the one all-uniting fearsome Bro mega-cluster.

The model I tend to use looks like this:

But picture it with HIDS output, commercial IDS output, and other syslog output all correlated by IP in my diagram.