Web GUI for Bro?

Got Bro 2.4.1 working on a RHEL 6 system. Can anyone suggest what I should use as a web GUI for Bro? What are the best options out there? NOTE: my version of Bro was compiled from source.

You might consider an ELK stack as an open-source solution. If your traffic is light, there is a free version of Splunk out there.

Adjust your filebeat yaml file to pick up the Bro logs: /usr/local/bro/logs/current/*.log

https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
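
For reference, a minimal filebeat.yml sketch in the Filebeat 1.x style that tutorial uses; the Logstash host and port here are assumptions, so point them at your own shipper:

filebeat:
  prospectors:
    -
      paths:
        - /usr/local/bro/logs/current/*.log
      input_type: log
output:
  logstash:
    # assumed Logstash endpoint; adjust to your setup
    hosts: ["localhost:5044"]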

Packetsled makes a solid commercial solution built on Bro.

Patrick Kelley, CISSP

Hyperion Avenue Labs

(770) 881-6538

The limit to which you have accepted being comfortable is the limit to which you have grown. Accept new challenges as an opportunity to enrich yourself and not as a point of potential failure.

Thanks, all. I am looking into ELK.

Hi,

Check out my Docker project.

https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/

The quick way:

export DOCKERHOST=":8080"
wget https://raw.githubusercontent.com/danielguerra69/bro-debian-elasticsearch/master/docker-compose.yml
docker-compose pull
docker-compose up

You can send pcap data to port 1969 with netcat: "nc dockerip 1969 < mypcapfile"

After this, open your browser to dockerip:5601 for Kibana; it's preconfigured with some
queries and dashboards.

This ELK/Bro combo is turning out to be more of a learning curve than I had hoped for. I can get the logs over to Elasticsearch and into Kibana, but I can only see them on the "Discover" tab. I save the search to use with a visualization, but it only wants to aggregate by "count", and it's not breaking down the connections in conn.log and graphing them like I had hoped. Here is my logstash conf file.

input {
  stdin { }
  # tail the live Bro logs
  file {
    path => "/opt/bro/logs/current/*.log"
    start_position => "beginning"
  }
}

filter {
  # only run grok on lines that match the expected field layout
  if [message] =~ /^(\d+\.\d{6}\s+\S+\s+(?:[\d.]+|[\w:]+|-)\s+(?:\d+|-)\s+(?:[\d.]+|[\w:]+|-)\s+(?:\d+|-)\s+\S+\s+\S+\s+\S+\s+\S+\s+[^:]+::\S+\s+[^:]+::\S+\s+\S+(?:\s\S+)*$)/ {
    grok {
      patterns_dir => "/opt/logstash/custom_patterns"
      match => {
        "message" => "%{291009}"
      }
      add_field => [ "rule_id", "291009" ]
      add_field => [ "Device Type", "IPSIDSDevice" ]
      add_field => [ "Object", "Process" ]
      add_field => [ "Action", "General" ]
      add_field => [ "Status", "Informational" ]
    }
  }

  # translate filters (currently disabled) for tagging known-bad IPs and hashes
  # translate {
  #   field => "evt_dstip"
  #   destination => "malicious_IP"
  #   dictionary_path => "/opt/logstash/maliciousIPV4.yaml"
  # }
  # translate {
  #   field => "evt_srcip"
  #   destination => "malicious_IP"
  #   dictionary_path => "/opt/logstash/maliciousIPV4.yaml"
  # }
  # translate {
  #   field => "md5"
  #   destination => "maliciousMD5"
  #   dictionary_path => "/opt/logstash/maliciousMD5.yaml"
  # }
  # date {
  #   match => [ "start_time", "UNIX" ]
  # }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
In Kibana under the Discover tab I can see my messages from conn.log. How can I get this data graphed and broken down more like the connection summary emails?

Mod this to your liking and see how it goes:

I started using the csv filter instead of grok. Just change the separator to a literal tab. Also make sure not to use "." in the column names. I just copied the Bro field names.

   if [type] == "bro_conn" {
     csv {
       columns => [ "ts","uid","orig_h","orig_p","resp_h","resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","local_resp","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents","peer_descr","orig_cc","resp_cc" ]
       # the separator below is a literal tab character, which may render as a space
       separator => "	"
     }
   }
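
If Kibana only offers a "Count" aggregation, it is usually because the fields were indexed as strings. Here is a minimal follow-up sketch, assuming the column names from the csv filter above, that sets the event timestamp from Bro's ts field and converts the counters to numbers so visualizations can sum or average them:

   if [type] == "bro_conn" {
     date {
       # Bro timestamps are epoch seconds
       match => [ "ts", "UNIX" ]
     }
     mutate {
       # numeric types let Kibana sum/average instead of just counting documents
       convert => {
         "orig_bytes" => "integer"
         "resp_bytes" => "integer"
         "orig_pkts"  => "integer"
         "resp_pkts"  => "integer"
       }
     }
   }

You may need to re-create the index (or refresh the field list under Kibana's Settings tab) before the new types show up. After that, a Terms aggregation on resp_h or service with a Sum of orig_bytes as the metric should get you close to the breakdown in the connection summary emails.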

Oh yeah, that's a lot easier… thanks for that, Craig!

James