Logging to elasticsearch (git clone)

Hi:

I followed https://www.bro.org/sphinx/frameworks/logging-elasticsearch.html
with the latest source from git clone, but it doesn't seem to take effect;
it looks like Elasticsearch support has to be built in. How can I build Bro
with Elasticsearch support from the latest source?

I log to JSON files, and then use Logstash to store them in Elasticsearch.
Logstash ships with an embedded Elasticsearch + Kibana.

In Bro, edit init-default.bro and add @load policy/tuning/json-logs
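With json-logs loaded, each log entry becomes one JSON object per line, which is what makes the Logstash json codec below work. A minimal Python sketch of consuming such a line (the sample record is invented for illustration):

```python
import json

# One line from a JSON-formatted Bro log (sample record, invented).
line = '{"ts": 1430600000.123, "id.orig_h": "10.0.0.1", "id.resp_h": "10.0.0.2", "conn_state": "S0"}'

record = json.loads(line)
print(record["conn_state"], record["id.orig_h"])
```

Note that nested record fields arrive as flat dotted keys (id.orig_h) rather than nested objects.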

A config I use for Logstash might be handy for you:

Regards,
Daniel

input {
  file {
    codec => json
    path => "/input/*.log"
    type => "bro_log"
  }
}

filter {

  # Parse the time attribute (ts) as a UNIX timestamp (seconds since epoch)
  # and store it in the @timestamp attribute. This will be used in Kibana later on.
  date {
    match => [ "ts", "UNIX" ]
  }
  translate {
    field => "conn_state"
    destination => "conn_state_full"
    dictionary => [
      "S0", "Attempt",
      "S1", "Established",
      "S2", "Originator close only",
      "S3", "Responder close only",
      "SF", "SYN/FIN completion",
      "REJ", "Rejected",
      "RSTO", "Originator aborted",
      "RSTR", "Responder aborted",
      "RSTOS0", "Originator SYN + RST",
      "RSTRH", "Responder SYN ACK + RST",
      "SH", "Originator SYN + FIN",
      "SHR", "Responder SYN ACK + FIN",
      "OTH", "Midstream traffic"
    ]
  }
  grok {
    match => { "path" => ".*/(?<bro_type>[a-zA-Z0-9]+)\.log$" }
  }
}

output {
  elasticsearch {
    embedded => true
  }
}
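For anyone unsure what the filter stage does, here is a rough Python sketch of the same three steps (date parse, conn_state translation, bro_type extraction); the sample event is invented and the dictionary is abridged:

```python
import re
from datetime import datetime, timezone

# Invented sample event, shaped like one line of a Bro conn log read by the input above.
event = {"ts": 1430600000.0, "conn_state": "S0", "path": "/input/conn.log"}

# date filter: interpret ts as UNIX seconds and store a proper timestamp.
event["@timestamp"] = datetime.fromtimestamp(event["ts"], tz=timezone.utc)

# translate filter: map conn_state to a readable label (abridged dictionary).
conn_states = {"S0": "Attempt", "S1": "Established", "REJ": "Rejected", "OTH": "Midstream traffic"}
event["conn_state_full"] = conn_states.get(event["conn_state"], event["conn_state"])

# grok filter: derive the log type from the file name.
m = re.search(r".*/(?P<bro_type>[a-zA-Z0-9]+)\.log$", event["path"])
if m:
    event["bro_type"] = m.group("bro_type")
```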

Also, you could have a look at this for an alternative way of getting Bro into Logstash.
http://www.appliednsm.com/parsing-bro-logs-with-logstash/

Logging locally and then parsing (the Logstash way) is not really my preference. I have been playing with Docker and created a Docker image for Bro with Elasticsearch. This works great: Bro logs to Elasticsearch directly; only Kibana needs a different timestamp (ts).
To check that your Bro can do Elasticsearch, run:

/usr/local/bro/bin/bro -N Bro::ElasticSearch

should give

Bro::ElasticSearch - ElasticSearch log writer (dynamic, version 1.0)

Set up Elasticsearch logging:
vi /usr/local/bro/share/bro/base/frameworks/logging/main.bro
and set

const enable_local_logging = F

to avoid local logging

vi /usr/local/bro/lib/bro/plugins/Bro_ElasticSearch/scripts/init.bro
and set

## Name of the ES cluster.
const cluster_name = "" &redef;

## ES server.
const server_host = "" &redef;

To get the cluster name and IP, check http://<es-host>:9200/_nodes with your browser.
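The _nodes response is plain JSON, so the two values can also be pulled out programmatically. A sketch, assuming a response shaped roughly like what ES 1.x returns (the body below is an invented, heavily trimmed stand-in):

```python
import json

# Invented, trimmed stand-in for the JSON that GET /_nodes returns on ES 1.x.
body = json.loads('''
{
  "cluster_name": "elasticsearch",
  "nodes": {
    "vQdCc78tSOi": {
      "name": "node-1",
      "http_address": "inet[/172.17.0.3:9200]"
    }
  }
}
''')

cluster_name = body["cluster_name"]          # value for cluster_name in init.bro
node = next(iter(body["nodes"].values()))
print(cluster_name, node["http_address"])    # host/port for server_host
```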

mkdir /usr/local/bro/share/bro/elasticsearch and copy logs-to-elasticsearch.bro from the Bro git source dir aux/plugins/elasticsearch/scripts/Bro/ElasticSearch/ into
/usr/local/bro/share/bro/elasticsearch

add to /usr/local/bro/share/bro/base/init-default.bro

@load elasticsearch/logs-to-elasticsearch

You are now ready to log to elasticsearch

In Kibana use bro-* to get your indices, or check http://<es-host>:9200/_cat/indices?v

Hopefully Bro can log a YYYY-mm-dd HH:MM:ss format for ts; work in progress ...

Regards,

Daniel

It can. :slight_smile:

If you want to make JSON logs globally into ISO8601, you can do...
redef LogAscii::json_timestamps = JSON::TS_ISO8601;
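The difference is just epoch seconds versus an ISO 8601 string. A quick Python illustration of the two renderings (the exact formatting TS_ISO8601 emits may differ slightly from this sketch):

```python
from datetime import datetime, timezone

ts = 1430600000.0  # a Bro-style UNIX timestamp, seconds since the epoch

# Default JSON timestamps: raw epoch seconds, which Kibana does not pick up as a date.
print(ts)

# ISO 8601, the style selected by JSON::TS_ISO8601 (approximate rendering).
iso = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
print(iso)
```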

  .Seth

I did what you said, and the ElasticSearch plugin was installed.

I think I succeeded at first. But after I removed the elasticsearch-1.5.2/data/
directory and tried again, it doesn't work any more.

In logs/current there are only stderr.log and stdout.log (so disabling local logging took effect).

ikfb@ikfb:/usr/local/bro/logs/current$ cat stderr.log
listening on eth0, capture length 8192 bytes

ikfb@ikfb:/usr/local/bro/logs/current$ cat stdout.log
max memory size (kbytes, -m) unlimited
data seg size (kbytes, -d) unlimited
virtual memory (kbytes, -v) unlimited
core file size (blocks, -c) unlimited

It seems to work fine. I assumed that after removing elasticsearch-1.5.2/data/
it could be rebuilt; I didn't change anything on the Bro side. Any suggestions
for debugging why Bro can't connect to Elasticsearch?

I added print statements in share/bro/elasticsearch/logs-to-elasticsearch.bro:

event bro_init() &priority=-5
    {
    if ( server_host == "" )
        return;

    print "before for";

    for ( stream_id in Log::active_streams )
        {
        if ( stream_id in excluded_log_ids ||
             (|send_logs| > 0 && stream_id !in send_logs) )
            next;

        print "after if";

        local filter: Log::Filter = [$name = "default-es",
                                     $writer = Log::WRITER_ELASTICSEARCH,
                                     $interv = LogElasticSearch::rotation_interval];
        Log::add_filter(stream_id, filter);
        }
    }

It doesn't show the messages in the broctl session where I started it. I thought
they might be in current/stdout.log, but I was wrong.

By the way: can we just add these lines to local.bro?

@load elasticsearch/logs-to-elasticsearch
export {
    redef Log::enable_local_logging = F;
    redef LogAscii::json_timestamps = JSON::TS_ISO8601;
}

I have the same problem: it stops after 1549 records (tried this twice), and it
stays that way even after restarting Bro and/or removing all Bro indices
from Elasticsearch.
It's like the config changed and not logging is permanent now. Very strange.
I used Docker to reproduce it.
Here is the diff of the image before and after Elasticsearch logging with Bro.

C /root
C /root/brotest
C /root/brotest/.state
C /root/brotest/.state/.tmp
C /root/brotest/.state/state.bst
C /root/elasticsearch-1.5.2
C /root/elasticsearch-1.5.2/data
C /root/elasticsearch-1.5.2/data/elasticsearch
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/_state
D /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/_state/global-0.st
A /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/_state/global-1.st
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana/0
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana/0/_state
D /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana/0/_state/state-1.st
A /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana/0/_state/state-2.st
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana/0/translog
C /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-1430433284419
D /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/@bro-meta
D /root/elasticsearch-1.5.2/data/elasticsearch/nodes/0/indices/bro-201505011800
A /root/elasticsearch-1.5.2/e.out
C /root/elasticsearch-1.5.2/logs
C /root/elasticsearch-1.5.2/logs/elasticsearch.log
A /root/elasticsearch-1.5.2/logs/elasticsearch.log.2015-05-01
A /root/elasticsearch-1.5.2/o.out
C /tmp
C /tmp/hsperfdata_root
D /tmp/hsperfdata_root/21864
A /tmp/hsperfdata_root/14

The Elasticsearch log:

[2015-05-02 17:49:52,315][INFO ][node ] [Midnight Man] version[1.5.2], pid[14], build[62ff986/2015-04-27T09:21:06Z]
[2015-05-02 17:49:52,316][INFO ][node ] [Midnight Man] initializing …
[2015-05-02 17:49:52,323][INFO ][plugins ] [Midnight Man] loaded [], sites []
[2015-05-02 17:49:56,501][INFO ][node ] [Midnight Man] initialized
[2015-05-02 17:49:56,501][INFO ][node ] [Midnight Man] starting …
[2015-05-02 17:49:56,641][INFO ][transport ] [Midnight Man] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.3:9300]}
[2015-05-02 17:49:56,661][INFO ][discovery ] [Midnight Man] elasticsearch/vQdCc78tSOi-NDV-EQkItg
[2015-05-02 17:50:00,445][INFO ][cluster.service ] [Midnight Man] new_master [Midnight Man][vQdCc78tSOi-NDV-EQkItg][7338ce54205e][inet[/172.17.0.3:9300]], reason: zen-disco-join (elected_as_master)
[2015-05-02 17:50:00,517][INFO ][http ] [Midnight Man] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.3:9200]}
[2015-05-02 17:50:00,518][INFO ][node ] [Midnight Man] started
[2015-05-02 17:50:01,753][INFO ][gateway ] [Midnight Man] recovered [3] indices into cluster_state
[2015-05-02 17:55:19,907][INFO ][cluster.metadata ] [Midnight Man] [bro-201505011800] deleting index
[2015-05-02 17:55:28,711][INFO ][cluster.metadata ] [Midnight Man] [@bro-meta] deleting index
[2015-05-02 18:14:39,947][INFO ][node ] [Midnight Man] stopping …
[2015-05-02 18:14:40,031][INFO ][node ] [Midnight Man] stopped
[2015-05-02 18:14:40,033][INFO ][node ] [Midnight Man] closing …
[2015-05-02 18:14:40,042][INFO ][node ] [Midnight Man] closed
[2015-05-02 18:15:15,298][INFO ][node ] [Magician] version[1.5.2], pid[146], build[62ff986/2015-04-27T09:21:06Z]
[2015-05-02 18:15:15,299][INFO ][node ] [Magician] initializing …
[2015-05-02 18:15:15,308][INFO ][plugins ] [Magician] loaded [], sites []
[2015-05-02 18:15:19,437][INFO ][node ] [Magician] initialized
[2015-05-02 18:15:19,438][INFO ][node ] [Magician] starting …
[2015-05-02 18:15:19,635][INFO ][transport ] [Magician] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.3:9300]}
[2015-05-02 18:15:19,671][INFO ][discovery ] [Magician] elasticsearch/yPmlXzBLQpS-VK8iFOGmfg
[2015-05-02 18:15:23,461][INFO ][cluster.service ] [Magician] new_master [Magician][yPmlXzBLQpS-VK8iFOGmfg][7338ce54205e][inet[/172.17.0.3:9300]], reason: zen-disco-join (elected_as_master)
[2015-05-02 18:15:23,507][INFO ][http ] [Magician] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.3:9200]}
[2015-05-02 18:15:23,507][INFO ][node ] [Magician] started
[2015-05-02 18:15:23,535][INFO ][gateway ] [Magician] recovered [0] indices into cluster_state

I removed the elastic data, the tmp and the .state files and restarted everything, but there is no more Elasticsearch
logging in this image. Sorry, I'd rather have had a nice story here :wink:

Daniel

Hi, which Elasticsearch version are you using? I am using 1.5.2.

I only get indices like:

health status index                 pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .packetbeat-topology    5   1          0            0      3.6kb          3.6kb
yellow open   kibana-int              5   1          6            0     58.2kb         58.2kb
yellow open   .kibana                 1   1          6            0     20.5kb         20.5kb
yellow open   bro-201505040900        5   1         33            0     98.6kb         98.6kb
yellow open   @bro-meta               5   1          1            0      3.4kb          3.4kb
yellow open   packetbeat-2015.05.04   5   1        780            0    693.2kb        693.2kb

No index for protocol analysis.

But it is strange: after I

redef Log::enable_local_logging = T;

it works again. It seems that I need to enable local logging so that Elasticsearch logging can work?