logs-to-elasticsearch.bro error

Dear all,

I use logs-to-elasticsearch.bro to send logs to ES, but it is not working.

ES error logs:

[2016-03-25 17:30:52,957][DEBUG][action.bulk ] [node-1] [whbro-201603251500][1] failed to execute bulk item (index) index {[whbro-201603251500][dns][AVOtHLQHooGOx5uLgLSQ], source[{"_timestamp":1458898236411,"ts":1458898206267,"uid":"ClbNI74bIcRQ8Gs6Wc","id.orig_h":"10.100.78.88","id.orig_p":137,"id.resp_h":"10.100.79.255","id.resp_p":137,"proto":"udp","trans_id":47282,"query":"ISATAP","qclass":1,"qclass_name":"C_INTERNET","qtype":32,"qtype_name":"NB","AA":false,"TC":false,"RD":true,"RA":false,"Z":1,"rejected":false}]}

MapperParsingException[Field [_timestamp] is a metadata field and cannot be added inside a document. Use the index API request parameters.]

at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:213)

at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:131)

at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)

at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:304)

at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:500)

at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:481)

at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)

at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)

at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:326)

at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:119)

at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)

at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)

at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)

at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)

at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)

at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)

at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)
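The exception above is Elasticsearch 2.x rejecting the `_timestamp` metadata field because it appears inside the document body (`_source`); ES 2.x only accepts such fields as index API request parameters on the bulk action line. As a rough illustration (not the plugin's actual fix — the helper name and field list are mine), the sender would need to split metadata fields out of the document before bulk indexing:

```python
# Sketch: ES 2.x treats fields like "_timestamp" as metadata and refuses
# them inside _source. Split them out so they can be passed as bulk-action
# parameters instead. METADATA_FIELDS and split_metadata are illustrative.
METADATA_FIELDS = {"_timestamp", "_ttl"}

def split_metadata(doc):
    """Return (clean_doc, params): the document without metadata fields,
    plus the stripped fields to supply on the bulk action line."""
    clean = {k: v for k, v in doc.items() if k not in METADATA_FIELDS}
    params = {k: v for k, v in doc.items() if k in METADATA_FIELDS}
    return clean, params

# Abridged version of the failing document from the log above:
doc = {"_timestamp": 1458898236411, "ts": 1458898206267, "query": "ISATAP"}
clean, params = split_metadata(doc)
print(clean)   # safe to send as _source
print(params)  # belongs in the bulk action metadata, not the document
```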

Bro config file:

/usr/local/bro/lib/bro/plugins/Bro_ElasticSearch/scripts/init.bro

module LogElasticSearch;

export {
	## Destination for the ES logs. Valid options are
	## "direct" to directly connect to ES and "nsq" to
	## transfer the logs into an nsqd instance.
	const destination = "direct" &redef;

	## Name of the ES cluster.
	const cluster_name = "my-application" &redef;

	## ES server.
	const server_host = "10.100.79.10" &redef;

	## ES port.
	const server_port = 9200 &redef;

	## Name of the ES index.
	const index_prefix = "testooo" &redef;

	## Should the index names be in UTC or in local time?
	## Setting this to true would be more compatible with Kibana and other tools.
	const index_name_in_utc = F &redef;

	## Format for the index names.
	## Setting this to "%Y.%m.%d-%H" would be more compatible with Kibana and other tools.
	#const index_name_fmt = "%Y%m%d" &redef;
	const index_name_fmt = "%Y%m%d%H%M" &redef;

	## The ES type prefix comes before the name of the related log.
	## e.g. prefix = "bro_" would create types of bro_dns, bro_software, etc.
	const type_prefix = "" &redef;

	## The time before an ElasticSearch transfer will timeout. Note that
	## the fractional part of the timeout will be ignored. In particular,
	## time specifications less than a second result in a timeout value of
	## 0, which means "no timeout."
	const transfer_timeout = 2secs;

	## The batch size is the number of messages that will be queued up before
	## they are sent to be bulk indexed.
	const max_batch_size = 1000 &redef;

	## The maximum amount of wall-clock time that is allowed to pass without
	## finishing a bulk log send. This represents the maximum delay you
	## would like to have with your logs before they are sent to ElasticSearch.
	const max_batch_interval = 1min &redef;

	## The maximum byte size for a buffered JSON string to send to the bulk
	## insert API.
	const max_byte_size = 1024 * 1024 &redef;

	## If the "nsq" destination is given, this is the topic
	## that Bro will push logs into.
	const nsq_topic = "bro_logs" &redef;
}
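For reference, the index names seen in the error logs ("whbro-201603251500", "mzh-201603190900") appear to be built from `index_prefix`, a dash, and the current time formatted with `index_name_fmt`. A minimal sketch of that derivation (the helper name is mine, not the plugin's):

```python
# Sketch: derive an ES index name from the Bro config's index_prefix and
# index_name_fmt, assuming the pattern "<prefix>-<strftime(fmt)>" seen in
# the error logs above.
from datetime import datetime

def index_name(prefix, fmt, when):
    return prefix + "-" + when.strftime(fmt)

name = index_name("testooo", "%Y%m%d%H%M", datetime(2016, 3, 25, 15, 0))
print(name)  # -> testooo-201603251500
```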

Hi,

To make this work you need some patches,
or use an Elasticsearch version lower than 2 (e.g. 1.7).

I made a Docker image for this:
https://hub.docker.com/r/danielguerra/bro-debian-elasticsearch/
In the git repo there is a folder bro-patch:
https://github.com/danielguerra69/bro-debian-elasticsearch.git

Regards,

Daniel

Hi,

I installed the patch you provided, and a new error appeared. By the way: I did not find the krb-main file in the Bro source or the ElasticSearch plugin source, so I did not apply the krb-main.patch.

[2016-03-27 10:06:31,295][DEBUG][action.bulk ] [node-1] [mzh-201603190900][0] failed to execute bulk item (index) index {[mzh-201603190900][http][AVO10phhcNJqDxEYDvYi], source[{"ts":"2016-03-19T09:48:21.250090Z","uid":"CU0uvD4pYTE2YeoKh","id.orig_h":"222.246.191.234","id.orig_p":11325,"id.resp_h":"119.143.122.225","id.resp_p":80,"trans_depth":1,"method":"GET","host":"xxxxxx.com.cn","uri":"/img/xxxxxx/Uploads/2015-08-24/55daef788341b.jpg","user_agent":"WeChat/6.3.13.17 CFNetwork/758.2.8 Darwin/15.0.0","request_body_len":0,"response_body_len":15201,"status_code":200,"status_msg":"OK","tags":[],"resp_fuids":["F3xb6m3Ffqs0QW1AI4"],"resp_mime_types":["image/jpeg"]}]}

MapperParsingException[Field name [id.orig_h] cannot contain '.']

at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:276)

at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)

at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:138)

at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:119)

at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:100)

at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:435)

at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)

at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)

at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)

at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)

at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)

at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)
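This second exception is Elasticsearch 2.x refusing field names that contain a dot, such as Bro's "id.orig_h" (the restriction was relaxed again in later ES releases). One common workaround, sketched below with a helper name of my own choosing, is to rename the dotted keys before indexing:

```python
# Sketch: ES 2.x rejects '.' in field names, so Bro keys like "id.orig_h"
# must be renamed (here to "id_orig_h") before the document is indexed.
# dedot_keys is an illustrative helper, not the plugin's actual code.
def dedot_keys(doc, repl="_"):
    """Replace '.' in top-level field names so ES 2.x accepts the mapping."""
    return {k.replace(".", repl): v for k, v in doc.items()}

# Abridged version of the failing HTTP document from the log above:
doc = {"id.orig_h": "222.246.191.234", "id.resp_p": 80, "method": "GET"}
print(dedot_keys(doc))  # -> {'id_orig_h': '222.246.191.234', 'id_resp_p': 80, 'method': 'GET'}
```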

Daniel (and others), thank you for persisting with getting data into data stores that are currently having trouble with Bro data. I have some changes queued up and I'm hoping to get a bit more work done in the upcoming week or two which should make you very happy and make it possible to use mainline Bro again. :-)

  .Seth

Hi Seth,

That's great news because I think Elasticsearch is very useful
in combination with kibana. I wanted to use the latest version
of E/K because of the map projection. With the plugin I can do
continuous processing instead of batch processing with logstash.

Regards,

Daniel

Use this patch for the elasticsearch plugin

ElasticSearch.cc.patch <https://github.com/danielguerra69/bro-debian-elasticsearch/blob/master/bro-patch/ElasticSearch.cc.patch>