Cannot send logs to their individual Kafka topics

Hello folks:

I have successfully sent everything to a single remote Kafka topic from a local Bro machine, and the following is my local.bro file to make that happen:

##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

#@load packages

@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
redef Kafka::send_all_active_logs = T;
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

However, when I change that to write logs to their individual Kafka topics, I get an error message in stderr.log. The following is my updated local.bro:

##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

#@load packages

#@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
#redef Kafka::send_all_active_logs = T;
#redef Kafka::tag_json = T;
#redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

From a quick look through the kafka writer, it appears that most options can't be passed through that way; the writer only pays attention to kafka config options set through the Kafka::kafka_conf variable. It was just a quick skim, but that's how it looked to me.
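To illustrate Seth's point, a minimal sketch of where the broker settings belong (the broker address is a placeholder, and "client.id" is just an example of another librdkafka global option you could set the same way):

```
@load packages

# librdkafka settings go through Kafka::kafka_conf, not through a
# logging filter's $config table.
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "XX.XX.XX.XX:9092",
    ["client.id"] = "bro"
);
```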

   .Seth

Are you using master? The easiest way to fix this is likely to add a key of "topic_name" with a value of "dns" to your $config table, similar to what is shown here. Please let me know if that works for you.
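For reference, that filter configuration looks roughly like this in local.bro (this mirrors the patch later in this thread; the "dns" topic name is just the example used here):

```
event bro_init() &priority=-10
{
    # Send DNS logs to their own Kafka topic named "dns"
    local dns_filter: Log::Filter = [
        $name = "kafka-dns",
        $writer = Log::WRITER_KAFKAWRITER,
        $config = table(["topic_name"] = "dns"),
        $path = "dns"
    ];
    Log::add_filter(DNS::LOG, dns_filter);
}
```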

There is a known issue in master where the plugin does not fall back to using $path as the destination topic name. I have a PR open for it, but unfortunately I haven't had much time to finish it (it is just pending some btests; functionally it is done) and get it merged.

I am using master.

I changed the beginning of my local.bro as follows and did a "broctl check" and "broctl deploy":

#@load packages

#@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
#redef Kafka::send_all_active_logs = T;
#redef Kafka::tag_json = T;
#redef Kafka::kafka_conf = table(["metadata.broker.list"] = "XX.XX.XX.XX:9092");

Hello again:

I tried the script from the web site, and it still fails the check:

##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!

#@load packages

#@load /usr/local/bro/lib/bro/plugins/packages/metron-bro-plugin-kafka/scripts/Apache/Kafka
#redef Kafka::send_all_active_logs = T;
#redef Kafka::tag_json = T;
#redef Kafka::kafka_conf = table(["metadata.broker.list"] = "13.88.224.129:9092");

To run a local proof of concept and see a working config, apply the patch below to master, then run ./run_end_to_end.sh --kafka-topic=dns from the docker/ folder (it only requires docker and bash > 4). The issue is, as Seth said earlier, that you need to configure metadata.broker.list in Kafka::kafka_conf, not in the logging filter's $config table (although we could likely add that option pretty easily; feel free to open a ticket at https://issues.apache.org/jira/browse/METRON-2060?filter=-4&jql=project%20%3D%20METRON%20order%20by%20created%20DESC).

If you're going to run up the PoC and have built the plugin's bro docker container on your computer recently, you can add --skip-docker-build to speed things up, though it will need to be built the first time around at least. If you want to poke around in the container running bro after things are up, run ./scripts/docker_execute_shell.sh from the docker/ folder for convenience and it will drop you into a shell. Also, don't forget to run ./finish_end_to_end.sh from docker/ when you're done to clean everything up.

Our docker testing environment is currently limited to testing one kafka topic at a time, but this same approach should work if you configure multiple filters with different topics specified. I'm doing exactly this in one of my bro clusters using master of the plugin.
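Putting those steps together, the PoC workflow is roughly the following (all commands are run from the plugin repo's docker/ folder, using the scripts mentioned above):

```
# First run: builds the containers and runs the end-to-end test
./run_end_to_end.sh --kafka-topic=dns

# Subsequent runs: reuse the previously built bro container
./run_end_to_end.sh --kafka-topic=dns --skip-docker-build

# Optional: drop into a shell in the running bro container
./scripts/docker_execute_shell.sh

# When finished: tear everything down
./finish_end_to_end.sh
```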


diff --git a/docker/in_docker_scripts/configure_bro_plugin.sh b/docker/in_docker_scripts/configure_bro_plugin.sh
index c292504..afdd0ad 100755
--- a/docker/in_docker_scripts/configure_bro_plugin.sh
+++ b/docker/in_docker_scripts/configure_bro_plugin.sh
@@ -28,13 +28,22 @@ shopt -s nocasematch
echo "Configuring kafka plugin"
{
echo "@load packages"
- echo "redef Kafka::logs_to_send = set(HTTP::LOG, DNS::LOG, Conn::LOG, DPD::LOG, FTP::LOG, Files::LOG, Known::CERTS_LOG, SMTP::LOG, SSL::LOG, Weird::LOG, Notice::LOG, DHCP::LOG, SSH::LOG, Software::LOG, RADIUS::LOG, X509::LOG, Known::DEVICES_LOG, RFB::LOG, Stats::LOG, CaptureLoss::LOG, SIP::LOG);"
- echo "redef Kafka::topic_name = \"bro\";"
+ echo "redef Kafka::topic_name = \"\";"
echo "redef Kafka::tag_json = T;"
echo "redef Kafka::kafka_conf = table([\"metadata.broker.list\"] = \"kafka:9092\");"
- echo "redef Kafka::logs_to_exclude = set(Conn::LOG, DHCP::LOG);"
echo "redef Known::cert_tracking = ALL_HOSTS;"
echo "redef Software::asset_tracking = ALL_HOSTS;"
+ echo 'event bro_init() &priority=-10
+{
+# handles DNS
+local dns_filter: Log::Filter = [
+$name = "kafka-dns",
+$writer = Log::WRITER_KAFKAWRITER,
+$config = table(["topic_name"] = "dns"),
+$path = "dns"
+];
+Log::add_filter(DNS::LOG, dns_filter);
+}'
} >> /usr/local/bro/share/bro/site/local.bro

# Load "known-devices-and-hostnames.bro" which is necessary in bro 2.5.5 to

Let me know if that works for you or if you have any other questions

Sorry, I was in a rush to send that prior email out. I should have mentioned that there are actually two issues with your original config, and the example above fixes both of them: one is the bug I mentioned earlier, and the other is the issue Seth mentioned.

Jon Zeolla