Broker has landed in master, please test

We merged the new Broker version into Bro master yesterday. As this is
a major change to one of Bro's core components, I wanted to send a
quick heads-up here, along with a couple of notes.

With this merge we are completely replacing usage of Bro's classic
communication system with Broker. All the standard scripts have been
ported over, and BroControl has been adapted as well. The old
communication system remains available for the time being, but is now
deprecated and scheduled to be removed in Bro 2.7 (not 2.6). Broccoli
is now turned off by default.

With such a large change, I'm sure there'll be some more kinks to iron
out still; that's where we need everybody's help. If you have an
environment where you can test drive new Bro versions, please give
this a try. We're interested in any feedback you have, both specific
issues you encounter (best to file tickets) and general experiences
with the new version, including in particular any observations about
performance (best to send to this list).

From a user's perspective, not much should even be changing; most of
the new stuff is under the hood. The exception is custom scripts that
do communication themselves; they need to be ported over to Broker.
Documentation for that is here, including a porting guide for existing
scripts. Let us know if there's anything missing there that would be
helpful. The Broker library itself comes with a new user manual as
well; we'll get that online shortly.

One specific note on upgrading existing Bro clusters: the meaning of
"proxy" has changed. Proxies still exist, but they play a quite
different role now. If you're currently using more than one proxy, we
recommend going back to one; that'll most likely be fine with the
standard scripts (and if not, please let us know!).

Many thanks to Jon Siwek for the recent integration work tying up all
the loose ends and getting Broker mergeable. Also thanks to those who
have tested it already from the actor-system branch.


Do you want dumb programming questions asked here or on the main Bro list? While most people might not need it yet, discussion there might help get more people interested or help avoid issues with custom policy conversion.

For here though, can you elaborate on going down to one proxy? My understanding still isn't strong, but that seems to be opposed to the idea of using Cluster::publish_hrw to spread memory across proxies.


I think those questions belong on the main list, which is for using Bro and its language. This list is really more about the development of Bro itself.

The idea is to start with a single proxy and then scale your deployment based on what you actually need. There may not be that great a need at the moment, as the default scripts that ship with Bro do not widely use the HRW/pool/partitioning APIs yet.

By default, it's currently just the Software framework that will use Cluster::publish_hrw. I also plan to soon change the Intel framework to make use of Cluster::relay_rr.
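
For anyone unfamiliar with it, Cluster::publish_hrw routes each key to a node via highest-random-weight (rendezvous) hashing: every node gets a score per key, the highest score wins, so keys spread evenly and only a fraction of them move when the pool changes. A minimal sketch of the idea in shell (this is just the concept; the proxy names and the cksum-based hash are illustrative, not Bro's actual implementation):

```shell
# Rendezvous (HRW) hashing sketch: hash (key, node) for every node and
# pick the node with the highest score. Deterministic for a given key,
# independent of the order nodes are listed in.
hrw_pick() {
    key=$1; shift
    best=""; best_score=-1
    for node in "$@"; do
        # cksum gives a stable CRC-32 of "key:node" as the score
        score=$(printf '%s:%s' "$key" "$node" | cksum | cut -d' ' -f1)
        if [ "$score" -gt "$best_score" ]; then
            best_score=$score
            best=$node
        fi
    done
    printf '%s\n' "$best"
}

# prints whichever proxy this key maps to
hrw_pick "some-key" proxy-1 proxy-2 proxy-3
```

With more than one proxy in the pool, different keys land on different proxies, which is the memory-spreading effect mentioned above; with a single proxy, every key trivially maps to it.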

There's also an option in the various Known::* scripts for users to opt-in to an alternate implementation that uses HRW + tables instead of the default approach of data stores.

Different sites could also have different requirements/usage of those default scripts and it's all too new to give better suggestions other than "try one proxy, add more as needed".

- Jon

Just to give context here, the reason I sent the original mail about
Broker here, including the request for feedback, was to limit the
initial round of testing to folks quite familiar with Bro and its
internals. That gives us a chance to spot any obvious issues quickly
before annoying everybody else. :-) But discussing it at either place
is fine of course, whatever works best for folks. If things seem to
work, we should definitely also announce the merge more broadly.


Have this running on a few clusters now; so far it's been really good. This graph shows how stable it has been on one of our clusters.

On that cluster's manager node, we were seeing random CPU+traffic+memory spikes on one of the proxies, which would eventually be killed by the OOM killer… then it would restart and get killed again shortly after that. A larger cluster would see the same spikes, but it had 4x the RAM and wouldn't OOM.

Since switching to the Broker version around 5/25, that completely stopped. The base CPU usage is a bit higher, but all the random spikes are gone. The base memory usage is also lower.

I could never figure out what was causing the problem, and it's possible that &synchronized not doing anything anymore is why it's better now. I'm mostly using &synchronized for syncing input files across all the workers, and one of them does have 300k entries in it. That file is fairly constant though; only a few thousand entries change every 5 minutes, and nothing that should use 20G of RAM.

I still need to replace all of our uses of &synchronized. The config framework may work for most cases once the cluster bits are done, but probably not for syncing the 300k item set.

Another GREAT thing I noticed, which we may want to add to NEWS: it looks like the number of file descriptors used per worker is down from 5 (1 socket + 2+2 from pipes) to just 1 socket (no more flares?). This means that even though select() is still not gone, the ~175-worker limitation of 2.5.x will go away, and people would be able to run 500+ worker clusters if they wanted to.

FWIW, I figured out what was causing this problem. While the file wasn't changing that much, I was using something like

    curl -o file.csv.tmp "$URL" && mv file.csv.tmp file.csv

to download the file, and apparently unless you pass -f to curl, it doesn't actually exit with a non-zero status code on server errors.
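
For reference, a more robust version of that pattern might look like the following sketch (the function and file names are illustrative, not from the original setup): download to a temp file with -f so HTTP errors produce a non-zero exit status, and only replace the live file on success.

```shell
# Hedged sketch: atomic download that never clobbers a good file with an
# HTTP error page. -f makes curl exit non-zero on 4xx/5xx responses;
# -sS keeps it quiet except for real errors. Names are illustrative.
fetch() {
    url=$1
    dest=$2
    tmp="$dest.tmp"
    if curl -fsS -o "$tmp" "$url"; then
        mv "$tmp" "$dest"    # atomic replace on the same filesystem
    else
        rm -f "$tmp"         # keep the previous good copy in place
        return 1
    fi
}

# usage: fetch "$URL" file.csv
```

With this, a failed download leaves the last good csv untouched, so the input reader never sees a garbage file and never clears the set.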

This was causing a server error page to be written to the csv file every now and then. When this happened:

* the input reader would throw a warning that the file couldn't be parsed, and clear out the set
* clearing the set would trigger a removal of 300k items across all nodes (56 in the case of the test cluster)
* 5 minutes later the next download would work
* bro would then fill the set back in, triggering another 300k items to be synced to all 56 nodes

So within 5 minutes, 300,000 * 56 * 2 updates would be kicked off, which is over 33 million updates. This seemed to max out the proxies for 30 minutes.
The raw size of the data is only ~4M, or 261M total, which makes it a little crazy that memory usage would blow up by dozens of gigabytes of RAM.
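
Spelling out that arithmetic as a quick sanity check (numbers from the paragraph above):

```shell
# 300k entries removed and then re-added, each change propagated to 56 nodes
echo $((300000 * 56 * 2))   # prints 33600000, i.e. ~33.6 million updates
```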

&synchronized not having an effect in master made this problem go away, and adding a -f to curl on our pre-broker clusters fixed those too.

All the more reason to port the method of distributing the data off of &synchronized. I think I will just run the curl command on the worker nodes too,
effectively replacing &synchronized with curl.