Running specific scripts on specific workers

I have a cluster with three workers configured in node.cfg, and I’m looking for the best approach to limiting which scripts load on each. For example, with v2.4 this style of config in local.bro worked great:


```
@if ( Cluster::is_enabled() )

# INTERNAL ONLY - Matches on workers (MID_INT-1), proxies (MID_INT_PXY_1), and manager (MGR_INT).
@if ( /^.{3,3}_INT.*/ in Cluster::node )
# load internal-specific scripts here
@endif

# GLR ONLY - Matches on workers (MID_GLR-1), proxies (MID_INT_PXY), and manager (MGR_INT).
@if ( /^(MID_GLR|[DIMNW]{3,3}_INT_PXY|MGR_INT).*/ in Cluster::node )
# load GLR-specific scripts here
@endif

# DNS ONLY - Matches on workers (MID_DNS-1), proxies (MID_INT_PXY), and manager (MGR_INT).
@if ( /^(MID_DNS|[DIMNW]{3,3}_INT_PXY|MGR_INT).*/ in Cluster::node )
# load DNS-specific scripts here
@endif

@endif
```

However, since moving to v2.5 I’ve started seeing an oddity where some entries in notice.log have an entirely unrelated “note” value. If I remove the conditional script loading and load all scripts everywhere, the problem goes away.

I did limited testing with “aux_scripts” in node.cfg but was unsure of the proper config. I vaguely recall reading that if scripts weren’t loaded on the proxies and manager, as well as the worker, things could malfunction.
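The sort of config I was experimenting with looked roughly like this — node names, host, and script name are just examples, since I’m not sure this is the right usage of the per-node aux_scripts option:

```
[worker-1]
type=worker
host=10.0.0.1
interface=eth0
# extra scripts loaded only on this node (script name is a placeholder)
aux_scripts=glr-detect
```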

Would a better approach be to move the conditional logic into the specific scripts themselves? For example: if node == "GLR" then exit.



What exactly do you mean by "an entirely unrelated note value"? Do the notices have the wrong type associated with them?

In general, the most significant change between Bro 2.4 and 2.5 with regard to
logging is the introduction of the logger node, which might impact your
deployment. If you are using the logger node, you probably have to change
your scripts so that the parts that previously ran on the manager also run
on the logger now.
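For reference, a 2.5-style node.cfg with a logger node looks roughly like this (hosts and section names are placeholders); when a `[logger]` section is present, log writing moves from the manager to that node:

```
[logger]
type=logger
host=localhost

[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=eth0
```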

Without really looking into the details of how enums are currently serialized,
I suspect that the different nodes of your cluster might extend
Notice::Type with different values, depending on which scripts were loaded
on which nodes, which could lead to issues like this. Is it possible that
this is what's happening?
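A minimal illustration of how that mismatch could arise (notice names are made up, and this assumes enum values are assigned in load order per node): if each conditional branch redefs different notice types, the numeric values behind the enums are assigned node-locally, so the same number can mean different names on different nodes.

```
# Loaded only on the INT nodes:
redef enum Notice::Type += { Internal_Alert };   # gets some numeric value N here

# Loaded only on the GLR nodes:
redef enum Notice::Type += { GLR_Alert };        # may get the *same* value N there
```

When such a notice is shipped to the node that writes notice.log, the number could map back to whichever name that node assigned it, which would look exactly like an unrelated "note" value.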

If yes, the solution probably is to run the parts of the script that
perform data-type initialization (redef Notice::Type, define data
types, open log files, etc.) on all cluster nodes, and only use your
conditionals around the parts of the scripts that actually perform the work
(like handling events).
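In other words, something along these lines (the module, notice name, and node pattern are made up for illustration):

```
# glr-detect.bro -- loaded unconditionally on *all* cluster nodes
module GLR;

export {
    # Type initialization runs everywhere, so every node
    # agrees on the Notice::Type enum values.
    redef enum Notice::Type += { GLR_Suspicious };
}

# Only the actual work is restricted to the matching workers.
@if ( /^MID_GLR.*/ in Cluster::node )
event connection_established(c: connection)
    {
    # ... GLR-specific detection logic here ...
    NOTICE([$note=GLR_Suspicious, $conn=c, $msg="example notice"]);
    }
@endif
```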

I hope this helps :slight_smile: