I think I'm specifying restrict_filters correctly to stop some hosts from being logged, but it's not working as I intend/expect.
My local.bro redefinition of restrict_filters (below) is being recognized and propagated by broctl install, as confirmed by print restrict_filters after restarting.
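For anyone reproducing that check: BroControl's print command shows a script variable's current value on the running nodes. A hypothetical session (host name illustrative) might look like:

```
# Hypothetical check: ask BroControl to print the variable on each node.
[manager-host ~]$ broctl print restrict_filters
```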
As further confirmation that the redef is being noticed, if I specify a pcap syntax impossibility in restrict_filters, workers quit on restart with "fatal error in /raid/bro/share/bro/base/frameworks/packet-filter/./main.bro, line 282: Bad pcap filter ...".
Yet when the restrict_filters are syntactically valid and seemingly recognized, the IP addresses listed in them still appear in log entries.
This logging continues after a broctl install and update, after a broctl install and restart, as well as after a complete cluster reboot.
I'm seeing this under Bro 2.3-7 on CentOS 6.5 with PF_RING.  Redef'ing capture_filters as shown in the details below, or leaving it at the default, makes no difference to the restrict_filters failure I'm seeing.
Any ideas for where to take this debugging odyssey?  What am I missing that's obvious?
Richard
[manager-host ~] grep capture_filters /raid/bro/share/bro/site/local.bro
redef capture_filters = { ["all"] = "ip or not ip" };
[manager-host ~] grep restrict_filters /raid/bro/share/bro/site/local.bro
redef restrict_filters += { ["not-these-hosts"] = "not host 172.16.1.1 and not host 172.16.22.22 and not host 172.16.39.39 and not host 172.16.88.88" };
...
[manager-host current]$ grep 172.16.88.88 conn.log | tail -3
1429461245.805348  CpuepS3Ds2GYzABCtb  xx.xx.xx.xx  xxxxx  172.16.88.88  443  tcp  ssl  4192.655995  14660  16441  S1  F  0  ShADda   50  17268  49  19001  (empty)
1429464730.699197  CqVMY53iVvTFSWclAi  xx.xx.xx.xx  xxxxx  172.16.88.88  443  tcp  ssl  1002.988461  5491   4481   SF  F  0  ShADdaFf 21  6591   17  5377   (empty)
1429464286.982078  CUl3Cl24bUWkgbhAGd  xx.xx.xx.xx  xxxxx  172.16.88.88  443  tcp  ssl  1447.315821  7095   5595   SF  F  0  ShADdafF 25  8403   21  6699   (empty)
For the record, this is solved, thanks to the distributed kibitzing of Adam Slagell, Vern Paxson, Seth Hall, and others in the hallway track at BroCon 2015.  "Check for VLAN tags."  "Try 'vlan ####' in capture_filters."
Our upstream feed had been switched to a trunk, and began carrying other VLANs in addition to the main tap feed we were expecting.  When that happened, Bro quietly stepped past the VLAN tags in policy processing.
As a result, there was no Bro monitoring outage.  We just had some duplicate and unintentionally monitored connections which we didn't spot due to low volume.  Thus the change slipped past us.
However, the pcap filter expressions in restrict_filters no longer matched, because libpcap's default packet offsets do not account for 802.1Q VLAN tags.  Specifying the VLAN(s) to watch in capture_filters restores the match for the IP addresses in restrict_filters.
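The offset behavior is easy to demonstrate outside Bro with tcpdump, which uses the same libpcap filter syntax (the interface name and VLAN id below are illustrative):

```
# On a trunk feed, this matches only untagged traffic from the host:
tcpdump -i eth0 -nn 'host 172.16.88.88'

# The vlan keyword shifts the filter offsets past the 802.1Q header,
# so the host test now applies inside tagged frames as well:
tcpdump -i eth0 -nn 'vlan 321 and host 172.16.88.88'
```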
Our fix, in local.bro (where 321 is the VLAN number of the tap feed):
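The redef itself did not survive in this copy of the message; based on the description above ("try 'vlan ####' in capture_filters", VLAN 321 for the tap feed), a minimal sketch of the fix would be something like:

```
# local.bro -- limit capture to the tap feed's VLAN so the BPF offsets
# line up and the restrict_filters entries match again.
# Sketch only; the table index name and exact expression are assumptions.
redef capture_filters = { ["all"] = "vlan 321" };
```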