Patch for multiple loggers

Finally got this working; the previous changes didn’t assign a logger to the manager and proxies.

The attached patches modify:

  • lib/broctl/BroControl/install.py
  • lib/broctl/BroControl/config.py
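
The interesting change is simply that the generated cluster layout now gives the manager and proxies (and workers) a logger assignment instead of leaving them without one. Purely as an illustration of the idea, and not the code in the attached patches, one obvious way to hand out loggers is round-robin; the Node type and function name below are made up for the sketch:

# Illustrative sketch only -- not the attached patch. Shows one way (round-robin)
# to give every manager/proxy/worker a logger when generating the cluster layout.
from collections import namedtuple

Node = namedtuple("Node", ["name", "type"])

def assign_loggers(nodes):
    """Map each non-logger node to one of the configured loggers."""
    loggers = [n.name for n in nodes if n.type == "logger"]
    others = [n for n in nodes if n.type != "logger"]
    if not loggers:
        return {}
    return {n.name: loggers[i % len(loggers)] for i, n in enumerate(others)}

cluster = [Node("logger-1", "logger"), Node("logger-2", "logger"),
           Node("manager", "manager"), Node("proxy-1", "proxy"),
           Node("worker-1", "worker"), Node("worker-2", "worker")]
print(assign_loggers(cluster))
# manager -> logger-1, proxy-1 -> logger-2, worker-1 -> logger-1, worker-2 -> logger-2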

To use them, adjust node.cfg to include logger-N entries, similar to the proxy entries.
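For example, a node.cfg with the patches applied might look something like this (reusing 10.1.1.16 and lagg1 from the output below; purely illustrative):

[logger-1]
type=logger
host=10.1.1.16

[logger-2]
type=logger
host=10.1.1.16

[manager]
type=manager
host=10.1.1.16

[proxy-1]
type=proxy
host=10.1.1.16

[worker-1]
type=worker
host=10.1.1.16
interface=lagg1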

Memory usage remains stable over time… (so far)

Wed Apr 19 18:06:21 UTC 2017
Checking Bro status…
Name Type Host Pid Proc VSize Rss Cpu Cmd
logger-1 logger 10.1.1.16 24527 parent 744M 225M 56% bro
logger-1 logger 10.1.1.16 25476 child 174M 87M 4% bro
logger-10 logger 10.1.1.16 24540 parent 731M 239M 51% bro
logger-10 logger 10.1.1.16 25087 child 154M 94M 3% bro
logger-11 logger 10.1.1.16 24543 parent 723M 222M 54% bro
logger-11 logger 10.1.1.16 25390 child 154M 94M 3% bro
logger-12 logger 10.1.1.16 24559 parent 719M 230M 54% bro
logger-12 logger 10.1.1.16 25197 child 138M 77M 3% bro
logger-2 logger 10.1.1.16 24557 parent 719M 228M 53% bro
logger-2 logger 10.1.1.16 25477 child 154M 92M 3% bro
logger-3 logger 10.1.1.16 24577 parent 715M 229M 55% bro
logger-3 logger 10.1.1.16 25086 child 150M 90M 3% bro
logger-4 logger 10.1.1.16 24585 parent 723M 234M 53% bro
logger-4 logger 10.1.1.16 25204 child 138M 78M 3% bro
logger-5 logger 10.1.1.16 24587 parent 727M 224M 54% bro
logger-5 logger 10.1.1.16 25499 child 162M 97M 3% bro
logger-6 logger 10.1.1.16 24593 parent 711M 229M 57% bro
logger-6 logger 10.1.1.16 25366 child 142M 83M 3% bro
logger-7 logger 10.1.1.16 24599 parent 715M 229M 53% bro
logger-7 logger 10.1.1.16 25480 child 154M 95M 3% bro
logger-8 logger 10.1.1.16 24600 parent 747M 239M 54% bro
logger-8 logger 10.1.1.16 25166 child 142M 82M 3% bro
logger-9 logger 10.1.1.16 24606 parent 723M 218M 60% bro
logger-9 logger 10.1.1.16 25481 child 150M 91M 3% bro
manager manager 10.1.1.16 25449 child 522M 256M 100% bro
manager manager 10.1.1.16 25303 parent 566M 506M 27% bro

Loggers using more CPU…

last pid: 36661; load averages: 20.99, 26.13, 75.80 up 3+04:59:57 18:10:51
89 processes: 3 running, 86 sleeping
CPU: 21.6% user, 0.6% nice, 15.1% system, 0.6% interrupt, 62.1% idle
Mem: 1920M Active, 3494M Inact, 19G Wired, 35M Cache, 100G Free
ARC: 7603M Total, 2708M MFU, 4469M MRU, 16K Anon, 50M Header, 377M Other
Swap: 12G Total, 17M Used, 12G Free

PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
25449 bro 1 108 5 522M 256M CPU21 21 22:53 100.00% bro
24593 bro 157 20 0 711M 229M select 27 42:15 72.46% bro
24587 bro 162 20 0 727M 224M select 26 41:58 71.58% bro
24557 bro 157 20 0 719M 228M select 21 42:07 70.70% bro
24606 bro 162 20 0 723M 218M select 31 42:12 70.61% bro
24600 bro 162 20 0 747M 239M select 17 42:11 70.51% bro
24540 bro 157 20 0 731M 239M select 6 41:33 70.46% bro
24585 bro 157 20 0 723M 235M select 21 41:48 69.53% bro
24543 bro 162 20 0 723M 222M select 7 42:05 68.75% bro
24577 bro 157 20 0 715M 229M select 34 42:03 67.72% bro
24599 bro 157 20 0 715M 229M select 21 41:08 64.60% bro
24527 bro 167 20 0 744M 226M select 19 43:20 64.11% bro
24559 bro 157 20 0 719M 231M select 36 42:05 62.50% bro
25303 bro 7 20 0 574M 512M uwait 43 7:35 27.98% bro
36661 bro 1 79 0 112M 19248K CPU19 19 0:03 23.39% python2.7
36449 bro 1 52 0 52696K 7992K select 10 0:26 19.29% ssh
36451 bro 1 52 0 52696K 7992K select 25 0:26 19.29% ssh
36450 bro 1 52 0 52696K 7992K select 36 0:26 18.99% ssh
36452 bro 1 52 0 17100K 2404K piperd 41 0:26 18.46% sh
25476 bro 1 28 5 174M 89224K select 19 2:01 5.47% bro
25166 bro 1 27 5 142M 84528K select 2 1:40 4.69% bro
25499 bro 1 27 5 162M 99636K select 39 1:38 4.49% bro
25366 bro 1 27 5 142M 85204K select 3 1:41 4.39% bro
25481 bro 1 27 5 150M 93376K select 0 1:40 4.30% bro
25480 bro 1 27 5 154M 97280K select 7 1:40 4.30% bro
25087 bro 1 27 5 154M 96464K select 16 1:38 4.30% bro
25390 bro 1 27 5 154M 97024K select 9 1:39 4.20% bro
25086 bro 1 27 5 150M 92540K select 26 1:37 4.20% bro
25477 bro 1 27 5 154M 94392K select 34 1:39 4.05% bro
25197 bro 1 27 5 138M 79808K select 24 1:40 3.96% bro
25204 bro 1 27 5 138M 80316K select 43 1:35 3.96% bro
28300 bro 1 20 0 21952K 3204K CPU16 16 0:26 1.27% top

Mostly even distribution of packets across the workers (port 47773 is the outlier)…

tcpdump -tnn -c 2000 -i lagg1 src portrange 47761-47780 | awk -F "." '{print $1"."$2"."$3"."$4"–"$5}' | sort | uniq -c | sort -nr | awk '{print $1, $2, $3}' | sort -k3

2000 packets captured
16263 packets received by filter
0 packets dropped by kernel
113 IP 10.1.1.16–47761
132 IP 10.1.1.16–47762
138 IP 10.1.1.16–47763
114 IP 10.1.1.16–47764
105 IP 10.1.1.16–47765
118 IP 10.1.1.16–47766
99 IP 10.1.1.16–47767
115 IP 10.1.1.16–47768
120 IP 10.1.1.16–47769
105 IP 10.1.1.16–47770
105 IP 10.1.1.16–47771
105 IP 10.1.1.16–47772
631 IP 10.1.1.16–47773

multi-logger__config.py.patch (671 Bytes)

multi-logger__install.py.patch (3.86 KB)