Troubleshooting crashes

Hello all,

What’s the best way to systematically disable parts of Bro to isolate crashes?

I disabled all the protocols except SSH and a few default scripts that utilize it. After ~12 hours I returned to find many of the worker nodes had crashed. I forgot to look at the diag for the crashed workers before stopping the cluster.

Suggestions greatly appreciated.

Thanks,

-TC

local.bro configuration

What's the best way to systematically disable parts of Bro to isolate crashes?

Sending us the diag output from broctl is best since it will include a back trace.

I disabled all the protocols except SSH and a few default scripts that utilize it.

How were you disabling protocols? (Never mind, I see the answer later and I'll comment there.)

After ~12 hours I returned to find many of the worker nodes had crashed. I forgot to look at the diag for the crashed workers before stopping the cluster.

Do you have the cron command set up correctly? The workers should have been restarted automatically after they crashed and a diagnostic email sent to you.
  Mentioned in this section: http://bro-ids.org/documentation/quickstart.html#a-minimal-starting-configuration
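
For example, a minimal crontab entry for that (a sketch, assuming the install prefix used elsewhere in this thread) looks like:

  # Run broctl's periodic tasks every five minutes; this restarts
  # crashed nodes and mails their crash diagnostics.
  */5 * * * * /usr/local/3rd-party/bro/bin/broctl cron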

base/init-default.bro configuration

bro@bc : [9:20pm] : bro : grep -v "^#" base/init-default.bro | grep "[a-z]"

I see that you were modifying scripts in the base directory to disable analyzers. I just want to point out that we don't support making changes directly to the scripts there, and you could get into trouble if you try to update or reinstall with changes in that area.

That said, nice job figuring that out, since we don't provide a lot of support in that area right now. :-) I'm hoping to introduce an analyzer framework that will provide an API for doing this with the 2.2 release.

Total rings : 10

How many CPU cores do you have?

-rw-r--r-- 1 bro bro 10323 Aug 30 21:15 reporter.log
-rw-r--r-- 1 bro bro 52846117 Aug 30 21:27 weird.log

I'm curious about what's in reporter.log; normally that shouldn't have too much in it. Also, that's an astonishingly large weird.log. Is there anything that stands out in those two?

Could you show me your node.cfg configuration too?

Oh, and one last thing: have you made sure to disable all of the special NIC features?
  Security Onion: When is full packet capture NOT full packet capture?

  .Seth

Finally getting back to this.

What's the best way to systematically disable parts of Bro to isolate crashes?

Sending us the diag output from broctl is best since it will include a back trace.

==== No reporter.log

==== stderr.log
listening on eth5, capture length 8192 bytes

/usr/local/3rd-party/bro/share/broctl/scripts/run-bro: line 60: 15452 Segmentation fault nohup mybro @

==== stdout.log
unlimited
unlimited
unlimited

==== .cmdline
-i eth5 -U .status -p broctl -p broctl-live -p local -p worker-5-9 local.bro broctl base/frameworks/cluster local-worker.bro broctl/auto

==== .env_vars
PATH=/usr/local/3rd-party/bro/bin:/usr/local/3rd-party/bro/share/broctl/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
BROPATH=/usr/local/3rd-party/bro/spool/installed-scripts-do-not-touch/site::/usr/local/3rd-party/bro/spool/installed-scripts-do-not-touch/auto:/usr/local/3rd-party/bro/share/bro:/usr/local/3rd-party/bro/share/bro/policy:/usr/local/3rd-party/bro/share/bro/site
CLUSTER_NODE=worker-5-9

==== .status
RUNNING [net_run]

==== No prof.log

==== No packet_filter.log

==== No loaded_scripts.log

-- [Automatically generated.]

After ~12 hours I returned to find many of the worker nodes had crashed. I forgot to look at the diag for the crashed workers before stopping the cluster.

Do you have the cron command set up correctly? The workers should have been restarted automatically after they crashed and a diagnostic email sent to you.
Mentioned in this section: http://bro-ids.org/documentation/quickstart.html#a-minimal-starting-configuration

I did not; it’s working properly now.

(…)

Total rings : 10

How many CPU cores do you have?

48 per server.

-rw-r--r-- 1 bro bro 10323 Aug 30 21:15 reporter.log

-rw-r--r-- 1 bro bro 52846117 Aug 30 21:27 weird.log

I'm curious about what's in reporter.log; normally that shouldn't have too much in it. Also, that's an astonishingly large weird.log. Is there anything that stands out in those two?

reporter.log -- looks like I need to set up the GeoIPv6 database: /usr/share/GeoIP/GeoIPCityv6.dat (empty)

50 Reporter::INFO processing continued (empty) <cut…>
50 Reporter::INFO Failed to open GeoIP database: <cut…>
29 Reporter::INFO processing suspended (empty) <cut…>

weird.log --

bro@bc : [12:33am] : 2012-08-30 : ls -l weird.* | tail -5
-rw-r--r-- 1 bro bro 16757363 Aug 30 21:00 weird.20:00:00-21:00:00.log.gz
-rw-r--r-- 1 bro bro 304697 Aug 30 21:02 weird.21:00:00-21:02:10.log.gz
-rw-r--r-- 1 bro bro 39351508 Aug 30 22:00 weird.21:12:53-22:00:00.log.gz
-rw-r--r-- 1 bro bro 55141105 Aug 30 23:00 weird.22:00:00-23:00:00.log.gz
-rw-r--r-- 1 bro bro 38190282 Aug 31 00:00 weird.23:00:00-00:00:00.log.gz

bro@bc : [12:33am] : 2012-08-30 : gzcat weird.23:00:00-00:00:00.log.gz | awk '{print $7}' | sort | uniq -c | sort -rn | head -10
614589 data_before_established
585445 possible_split_routing
260703 window_recision
190652 SYN_seq_jump
100211 inappropriate_FIN
64533 above_hole_data_without_any_acks
37882 connection_originator_SYN_ack
33611 data_after_reset
19106 Teredo_bubble_with_payload
11510 SYN_after_reset

bro@bc : [12:34am] : current : awk '{print $7}' weird.log | sort | uniq -c | sort -rn | head -10

51561 window_recision
49218 possible_split_routing
47776 data_before_established
24526 Teredo_bubble_with_payload
19894 connection_originator_SYN_ack
11718 SYN_seq_jump
8938 inappropriate_FIN
8701 data_after_reset
7523 above_hole_data_without_any_acks
5765 inner_IP_payload_length_mismatch

Could you show me your node.cfg configuration too?

bro@bc : [12:41am] : bro : cat etc/node.cfg
[manager]
type=manager
host=z.z.z.M

[proxy-1]
type=proxy
host=z.z.z.M

[worker-1]
type=worker
host=z.z.z.A
interface=eth5
lb_procs=10
lb_method=pf_ring

[worker-2]
type=worker
host=z.z.z.B
interface=eth5
lb_procs=10
lb_method=pf_ring

[worker-3]
type=worker
host=z.z.z.C
interface=eth5
lb_procs=10
lb_method=pf_ring

[worker-4]
type=worker
host=z.z.z.D
interface=eth5
lb_procs=10
lb_method=pf_ring

[worker-5]
type=worker
host=z.z.z.E
interface=eth5
lb_procs=10
lb_method=pf_ring

Oh, and one last thing: have you made sure to disable all of the special NIC features?
http://securityonion.blogspot.com/2011/10/when-is-full-packet-capture-not-full.html

Yeah, I've used those recommendations from the start with one exception: the Intel X520-DA2 cards I'm using do not support disabling "ufo" (UDP fragmentation offload).
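
Concretely, those recommendations boil down to turning off the NIC offload features with ethtool, along these lines (a sketch; eth5 is the capture interface here, and supported feature names vary by driver):

  ethtool -K eth5 gro off lro off tso off gso off rx off tx off sg off
  ethtool -K eth5 ufo off   # this is the one that fails on the X520-DA2
  ethtool -k eth5           # verify what the driver actually accepted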

Adjust interface features

listening on eth5, capture length 8192 bytes

I see later in the email that you have the MTU on your NIC set to 9600. You may want to add the following line to local.bro to make Bro's snap length match that.

redef snaplen = 9600;
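
After adding that line, something like the following should push the change out and confirm it took effect (a sketch; "broctl print" queries the live value on each running node):

  broctl install          # redeploy scripts and config to all nodes
  broctl restart
  broctl print snaplen    # should now report 9600 on every worker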

/usr/local/3rd-party/bro/share/broctl/scripts/run-bro: line 60: 15452 Segmentation fault nohup mybro @

Hm, looks like you aren't getting stack traces. Your OS is probably either not keeping core dumps or writing them to some OS-wide core dump directory. Change the sysctl variable for your OS to dump core files and make sure they're being dropped into the CWD prefixed with "core".
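
On Linux, for example, that amounts to something like this (a sketch; the exact knobs vary by OS and distribution):

  ulimit -c unlimited                  # allow core files in the shell that starts Bro
  sysctl -w kernel.core_uses_pid=1     # name core files "core.<pid>"
  sysctl -w kernel.core_pattern=core   # write them into the crashing process's CWD

Once a worker dies again, the diag output should then pick up the core file and include a backtrace.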

Daniel, do you think that's something that you could add to the documentation somewhere?

bro@bc : [12:33am] : 2012-08-30 : gzcat weird.23:00:00-00:00:00.log.gz | awk '{print $7}' | sort | uniq -c | sort -rn | head -10
614589 data_before_established
585445 possible_split_routing

I'm a little curious about these two. Normally, lots of these lines indicate that something is wrong with how Bro is collecting packets. I'm interested to find out whether they go away when you adjust the snap length. Is the MTU of your network actually 9600, or did you just set the interface MTU to the maximum it would allow?

[worker-1]
type=worker
host=z.z.z.A
interface=eth5
lb_procs=10
lb_method=pf_ring

Nice, I don't think that many people are using the load balancing feature of BroControl yet since I don't think we have it documented.

Daniel, did that end up getting documented anywhere?

Yeah, I've used those recommendations from the start with one exception: the Intel X520-DA2 cards I'm using do not support disabling "ufo" (UDP fragmentation offload).

I would think that's fine.

Thanks for all of the debugging information, it's really helpful.

  .Seth

(…cut…)

/usr/local/3rd-party/bro/share/broctl/scripts/run-bro: line 60: 15452 Segmentation fault nohup mybro @

Hm, looks like you aren't getting stack traces. Your OS is probably either not keeping core dumps or writing them to some OS-wide core dump directory. Change the sysctl variable for your OS to dump core files and make sure they're being dropped into the CWD prefixed with "core".

I’ll make the change and send the info when it’s available.

bro@bc : [12:33am] : 2012-08-30 : gzcat weird.23:00:00-00:00:00.log.gz | awk '{print $7}' | sort | uniq -c | sort -rn | head -10
614589 data_before_established
585445 possible_split_routing

I'm a little curious about these two. Normally, lots of these lines indicate that something is wrong with how Bro is collecting packets. I'm interested to find out whether they go away when you adjust the snap length. Is the MTU of your network actually 9600, or did you just set the interface MTU to the maximum it would allow?

The MTU in use is actually 9600. I made the snaplen change you recommended but I’m still seeing more or less the same results in weird.log. I checked the source to see what “possible_split_routing” represents and that leads me to think our tap/aggregation setup may be incorrectly load balancing traffic; I’ll get back to you on that later.

417369 possible_split_routing
353346 data_before_established
258014 window_recision
121325 active_connection_reuse
91851 connection_originator_SYN_ack
48796 Teredo_bubble_with_payload
48324 bad_SYN_ack
44451 inappropriate_FIN
44451 above_hole_data_without_any_acks
31832 SYN_seq_jump

The front-end setup is working OK. I was missing PFRINGClusterID in broctl.cfg; fixing that seems to have helped with memory and CPU usage.

I still have lots of entries in weird.log, with split_routing being the most common. broctl's "netstats" command does not show any drops, nor does the PF_RING info from /proc/net/pf_ring/. The count of "split_routing" events is about equal across all workers, so I think it's something to do with the load balancing via PF_RING. The traffic is 802.1Q tagged, so maybe pf_ring is using 6-tuple load balancing for the cluster.
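
For reference, these are the checks I ran (the /proc layout varies by PF_RING version):

  broctl netstats                                # per-worker received/dropped counts
  cat /proc/net/pf_ring/info                     # global PF_RING state
  grep -i lost /proc/net/pf_ring/* 2>/dev/null   # per-ring drop counters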

The front-end setup is working OK. I was missing PFRINGClusterID in broctl.cfg; fixing that seems to have helped with memory and CPU usage.

Oh, that should have been set already. Well, I suppose it might not have been if you upgraded this installation from a previous non-pf_ring-enabled installation.

It may be time to revisit our decision to only set that variable when building against a pf_ring-enabled libpcap, since this "upgrading to pf_ring" problem exposes itself. Daniel, Jon, what do you guys think?

The count of "split_routing" events is about equal across all workers, so I think it's something to do with the load balancing via PF_RING.

That sounds like the culprit.

The traffic is 802.1Q tagged, so maybe pf_ring is using 6-tuple load balancing for the cluster.

They made that configurable a while back for me. I would recommend trying 2-tuple or 4-tuple balancing (I don't remember their default). If you figure out how to configure it, could you let us know how, so we don't have to go look it up? :-)

Are you loading the misc/capture-loss script too? I would recommend loading that once you get this pf_ring issue all sorted out. That should be the final (or nearly final) measurement to see if you are getting all of your traffic correctly.
  http://www.bro-ids.org/documentation/scripts/policy/misc/capture-loss.html
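
Loading it is one line in local.bro plus a redeploy; something like this (a sketch, assuming the default site directory under your prefix):

  echo '@load misc/capture-loss' >> /usr/local/3rd-party/bro/share/bro/site/local.bro
  broctl install && broctl restart

capture_loss.log should then report an estimated loss percentage per peer.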

Thanks!
  .Seth

The front-end setup is working OK. I was missing PFRINGClusterID in broctl.cfg; fixing that seems to have helped with memory and CPU usage.

Oh, that should have been set already. Well, I suppose it might not have been if you upgraded this installation from a previous non-pf_ring-enabled installation.

It may be time to revisit our decision to only set that variable when building against a pf_ring-enabled libpcap, since this "upgrading to pf_ring" problem exposes itself. Daniel, Jon, what do you guys think?

The PF_RING user's guide has a note about recompiling existing applications in the section on libpfring and libpcap installation. As long as that happens, I think Bro should already automatically upgrade to use it, because the default value for PFRINGClusterID in $prefix/lib/broctl/BroControl/options.py gets overwritten by the new installation. No change to broctl.cfg is technically necessary; setting it there is for advanced users, or for when the default value happens to conflict with another application.

    Jon

I was seeing a lot of these as well. I am mirroring two ports, hence a lot of duplicate traffic. Are you doing something similar? When I had my networking engineer turn off one of the mirrored ports, I saw a 60% reduction in data_before_established and a 66% decrease in possible_split_routing. I'm comparing data between the same hour on Thursday and Friday, so some of that drop is related to a normal drop in traffic, but most is probably from turning off the mirror.

Tyler

No. It was due to the traffic being 802.1Q tagged and the default hashing algorithm in PF_RING. The default hash includes the VLAN id, and in this network the traffic is tagged according to peering session and direction of flow; as a result, a "5-tuple" flow is really two "6-tuple" flows, and the flow ends up split across different workers.

I think the fix should be as easy as including another PF_RING environment variable when starting Bro.

As I understand it, possible_split_routing means Bro is missing some packets, so you should check the front-end setup.

grep -n -B5 possible_split src/TCP.cc
521- // We've already sent a SYN, but that
522- // hasn't roused the other end, yet we're
523- // ack'ing their data.
524-
525- if ( ! Conn()->DidWeird() )
526: Weird("possible_split_routing");

/tc

Oh! I didn't realize this was settable through an environment variable when using their libpcap wrappers. Do you happen to know the variable? That's something we definitely need to be setting; almost everyone with VLAN-tagged traffic has trouble with the default PF_RING setting.

  .Seth

I checked, and Bro already has the right env variables.

From lib/broctl/plugins/lb_pf_ring.py:

23:         if BroControl.config.Config.pfringclusterid != "0":
24:             nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW=1"]
25:             nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]

The problem is that the changes this triggers inside PF_RING do not work as expected, so I'm still working to prove that to the developer and find out why. For the time being I'm using a slight modification to pf_ring as a workaround.
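
For what it's worth, you can confirm what each worker actually received by looking at the .env_vars file in its spool directory (the same file that shows up in the diag output above; path from this install):

  cat /usr/local/3rd-party/bro/spool/worker-5-9/.env_vars
  # PCAP_PF_RING_USE_CLUSTER_PER_FLOW=1 and PCAP_PF_RING_CLUSTER_ID should both be listed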

/tc

There should be some other PF_RING configuration option for setting the tuples to use for load balancing too. It's a relatively new feature; I'll look into the variable soon. You shouldn't have to make a modification to pf_ring.

  .Seth

Yeah, I know, and I've read the PF_RING source for the changes you're referring to. You can select 2-tuple, 4-tuple, 5-tuple for TCP only (with 2-tuple for all other traffic), regular 5-tuple, or the default, which is 6-tuple (if the vlan_id is present; otherwise it is essentially 5-tuple).

The point I'm trying to make is that the knobs within PF_RING that control this behavior do not work properly when 802.1Q tags are involved. My custom patch just prevents the parsed VLAN id from being assigned. All other suggestions, including the recommended approach, do not work; Bro spews split_routing alarms for everything except my patched version.

/tc

Ok, I found the problem. We got tripped up when they added the configuration option, and we may have to do a 2.1.1 release. We were setting the PCAP_PF_RING_USE_CLUSTER_PER_FLOW env var to "1" just to have it set to something. They started using that value to select how the load balancing is done, and "1" happens to map to round-robin.

We probably need to make that configurable (from broctl.cfg) with a default of 2 (2-tuple… it causes less trouble).

Here's the structure that defines the different load balancing approaches with that variable…

typedef enum {
  cluster_per_flow = 0, /* 6-tuple: <src ip, src port, dst ip, dst port, proto, vlan> */
  cluster_round_robin,
  cluster_per_flow_2_tuple, /* 2-tuple: <src ip, dst ip > */
  cluster_per_flow_4_tuple, /* 4-tuple: <src ip, src port, dst ip, dst port > */
  cluster_per_flow_5_tuple, /* 5-tuple: <src ip, src port, dst ip, dst port, proto > */
} cluster_type;

Daniel, could you make this change?

  .Seth

Ok, that's good to hear; however, I still think the problem is with PF_RING. I'm not sure your assessment of that environment variable is correct. See the last three emails to the pf_ring developers here: [1].

From pf_ring libpcap…

pcap-linux.c.orig:1183: if(getenv("PCAP_PF_RING_USE_CLUSTER_PER_FLOW"))
pcap-linux.c.orig-1184-   pfring_set_cluster(handle->ring, atoi(clusterId), cluster_per_flow);
pcap-linux.c.orig-1185- else
pcap-linux.c.orig-1186-   pfring_set_cluster(handle->ring, atoi(clusterId), cluster_round_robin);

Notice the call to pfring_set_cluster.

"cluster_per_flow" is the default 5-tuple hashing mode (actually 6-tuple, due to the vlan problem). Below I've included the relevant code from pf_ring, which shows that the only difference between cluster_per_flow and cluster_per_flow_5_tuple is the call to hash_pkt_header().

"hash_pkt_header" is called from "hash_pkt_cluster" and "default_rehash_rss_func".

Inside kernel/pf_ring.c, the function "hash_pkt_cluster" determines the hashing mode and calls "hash_pkt_header". The last argument to "hash_pkt_header" instructs PF_RING to mask the VLAN so it is not used in the hash calculation.

It looks like this → idx = hash_pkt_header(hdr, 0, 0, 0, 0, 1);

Notice that every call to hash_pkt_header() except the default case of "cluster_per_flow" attempts to mask the vlan. (Up until a few days ago, none of the calls to hash_pkt_header() masked the vlan.)

I patched PF_RING in various places to force hash_pkt_header() to always mask the vlan, regardless of which environment variable was set. That did not work, so I took it a step further.

Of course, it's always possible I'm highly confused; time will tell. :o

/tc

3889:static u_int hash_pkt_cluster(ring_cluster_element *cluster_ptr,
3890-                              struct pfring_pkthdr *hdr)
3891-{
3892-  u_int idx;
3893-
3894-  switch(cluster_ptr->cluster.hashing_mode) {
3895-  case cluster_round_robin:
3896-    idx = cluster_ptr->cluster.hashing_id++;
3897-    break;
3898-
3899-  case cluster_per_flow_2_tuple:
3900-    idx = hash_pkt_header(hdr, 0, 0, 1, 1, 1);
3901-    break;
3902-
3903-  case cluster_per_flow_4_tuple:
3904-    idx = hash_pkt_header(hdr, 0, 0, 0, 1, 1);
3905-    break;
3906-
3907-  case cluster_per_flow_tcp_5_tuple:
3908-    if(((hdr->extended_hdr.parsed_pkt.tunnel.tunnel_id == NO_TUNNEL_ID) ?
3909-       hdr->extended_hdr.parsed_pkt.l3_proto : hdr->extended_hdr.parsed_pkt.tunnel.tunneled_proto) == IPPROTO_TCP)
3910-      idx = hash_pkt_header(hdr, 0, 0, 0, 0, 1); /* 5 tuple */
3911-    else
3912-      idx = hash_pkt_header(hdr, 0, 0, 1, 1, 1); /* 2 tuple */
3913-    break;
3914-
3915-  case cluster_per_flow_5_tuple:
3916-    idx = hash_pkt_header(hdr, 0, 0, 0, 0, 1);
3917-    break;
3918-
3919-  case cluster_per_flow:
3920-  default:
3921-    idx = hash_pkt_header(hdr, 0, 0, 0, 0, 0);
3922-    break;
3923-  }
3924-
3925-  return(idx % cluster_ptr->cluster.num_cluster_elements);
3926-}

(…)

Ok, that's good to hear; however, I still think the problem is with PF_RING. I'm not sure your assessment of that environment variable is correct. See the last three emails to the pf_ring developers here: [1].

I forgot to include the link for [1].

http://www.mail-archive.com/ntop-misc@listgateway.unipi.it/msg02944.html

/tc

Arg! You're right, I didn't read through that code closely enough. I guess we're still stuck in a position where we can't configure that, since they haven't exposed it through their libpcap wrappers yet. I'll try to bring that up with them soon.

Daniel, you can either just delete that branch or wait until we have more configurability and make the correct modification then.

Thanks,
.Seth

The PF_RING issue has been fixed, and the cluster mode can now be selected via the API (and, in the libpcap wrapper, via environment variables).

http://www.mail-archive.com/ntop-misc@listgateway.unipi.it/msg02972.html

I made the following patch for Bro. To set the cluster mode from broctl.cfg, set "PFRINGClusterType" to one of the following:

- "2tuple"
- "4tuple"
- "5tupletcp"
- "5tuple"

If PFRINGClusterType is not defined, the default will be "cluster_per_flow", which is essentially 5-tuple unless an 802.1Q header is present.

/tc

--- …/bro-2012-09-22/aux/broctl/BroControl/plugins/lb_pf_ring.py 2012-09-22 08:43:01.000000000 +0000
+++ aux/broctl/BroControl/plugins/lb_pf_ring.py 2012-09-25 10:48:54.000000000 +0000
@@ -21,5 +21,19 @@

         if nn.lb_method == "pf_ring":
             if BroControl.config.Config.pfringclusterid != "0":
-                nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW=1"]
-                nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]
+                if BroControl.config.Config.pfringclustertype == "2tuple":
+                    nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW_2_TUPLE=1"]
+                    nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]
+                elif BroControl.config.Config.pfringclustertype == "4tuple":
+                    nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW_4_TUPLE=1"]
+                    nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]
+                elif BroControl.config.Config.pfringclustertype == "5tupletcp":
+                    nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW_TCP_5_TUPLE=1"]
+                    nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]
+                elif BroControl.config.Config.pfringclustertype == "5tuple":
+                    nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW_5_TUPLE=1"]
+                    nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]
+                else:
+                    nn.env_vars += ["PCAP_PF_RING_USE_CLUSTER_PER_FLOW=1"]
+                    nn.env_vars += ["PCAP_PF_RING_CLUSTER_ID=%s" % BroControl.config.Config.pfringclusterid]
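
With the patch applied, picking a mode is then a one-line setting, e.g.:

  # in etc/broctl.cfg
  PFRINGClusterType = 2tuple

followed by the usual broctl install and restart to push it out to the nodes.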

The PF_RING issue has been fixed, and the cluster mode can now be selected via the API (and, in the libpcap wrapper, via environment variables).

Nice!

If PFRINGClusterType is not defined, the default will be "cluster_per_flow", which is essentially 5-tuple unless an 802.1Q header is present.

Cool! Daniel, can you merge this into your branch where you had fixed this previously (when I was wrong about how to fix it)?

Thanks,
  .Seth