I've been seeing AddressScan alerts, but when I check conn.log, I can't
find the corresponding entries. I got an alert yesterday about a
5060/udp scan hitting 100 hosts. Below are the conn.log, flowscan, and
notice.log for the entire day matching the IP and port.
conn.log:
Dec 1 11:27:45 0.000000 172.21.210.116 151.32.190.137 other 51272 5060 udp 101 ? S0 L
As you can see, at 11:27, Bro thinks 100 hosts were scanned on
5060/udp. But the conn.log and flowscan data only show one host being
scanned. Any ideas why this alert thinks 100 hosts are being hit when
it is one host with a single SYN?
> I've been seeing AddressScan alerts, but when I check conn.log, I can't
> find the corresponding entries.
In general with these sorts of problems, it helps hugely if you can supply
a trace that reproduces the problem, and also summarize the command line /
analysis you're using.
Any suggestions on how to grab a trace of these events? They are fairly
random and infrequent. I've been thinking about running TimeMachine,
but haven't had time to play with it.
> As you can see, at 11:27, Bro thinks 100 hosts were scanned on
> 5060/udp. But the conn.log and flowscan data only show one host being
> scanned. Any ideas why this alert thinks 100 hosts are being hit when
> it is one host with a single SYN?
It might be the ConnectionCompressor, but I'm not 100% sure what the
conn.log semantics are when the ConnectionCompressor is used. Robin will
know this.
FYI, from ConnCompressor.cc:
// The basic model of the compressor is to wait for an answer before
// instantiating full connection state. Until we see a reply, only a minimal
// amount of state is stored. This has some consequences:
//
// - We try to mimic TCP.cc as close as possible, but this works only to a
// certain degree; e.g., we don't consider any of the wait-a-bit-after-
// the-connection-has-been-closed timers. That means we will get differences
// in connection semantics if the compressor is turned on. On the other
// hand, these differences will occur only for not well-established
// sessions, and experience shows that for these kinds of connections
// semantics are ill-defined in any case.
//
// - If an originator sends multiple different packets before we see a reply,
// we lose the information about additional packets (more precisely, we
// merge the packet headers into one). In particular, we lose any payload.
// This is a major problem if we see only one direction of a connection.
// When analyzing only SYN/FIN/RSTs this leads to differences if we miss
// the SYN/ACK.
//
// To avoid losing payload, there is the option cc_instantiate_on_data:
// if enabled and the originator sends a non-control packet after the
// initial packet, we instantiate full connection state.
//
// - We lose some of the information contained in initial packets (e.g., most
// IP/TCP options and any payload). If you depend on them, you don't
// want to use the compressor.
//
// Optionally, the compressor can take care only of initial SYNs and
// instantiate full connection state for all other connection setups.
// To enable, set cc_handle_only_syns to true.
//
// - The compressor may handle refused connections (i.e., initial packets
// followed by RST from responder) itself. Again, this leads to differences
// from default TCP processing and is therefore turned off by default.
// To enable, set cc_handle_resets to true.
//
// - We don't match signatures on connections which are completely handled
// by the compressor. Matching would require significant additional state
// w/o being very helpful.
//
// - Trace rewriting doesn't work if the compressor is turned on (this is
// not a conceptual problem, but simply not implemented).
> Any suggestions on how to grab a trace of these events? They are fairly
> random and infrequent.
The usual way is to run bro with -w trace to generate a trace file of the
traffic it analyzes. I sometimes run with (separate) full packet recording
using tcpdump, because -w files don't always include everything Bro captured
(there are mechanisms to not record some packets to it in an attempt to
save space).
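
If disk space allows, I believe there is also a knob (name from memory, so
double-check it) that forces every packet Bro processes into the -w file:

redef record_all_packets = T;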
As you note, the Time Machine is another possibility.
Finally, Justin's observation about UDP is a good one. What flags and
analyzer scripts are you using when running Bro?
> The usual way is to run bro with -w trace to generate a trace file of the
> traffic it analyzes. I sometimes run with (separate) full packet recording
> using tcpdump, because -w files don't always include everything Bro captured
> (there are mechanisms to not record some packets to it in an attempt to
> save space).
I am running a cluster on a span port that is receiving upwards of 1
Gbps. I'm guessing the -w would quickly fill my disk. I guess I should
try to recreate the traffic myself.
> Finally, Justin's observation about UDP is a good one. What flags and
> analyzer scripts are you using when running Bro?
I wasn't thinking about UDP not having a handshake. I saw the S0 and
assumed it meant a SYN; I see now that it just means a connection attempt.
It appears that conn.log does log UDP flows, so if there were 100+ scans
from that IP address, those should have shown up in conn.log, right?
I'm running a majority of the default scripts that are included in the
default cluster configuration, Seth's scripts, some of my own, and a few
others that I've collected. I have the capture filter set to ip.
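
(For reference, the filter is set with something along these lines; the table
key is just a label I picked:)

redef capture_filters += { ["all-ip"] = "ip" };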
Actually, it means that 100 distinct hosts have been scanned, and that the
*last* attempt, the one that triggered the alert, was on port 5060; the
earlier attempts were not necessarily on that port.
When you were checking conn.log, did you filter for all connections
involving that IP or just those on port 5060?
That would explain it. I'm guessing this machine was running some sort of
P2P software such as Skype. Is there a way to change the scan detector so it
only fires an alert when 100 hosts have been scanned on a single port?
P2P-type applications seem to trigger a lot of these scan notifications.
The other ones I see a lot involve the Apple servers; maybe that's people
connecting to them for updates?
No, the script doesn't currently provide that. The problem is that it
would require keeping quite a bit more state. I know it would be useful,
though; others have run into similar problems already. Perhaps we should
think about adding that.
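
To give an idea of the extra state involved, here is a rough, untested sketch
of per-port tracking. The names (AddressScanOnPort, port_scan_threshold,
port_scanned) are made up, the counting is far simpler than what scan.bro
actually does, and the notice plumbing may need adjusting for your Bro
version:

redef enum Notice += { AddressScanOnPort };

const port_scan_threshold = 100 &redef;

# Distinct responders seen per (originator, service port).
global port_scanned: table[addr, port] of set[addr] &write_expire = 1 day;

event new_connection(c: connection)
    {
    local orig = c$id$orig_h;
    local svc = c$id$resp_p;

    if ( [orig, svc] !in port_scanned )
        port_scanned[orig, svc] = set();

    add port_scanned[orig, svc][c$id$resp_h];

    if ( length(port_scanned[orig, svc]) == port_scan_threshold )
        NOTICE([$note=AddressScanOnPort, $src=orig,
                $msg=fmt("%s has scanned %d hosts on port %s",
                         orig, port_scan_threshold, svc)]);
    }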