inbound PortScans that aren't really...

I'm running a minimal set of BRO (1.3.2) policies, scan.bro plus a few others, on the Fermilab traffic. I see a lot of inbound scans that appear to be bogus. For example...

1190841523.673433:PortScan:NOTICE_ALARM_ALWAYS::216.7.172.212::216.7.172.212:80/tcp::::::216.7.172.212 has scanned 50 ports of 131.225.22.131::@9765

This notice seems to be the result of an internal host visiting a web site (e.g., the PTR record for 212.172.7.216.in-addr.arpa points to forums.snapstream.com): the web browser increments its source port for each TCP connection it opens to the destination web server on port 80. To scan.bro, this looks like the remote system is port scanning the internal host (an inbound scan).
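
To make the failure mode concrete, here is a toy Python sketch of the kind of distinct-port counting involved (this is not scan.bro's actual code; the names and the 50-port threshold are only illustrative). Once the direction of the connections is taken backwards, every ephemeral source port the browser uses looks like another port probed on the internal host:

# Toy model, NOT scan.bro: count distinct destination ports per (scanner, victim)
# pair and report once an assumed threshold of 50 is reached.
from collections import defaultdict

PORT_SCAN_THRESHOLD = 50                   # illustrative threshold
ports_seen = defaultdict(set)              # (scanner, victim) -> distinct ports

def record_connection(orig, resp, resp_port):
    ports_seen[(orig, resp)].add(resp_port)
    if len(ports_seen[(orig, resp)]) == PORT_SCAN_THRESHOLD:
        print("%s has scanned %d ports of %s" % (orig, PORT_SCAN_THRESHOLD, resp))

# The browser really connects from 131.225.22.131:<ephemeral> to 216.7.172.212:80.
# With originator and responder swapped, each new ephemeral port counts as yet
# another port "scanned" on the internal host, producing the notice above:
for ephemeral in range(50000, 50050):
    record_connection("216.7.172.212", "131.225.22.131", ephemeral)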

Have I missed a configuration in scan.bro that will ignore this?

Thanks,
Randy Reitz

Can you send me a trace of one of these scans? (Just TCP control
packets is fine if there's content you can't pass on).

Robin


-- Robin Sommer * Phone +1 (510) 931-5555 * robin@icir.org
LBNL/ICSI * Fax +1 (510) 666-2956 * www.icir.org

We have a free copy of splunk indexing the /usr/local/bro/logs/* files. Using splunk provides an easy way to retrieve data from all of the BRO files - conn, notice, info, etc. Tim Rupp did this. He's available for hire!

I saw an outbound scan report today and used this splunk command ...

. /opt/splunk/bin/setSplunkEnv; splunk search "FER.MI.LAB.IP endtime::10/01/2007:14:37:19 searchtimespanminutes::10 maxresults::1000" | cf > ~/h/bro_scan_question.txt

I've attached the file. I know, you don't need all 1000 lines, but hey?

At the top of the file is some stuff you won't recognize ...

Oct 1 14:37:19:AddressDropped:NOTICE_ALARM_ALWAYS::FER.MI.LAB.IP:::::::::dropping address 131.225.107.90 (131.225.107.90 has scanned 250 ports of 195.56.77.182):
Oct 1 14:37:19 ? 195.56.77.182 FER.MI.LAB.IP https 1218 443 tcp ? ? S0 X cc=1
Oct 1 14:37:19 ? 195.56.77.182 FER.MI.LAB.IP https 64944 443 tcp ? ? S0 X cc=1
Oct 1 14:37:19 0.000000 195.56.77.182 FER.MI.LAB.IP https 64937 443 tcp ? 0 SHR X
Dec 31 18:00:07 <- I don't know where this came from
   Create_events(dev): IP='FER.MI.LAB.IP' with 1 issues <- This is my code that creates
   Create_events(dev): issues['FER.MI.LAB.IP'] is <type 'list'> <- an event in our TIssue tracking
     save_issue:oid=132843792 -> issue_id=1751 <- system
Oct 1 14:37:19 AddressDropped dropping address FER.MI.LAB.IP (FER.MI.LAB.IP has scanned 250 ports of 195.56.77.182) <- message from scan.bro
Oct 1 14:37:19 PortScan FER.MI.LAB.IP has scanned 250 ports of 195.56.77.182
Oct 1 14:37:18 1.007632 195.56.77.182 FER.MI.LAB.IP https 1209 443 tcp ? ? RSTO X @20572 <- here you see that, against the
Oct 1 14:37:18 ? 195.56.77.182 FER.MI.LAB.IP https 64938 443 tcp ? ? OTH X cc=1 <- web server running on FER.MI.LAB.IP,
Oct 1 14:37:18 ? 195.56.77.182 FER.MI.LAB.IP https 64936 443 tcp ? ? OTH X cc=1 <- the web browser (or whatever) is
Oct 1 14:37:17 ? 195.56.77.182 FER.MI.LAB.IP https 64927 443 tcp ? ? S1 X <- making requests with a different
Oct 1 14:37:17 ? 195.56.77.182 FER.MI.LAB.IP https 64932 443 tcp ? ? OTH X cc=1 <- source port. So scan.bro's counter
Oct 1 14:37:17 ? 195.56.77.182 FER.MI.LAB.IP https 64926 443 tcp ? ? S0 X cc=1 <- increases with each connection and
Oct 1 14:37:17 ? 195.56.77.182 FER.MI.LAB.IP https 64925 443 tcp ? ? OTH X cc=1 <- reports a port scan???

Here is the file...

bro_scan_question.txt (83.9 KB)

> Can you send me a trace of one of these scans? (Just TCP control
> packets is fine if there's content you can't pass on).
...
> We have a free copy of splunk indexing the /usr/local/bro/logs/*
> files. Using splunk provides an easy way to retrieve data from all
> of the BRO files - conn, notice, info, etc. Tim Rupp did this. He's
> available for hire!
>
> I saw an outbound scan report today and used this splunk command ...

To figure this out, we really need a raw trace. The reason is the appearance
of a bunch of connections with state given as "OTH". Those reflect a
non-standard connection establishment (often due to Bro missing the beginning
of the connection, or multi-pathing, or the packet filter reordering SYNs
with SYN ACKs), which are probably what's confusing the scan detector about
the direction of the activity.
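
As a toy sketch of the direction problem (this is not Bro's actual connection-processing code, and the state labels are only loosely modeled on the conn.log ones): if the first packet seen for a connection is not the client's SYN, a naive "first sender is the originator" guess labels the server as the originator, and the client's ephemeral ports then look like ports being scanned:

# Toy sketch, not Bro's real logic: guess originator/responder from the first
# control packet seen.  A clean SYN gives the right answer; a missed or
# reordered SYN (SYN-ACK seen first, or no handshake at all) can flip it.

def classify_first_packet(src, dst, flags):
    """Return (originator, responder, rough_state) from the first packet seen."""
    if flags == {"SYN"}:
        return src, dst, "S0"      # handshake start seen: src really is the client
    if flags == {"SYN", "ACK"}:
        return src, dst, "OTH"     # SYN missed: src is actually the server
    return src, dst, "OTH"         # no handshake seen at all: pure guesswork

# Normal capture: the client's SYN arrives first, direction is right.
print(classify_first_packet("195.56.77.182", "FER.MI.LAB.IP", {"SYN"}))
# SYN missed or reordered behind the SYN-ACK: the https server gets labelled
# originator, and its peer's ephemeral ports become the "scanned" ports.
print(classify_first_packet("FER.MI.LAB.IP", "195.56.77.182", {"SYN", "ACK"}))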

You can anonymize a raw trace using ipsumdump -A. Alternatively, you
could run Bro on it using "record_state_history=T" at the command line
to turn on connection state history tracking, which would probably let us
infer what's going on.
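
If ipsumdump isn't at hand, a rough Python/Scapy sketch can approximate the same kind of consistent address remapping (file names and the 10.0.0.x pseudonyms below are just placeholders, and it assumes the trace contains fewer than 255 distinct hosts):

# Rough alternative to "ipsumdump -A": rewrite every IP address in a pcap to a
# consistent pseudonym before sharing the trace.  Requires scapy.
from scapy.all import rdpcap, wrpcap, IP, TCP

mapping = {}

def anon(addr):
    # Hand out 10.0.0.1, 10.0.0.2, ... in order of first appearance.
    if addr not in mapping:
        mapping[addr] = "10.0.0.%d" % (len(mapping) + 1)
    return mapping[addr]

packets = rdpcap("trace.pcap")               # placeholder input file
for pkt in packets:
    if IP in pkt:
        pkt[IP].src = anon(pkt[IP].src)
        pkt[IP].dst = anon(pkt[IP].dst)
        del pkt[IP].chksum                   # force checksum recomputation on write
        if TCP in pkt:
            del pkt[TCP].chksum
wrpcap("trace-anon.pcap", packets)           # placeholder output file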

    Vern