Required ports to open for a cluster?

OK, so I don't see this in any documentation on bro.org. I have a logger running on the same box as the manager, but I do not see any logs being generated in /data/bro/logs/current.

I am assuming this is because traffic is being dropped on the floor, since iptables is in a default-reject state? Where is the explicit list of ports that you need to punch holes for in either firewalld or iptables?

https://www.bro.org/sphinx/components/broctl/README.html

does not have them listed, nor any way to add an entry in node.cfg to pin a node's port to a specific number… Thanks!

It will be in the documentation for 2.5:

https://www.bro.org/sphinx-git/components/broctl/README.html#bro-communication
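
In the meantime, the short version is that the cluster nodes need to be able to reach each other over TCP on the ports broctl assigns. A minimal iptables sketch, assuming the default broctl port allocation (which, if I'm reading broctl.cfg right, starts at 47760/tcp) and a made-up worker subnet of 10.0.1.0/24; check what broctl actually assigned on your cluster before opening anything:

    # Allow inbound Bro cluster communication on the assumed default range.
    iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 47760:47780 -j ACCEPT

    # firewalld equivalent (same assumed range):
    firewall-cmd --permanent --add-port=47760-47780/tcp
    firewall-cmd --reload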

(Sorry, accidentally sent this just to Justin.)

Cool. I had punched the holes after running tcpdump on it for a while and seeing it trying to talk back. However, the one thing I don’t understand is that my logs aren't being written back to the logger host, even though communication is open.
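
For reference, this is roughly what I was watching with tcpdump (the interface name and port range are placeholders matching the assumed defaults above; substitute your own):

    # Watch for cluster traffic between the workers and the manager/logger.
    tcpdump -ni eth0 'tcp and portrange 47760-47780'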

/data/bro/logs/current

is empty on the logger. All I have there is an stderr.log and an stdout.log. Neither the workers on the logger machine itself, nor the remote host, are logging to that directory. Are they being kept somewhere else? I don't see them anywhere in the /data/bro/spool or /data/bro/logs directories…
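
For what it's worth, this is what I've been checking (paths reflect my install prefix, and the "logger" spool directory name assumes the node is literally named logger in node.cfg):

    # logs/current is normally just a symlink into the spool directory
    # of whichever node is doing the logging.
    ls -l /data/bro/logs/current
    ls /data/bro/spool/logger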

Just to compound the issue, no matter which host in the cluster I set as the logger destination, I get zero logs. I am running 2.5 beta1. I’ve disabled iptables all around to see if that was causing a problem, but it does not seem to be the case, as I have a pcap showing what appear to be Bro DNS logs attempting to go across the wire to the logger. They just aren’t being written…

Ah, solution found: use FQDNs. I ran into this problem before where I specified 127.0.0.1 for localhost and things broke. When will Bro support IP addresses in node.cfg properly? I would have thought that it would be in 2.5 :-)
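
For anyone who finds this thread later, the working node.cfg ended up looking roughly like this (hostnames are made up; the point is that every host= value must be something all nodes can resolve and reach, not 127.0.0.1):

    # host= must resolve the same way from every cluster node.
    [logger]
    type=logger
    host=logger.example.com

    [manager]
    type=manager
    host=logger.example.com

    [proxy-1]
    type=proxy
    host=logger.example.com

    [worker-1]
    type=worker
    host=sensor1.example.com
    interface=eth0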

It does. You can't use an IP address that does not work from all nodes.
If you have evidence otherwise, file a bug.

OK, so then if I have things segregated into different enclaves that can only talk to the logger and manager, are you saying this breaks the cluster?

If you're having trouble with cluster communication, that's probably why. Workers don't need to be able to connect to each other, but the other node types and the workers need to be able to see each other. If you use a hostname, it has to resolve properly to an IP that works everywhere. 127.0.0.1 or similar will not work.
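
A quick sanity check, run from every node, is to confirm that the name you put in node.cfg resolves to a non-loopback address that is actually reachable (the hostname and port below are the made-up examples from earlier in the thread):

    getent hosts logger.example.com    # should not come back as 127.0.0.1
    nc -vz logger.example.com 47760    # should report the port as open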