Bro Logging Error: count value too high for JSON

I am having some trouble with a Bro script we have. It handles the `tcp_packet` event and logs using the ASCII writer in JSON mode. As the subject indicates, I am getting the following error:

error: conn/Log::WRITER_ASCII: count value too large for JSON: 184467440718605600000

From the Bro manual I understand that the `count` data type is an unsigned 64-bit int, while `int` is a signed 64-bit int. From Bro's git repository and my error message, I understand that values larger than the signed 64-bit maximum cannot be written to JSON. In my Bro script, I printed out the `count` values passed to me in the `tcp_packet` event handler (SEQ, LEN, and ACK), and noticed that my SEQ numbers were the values Bro was having trouble serializing, as they were bigger than the signed int maximum. This raised the eyebrows of a team member smarter than myself, who reminded me that SEQ numbers are 32 bits long in TCP packets.

After changing the data types of the record fields I am logging to `int` and "downcasting" the `count` values, I no longer run into this problem… but then I also get negative sequence numbers in my results : )
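For reference, the relevant part of the setup looks roughly like this (a simplified sketch with made-up module, path, and field names, not my actual script):

```
# Simplified sketch, not the actual script: log the raw values handed to
# tcp_packet as counts, with the ASCII writer producing JSON.
module TCPLog;

redef LogAscii::use_json = T;

export {
	redef enum Log::ID += { LOG };

	type Info: record {
		seq: count &log;
		ack: count &log;
		len: count &log;
	};
}

event bro_init()
	{
	Log::create_stream(TCPLog::LOG, [$columns=Info, $path="tcp_packets"]);
	}

event tcp_packet(c: connection, is_orig: bool, flags: string,
                 seq: count, ack: count, len: count, payload: string)
	{
	Log::write(TCPLog::LOG, [$seq=seq, $ack=ack, $len=len]);
	}
```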

I wonder:

A) Am I doing something wrong?

B) There seems to be a related issue on the issue tracker (https://bro-tracker.atlassian.net/browse/BIT-1863), but I am thinking there might be some intricacies in how Bro generates sequence numbers for a given packet/pcap?

Bro is passing these values directly to the `tcp_packet` event handler, and I am doing no manipulation before printing out these too-large sequence numbers, which is why I am not attaching my Bro script.

Thanks in advance for your time,

Ian

> I printed out the `count` data types passed to me in the `tcp_packet` event handler (SEQ, LEN, and ACK), and noticed that my SEQ numbers were the values Bro was having trouble serializing, as they were bigger than the signed int maximum. This raised the eyebrows of a team member smarter than myself, who reminded me that SEQ numbers are 32 bits long in TCP packets.

The sequence numbers in that event are actually "relative" to the
starting sequence number. Bro internally tracks how many times the
32-bit sequence space may have wrapped around and so expands any given
sequence number in a packet out into where it would be located in a
larger 64-bit space before passing that value along into the
`tcp_packet` event.
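
Conceptually (this is just a sketch of the idea, not Bro's actual code), the expansion amounts to something like:

```
# Rough illustration, not Bro's implementation: place the raw 32-bit
# sequence number into 64-bit space using the number of observed wraps,
# then take it relative to the connection's starting sequence number.
function expand_seq(raw_seq: count, start_seq: count, wraps: count): count
	{
	# 4294967296 == 2^32
	return wraps * 4294967296 + raw_seq - start_seq;
	}
```

Note that if the assumed starting point is wrong, that subtraction can effectively go negative, which an unsigned 64-bit value would represent as something just below 2^64.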

> After changing the data types of the record fields I am logging to `int` and "downcasting" the `count` values, I no longer run into this problem… but then I also get negative sequence numbers in my results : )

> I wonder:
>
> A) Am I doing something wrong?

Maybe it's partly yes, partly no :slight_smile:

I think one way you might get around the JSON limitation in this case would be to log two separate numbers derived from dividing the sequence number by INT64_MAX. E.g. if you log both the quotient and the remainder, you can reconstruct the full, relative sequence number in the 64-bit sequence space later.
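
A rough sketch of that idea (the record and field names here are just illustrative):

```
# Sketch only: split the relative sequence number into a quotient and a
# remainder of INT64_MAX so that each logged count stays within what the
# JSON output can represent.
const int64_max: count = 9223372036854775807;

type SeqParts: record {
	seq_hi: count &log;   # seq / INT64_MAX
	seq_lo: count &log;   # seq % INT64_MAX
};

event tcp_packet(c: connection, is_orig: bool, flags: string,
                 seq: count, ack: count, len: count, payload: string)
	{
	local parts = SeqParts($seq_hi = seq / int64_max,
	                       $seq_lo = seq % int64_max);
	# Reconstruct later as: seq_hi * INT64_MAX + seq_lo
	print fmt("seq_hi=%d seq_lo=%d", parts$seq_hi, parts$seq_lo);
	}
```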

> error: conn/Log::WRITER_ASCII: count value too large for JSON: 184467440718605600000

Though in this case, it looks close to the upper bound for an unsigned
64-bit integer and I doubt a sequence number actually wrapped around
the 32-bit sequence space enough times to get that high. A guess
would be that Bro has gotten the starting sequence number wrong and so
incorrectly wrapped the full 64-bit space going backwards.

If you can provide an example pcap that produces sequence numbers like
that, it would be interesting to take a look and see if something
needs to be fixed/improved in Bro.

- Jon