Hui Lin_Out of Bound Exception from flowunit

Hi,

While writing a flowunit for the DNP3 protocol, I ran into a weird situation. Here is the record type for the flowunit:

type Dnp3_Test = record {
        start: uint16;
        len: uint8;
        ctrl: uint8;
        dest_addr: uint16;
        src_addr: uint16;
        rest: bytestring &restofdata;
} &byteorder = bigendian
  &length = (8 + len - 5 - 1);

I am writing this type for step-by-step debugging. There are no compile or link errors, but when I parse a traffic dump with this protocol analyzer, binpac generates the following exception:

1217561494.208541 weird: binpac exception: out_of_bound: Dnp3_Test:src_addr: 8 > 3

8 is the size of all the data before the bytestring "rest", and 3 is the size of "start" and "len". "len" is used to define the &length of this record. It seems that after "len" you cannot define extra fields such as "ctrl" and "dest_addr"; doing so generates the above exception. However, if you change all fields after "len" to bytestring, the exception does not happen. But I still want to keep those fields as integer types. Any suggestion to solve this problem?
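For what it's worth: if "len" is the DNP3 link-layer LENGTH field, it counts the ctrl, dest_addr, and src_addr octets (5 bytes) plus the user data, with CRC octets excluded. Under that assumption the whole record is 8 + (len - 5) bytes, and the extra -1 would declare the record one byte too short. The following is a sketch resting on that guess about the protocol, not a tested fix:

type Dnp3_Test = record {
        start: uint16;
        len: uint8;
        ctrl: uint8;
        dest_addr: uint16;
        src_addr: uint16;
        rest: bytestring &restofdata;
} &byteorder = bigendian
  # 8 fixed bytes plus (len - 5) bytes of user data after src_addr.
  &length = 8 + (len - 5);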

Best,

Hui

It looks like you probably want to do: &length=(8+len)

You also forgot to explain what the "5" is for, and it looks like binpac tried to parse 5 bytes too far (8 > 3). From a broader perspective, if you have framing around this parse unit (&length applied to a parent unit), it probably makes more sense to define this record like this:

type Dnp3_Test = record {
        start: uint16;
        len: uint8;
        ctrl: uint8;
        dest_addr: uint16;
        src_addr: uint16;
        rest: bytestring &length=len;
} &byteorder = bigendian;

Binpac shouldn't have any problems with that as long as it can calculate the fully parsed record size from a parent record (this avoids complaints about incremental parsing).

  .Seth

Actually, the -5 comes from the meaning of "len", which is specified in the protocol itself. I also tried adding 5 to the &length of the record type, and it still generates the same exception. So I guess &length is not the overall length of the record, but the length before "rest".
Your second method, putting the length on the bytestring instead of the record, actually generates the incremental input warning.

Actually, I also considered defining "rest" as a uint8[], but I just don't know how to declare the array type in event.bif. How can I pass an array of uint8 as input to the event handler?

Ah, ok. So this is your "top level" data structure?

It just looks to me like you might be doing your field length calculation wrong. I'd try thinking about it a bit more.

Alternately, if all of the messages start with "start" and "len" like you have in the record that you sent, you could make a higher level container and apply the length there to provide yourself a framing unit. Like this...

type Dnp3_Head = record {
        start: uint16;
        len: uint8;
        # len-3 could very well be wrong since I'm probably misunderstanding the protocol.
        body: Dnp3_Test &length = len-3;
} &byteorder=bigendian;

type Dnp3_Test = record {
        ctrl: uint8;
        dest_addr: uint16;
        src_addr: uint16;
        # applying &length to the parent unit should allow us to use &restofdata
        rest: bytestring &restofdata;
} &byteorder = bigendian;

Actually, I also considered defining "rest" as a uint8[], but I just don't know how to declare the array type in event.bif. How can I pass an array of uint8 as input to the event handler?

I would probably try to avoid doing that unless the data makes sense as an array of ints.
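If the data really does make sense as a sequence of small integers, the binpac side can declare an array with an &until termination condition. A sketch of just the field declaration (it says nothing about how to get the array to an event in event.bif):

        rest: uint8[] &until($input.length() == 0);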

  .Seth

I was actually doing this before. There is still a problem when you put a uint8 field after this higher-level record. However, I found that right after the integer field you have to put a bytestring to eliminate this problem; I don't know why. So what I did was define a dummy variable of type bytestring with length 0, and it works.
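As I read the workaround described above, it amounts to something like this. The field name "dummy" is made up, and the exact placement of the zero-length bytestring is my interpretation of the description, not tested:

type Dnp3_Head = record {
        start: uint16;
        len: uint8;
        # Zero-length dummy bytestring right after the integer field;
        # reportedly needed to avoid the out_of_bound exception.
        dummy: bytestring &length = 0;
        body: Dnp3_Test &length = len - 3;
} &byteorder = bigendian;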