Doc generation tests fail

Jon,

after merging in the logging code yesterday, the doc tests now fail,
and I'm not quite sure why. Here's the output:

    > bro --doc-scripts ../testing/btest/doc/autogen-reST-enums.bro
    [...]
    Documenting source: /home/robin/bro/master/policy/logging.bro
    Created reST document: logging.rst
    /home/robin/bro/master/policy/logging.bro, line 88: error: syntax error, at or near "}"

The interesting thing is that running without --doc-scripts doesn't
report any error:

    > bro ../testing/btest/doc/autogen-reST-enums.bro
    [no output]

Is there markup that can't be parsed?

Robin

/home/robin/bro/master/policy/logging.bro, line 88: error: syntax
error, at or near "}"
...
Is there markup that can't be parsed?

Here's what it's choking on:

    ## Information passed into rotation callback functions.
    type RotationInfo: record {
        writer: Writer; ##> Writer.
        path: string; ##> Original path value.
        open: time; ##> Time when opened.
        close: time; ##> Time when closed.
    };

That comment markup is using the wrong angle bracket: replace "##>" with "##<" and it should work. The current markup results in a syntax error because the parser interprets that last "##>" comment as a "##" token, which has to precede a record field declaration, and there are no more field declarations in that record.
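For reference, the same record with the corrected "##<" markup should look like this:

    ## Information passed into rotation callback functions.
    type RotationInfo: record {
        writer: Writer; ##< Writer.
        path: string;   ##< Original path value.
        open: time;     ##< Time when opened.
        close: time;    ##< Time when closed.
    };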

Side note: the "doc.autogen-reST-example" test might still fail because (1) there's a missing period in one place in the baseline, and (2) the ordering of set/table values can change between runs. I fixed those last night in my topic/jsiwek/doc-framework branch. I'm not sure if you want to try cherry-picking that commit or if you just want to wait until merging that branch again.

Also, these logging tests are failing for me:

    logging.remote-types ... failed
    logging.remote ... failed
    logging.rotate-custom ... failed
    logging.rotate ... failed
    logging.types ... failed

I put relevant `btest -D` output here:
http://www.ncsa.illinois.edu/~jsiwek/tmp/bro_logging_test_failures

- Jon

That comment markup is using the wrong angle bracket. Replace "##>"
with "##<" and it should work.

Ah. :-)

  The reason why the current markup results in a syntax error is
  because the parser will interpret that last "##>" comment as a "##"

Ok, two thoughts:

    - is there a way to produce an error message that makes clear
    it's a problem with the doc markup? Otherwise this can be quite
    confusing, as one would normally just ignore comments when looking
    for errors the parser is reporting.

    - I'm wondering whether Bro should report this also when not
    running in doc mode. Otherwise, such things are hard to catch.

    On the other hand, I'm not sure I want to enforce rules on
    comments for cases where the doc generation isn't used.

    Perhaps a middle ground: should we have a test that ensures that
    all scripts we ship do correctly parse in doc mode? The existing
    tests reported logging.bro but only because it's a central one.
    Ones that are less used will be harder to find (and will in
    particular mess up the auto-generated pages on the web server)

Side note: the "doc.autogen-reST-example" test might still fail because (1)
there's a missing period in one place in the baseline, and (2) the ordering
of set/table values can change between runs.

In theory, that shouldn't be the case as long as the RNG's seed is
set, which btest.cfg does. (But see below for practice).

I'm not sure if you want to try cherry-picking that commit or if you
just want to wait until merging that branch again.

Fine waiting.

http://www.ncsa.illinois.edu/~jsiwek/tmp/bro_logging_test_failures

Funny, some of that looks like a timezone difference! :-)

And the tables do actually come out in a different order here as well.
I'm wondering whether seeding the RNG isn't sufficient. It has been in
the past, AFAICR, but I guess the same order isn't guaranteed *across*
systems but only within a single system. But then we have a problem
here ...

I saw you added a canonifier script for (2) above, but I'm guessing
that works only in this specific case? (Haven't looked closer at it).

Robin

Maybe we should just have it run in doc mode on one of the NMI platforms to go through Jon's tests daily.

- is there a way to produce an error message that makes clear
it's a problem with the doc markup? Otherwise this can be quite
confusing, as one would normally just ignore comments when looking
for errors the parser is reporting.

Not sure how I can get it to be 100% sure a given parse error is because of doc markup, but I can easily add a hint to the message when in doc mode to remember to check ## style comments for syntax errors.

- I'm wondering whether Bro should report this also when not
running in doc mode. Otherwise, such things are hard to catch.

Right now it's using flex start conditions to only send doc-related tokens to the parser in certain places (enum/record bodies). If instead I always send tokens, then this type of error could be reported when not running in doc mode. But that would also probably mean that we can't re-use the "##" syntax in enum/record bodies -- we'd need yet another style.
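To illustrate the two styles described above: a "##" comment documents the field declaration that follows it, while "##<" documents the field that precedes it on the same line. A minimal sketch (hypothetical record and field names):

    type Example: record {
        ## Documents the *following* field.
        ts: time;
        uid: string;  ##< Documents the *preceding* field.
    };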

Perhaps a middle ground: should we have a test that ensures that
all scripts we ship do correctly parse in doc mode? The existing
tests reported logging.bro but only because it's a central one.
Ones that are less used will be harder to find (and will in
particular mess up the auto-generated pages on the web server)

I think maybe this can be covered by a build test on NMI that checks `make doc` doesn't have any error.

> the ordering of set/table values seems to be able to change between runs.

In theory, that shouldn't be the case as long as the RNG's seed is
set, which btest.cfg does. (But see below for practice).

And the tables do actually come out in a different order here as well.
I'm wondering whether seeding the RNG isn't sufficient. It has been in
the past, AFAICR, but I guess the same order isn't guaranteed *across*
systems but only within a single system. But then we have a problem
here ...

I saw you added a canonifier script for (2) above, but I'm guessing
that works only in this specific case? (Haven't looked closer at it).

Yeah, it's really crappy; not generalizable. I think we need something better (but I'm not sure where to start looking to fix the general problem).

- Jon

Yes. BTW, are we testing this, Don?

Not sure how I can get it to be 100% sure a given parse error is
because of doc markup, but I can easily add a hint to the message when
in doc mode to remember to check ## style comments for syntax errors.

That sounds good enough.

I think maybe this can be covered by a build test on NMI that checks
`make doc` doesn't have any error.

Ack, except that I'd also like to see that when running tests locally
already. Can we write a btest that does just the relevant subpart of
"make doc"? (e.g., doesn't need Sphinx).

I think we need something better (but I'm not sure where to start
looking to fix the general problem).

Would it make sense to provide our own fully predictable (and simple)
RNG implementation (stolen from somewhere) that kicks in only when a
seed is explicitly set?

Robin

Ack, except that I'd also like to see that when running tests locally
already. Can we write a btest that does just the relevant subpart of
"make doc"? (e.g., doesn't need Sphinx).

Seems reasonable. I'll work on that.
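One way such a test could look as a btest script — just a sketch, assuming the "@TEST-EXEC" directive and "%INPUT" substitution that btest provides, with a hypothetical record as input:

    # @TEST-EXEC: bro --doc-scripts %INPUT

    type Example: record {
        x: count;  ##< A documented field.
    };

That would exercise the doc-mode parse without needing Sphinx at all.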

FYI, I'm thinking about rewriting how `make doc` does things almost entirely. The scripts I did before don't seem like they're robust enough to deal with some of these new requirements we're working out.

Would it make sense to provide our own fully predictable (and simple)
RNG implementation (stolen from somewhere) that kicks in only when a
seed is explicitly set?

Probably worth looking at.

- Jon

We're not testing this yet. I added a note to my task list to run "make doc" at NMI. If we develop a BTest that validates docs, then this becomes moot.

Don

Which it looks like we are now.

Yes, but I'll go ahead and test "make doc". I can remove it once we have a BTest for it (not sure when that will happen), and even if I don't, it's a harmless redundancy to test both ways.

Don

And it's testing more, like the Sphinx setup, so that's still helpful.

Robin

Do we want to test the Sphinx setup on NMI B&T? That would add a bunch more dependencies to all of these targets.

Do we want to test the Sphinx setup on NMI B&T? That would add a bunch more dependencies to all of these targets.

It should just be adding 1 dependency on Sphinx to the relevant target (currently `make doc`, but that will probably be subdivided into smaller targets per earlier discussion). And it should be possible to just use easy_install to install Sphinx into the home directory. Doesn't seem like a huge hassle to do it on NMI.

- Jon

Sounds good. Then I suggest we go forward with that.

Ok, I'll look into that. It's the only solution I can come up with.

Robin