[Bro-Commits] [git/bro] topic/robin/parallel-btest: Adding group "comm" to communication tests for parallelizing execution with new btest version. (32cb6d6)

It seems like TEST-GROUP is very similar to TEST-PROFILE (in the btest profile branch I'm working on)... any way to merge these?

--Gilbert

Does TEST-PROFILE limit a test to run only when a specific profile has
been selected, and be skipped if not? What's a use case for using it?

Btw, another question: the "transform" pass runs after the test. Can
we add another similar one that runs before the test executes? We
could then remove BTest's filters and use profiles instead I believe.

And: your README says "four scripts need to be defined": please make
them optional so that one can skip scripts that aren't needed for a
profile.

Robin

It seems like TEST-GROUP is very similar to TEST-PROFILE (in the btest
profile branch I'm working on)...

Does TEST-PROFILE limit a test to run only when a specific profile has
been selected, and be skipped if not?

At the moment, no. It doesn't seem like something that would be hard to do, though.

What's a use case for using it?

The primary use case at the moment is log testing. For example, if there were a 'dataseries' profile and an 'sql' profile, like so:

@TEST-PROFILE dataseries sql

the test containing this directive would run three times:

* once under the 'default' profile
* once under the 'dataseries' profile
* once under the 'sql' profile

This way, we get to re-use a subset of existing tests to exercise alternative logging formats.
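To make the expansion concrete, here's a rough sketch (not actual btest code; the function name and parsing are my own shorthand) of how a runner could compute the set of profiles a test runs under from its @TEST-PROFILE line, with 'default' always included:

```python
import re

def profiles_for(test_source):
    """Return the profiles a test should run under: 'default' plus
    any profiles listed on an @TEST-PROFILE line in the test source."""
    m = re.search(r"@TEST-PROFILE\s+(.+)", test_source)
    extra = m.group(1).split() if m else []
    return ["default"] + extra

print(profiles_for("# @TEST-PROFILE dataseries sql"))
# ['default', 'dataseries', 'sql']
```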

Btw, another question: the "transform" pass runs after the test. Can
we add another similar one that runs before the test executes? We
could then remove BTest's filters and use profiles instead I believe.

Sure.

And: your README says "four scripts need to be defined": please make
them optional so that one can skip scripts that aren't needed for a
profile.

Okay; in the absence of the 'supported' script, would it make sense to assume that the profile should always run?
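Something along these lines is what I have in mind (a sketch only; the 'supported' script name comes from my branch's README, the rest is illustrative):

```python
import os
import subprocess

def profile_supported(profile_dir):
    """If the profile directory has no 'supported' script, assume the
    profile can always run; otherwise let the script's exit status decide."""
    script = os.path.join(profile_dir, "supported")
    if not os.path.exists(script):
        return True
    return subprocess.call([script]) == 0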

--Gilbert

@TEST-PROFILE dataseries sql

Ah, then I misunderstood how you trigger activating a profile. I was
assuming there would be something like a command line argument to run
all tests with, say, profile "dataseries". Wouldn't that be better?
Then one wouldn't need to add a line to pretty much all tests (nor
touch them all when adding a new profile).

Okay; in the absence of the 'supported' script, would it make sense to
assume that the profile should always run?

Yes, I think so.

Robin

Yeah, but a lot of alternative logging targets would only really use a subset of the tests. Testing log rotation, for example, wouldn't make sense when dealing with an SQL backend. We could use groups to specify different classes of tests... but I'm afraid we'd run into granularity issues if we went that route (which is the reason I went with tags instead).

So, how about this: in each profile directory, we add a 'tests' file. This file contains a list of all the tests corresponding to a given profile. The exception here would be the default profile, for which the tests file would list all tests that *would not* run (since e.g. some SQL-specific tests might not make sense when dealing with vanilla log files, but it would be tedious to manually update this file every time we added a new test).

And yeah, you're right; there definitely needs to be a command line option to only run tests associated with a certain profile.

--Gilbert

Yeah, but a lot of alternative logging targets would only really use a
subset of the tests. Testing log rotation, for example, wouldn't make
sense when dealing with an SQL backend.

Is it more tests that use the profile, or more that don't? I'd suspect
the former, which makes listing all tests to be used with a profile
quite cumbersome, and also error-prone: for each new test, one would
need to add it to potentially a number of profiles.

So, how about this: in each profile directory, we add a 'tests' file.
This file contains a list of all the tests corresponding to a given
profile.

How about doing both: we add two files, "include" and "exclude", each
listing tests. If the first exists, only those are run. If the second
exists, all are run except those. And if both exist, "include except
exclude" is run. Then each profile can decide on its own what makes
more sense.

  The exception here would be the default profile, for which the tests
  file would list all tests that *would not* run (since e.g. some
  SQL-specific tests might not make sense when dealing with vanilla
  log files, but it would be tedious to manually update this file
  every time we added a new test).

I think the TEST-REQUIRES command is the better way to express that a
test is SQL-specific. Otherwise, the information gets separated from
the test itself.
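Roughly like this, i.e., the precondition travels with the test (the sqlite3 check and the script name are just made-up placeholders):

```
# @TEST-REQUIRES: which sqlite3
# @TEST-EXEC: bro -b %INPUT
```

If the TEST-REQUIRES command exits non-zero, the test is skipped rather than failed.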

Robin

How about doing both: we add two files, "include" and "exclude", each
listing tests. If the first exists, only those are run. If the second
exists, all are run except those. And if both exist, "include except
exclude" is run. Then each profile can decide on its own what makes
more sense.

Makes sense.

I think the TEST-REQUIRES command is the better way to express that a
test is SQL-specific. Otherwise, the information gets separated from
the test itself.

Cool.

--Gilbert