
JUnit: same tests for different implementations and fine-grained result states

We have two different implementations of the same interface; think of them as a reference implementation and a production implementation. They are developed by different teams, and the goal is to get the same results from both.

The team creating the reference implementation has built a large set of JUnit-based test cases (currently ~700), and those unit tests are run frequently during development. We can run the same set of test cases against the production implementation.

Functionality of the production implementation is tested via regression testing. However, being able to run the unit tests against the production implementation gives us quick feedback on whether something got seriously broken each time we receive a new release of the production code.

But since certain functionality in the production release is missing, or results differ because of known bugs, not all tests pass with this implementation. This makes it hard to spot regressions early.

There are several categories here:

  • (A) test cases that are only meaningful for the reference implementation and will never be important for the production implementation

  • (B) test cases where only certain assertions have to be omitted when testing the production implementation (i.e. additional values reported by the reference implementation)

  • (C) test cases that are known not to work in the production implementation because development of certain features lags behind, but that should be included later

So far, we have these options:

  • Cluttering our code with if-statements surrounding assertions that only work in the reference implementation. This solves (B) but is hard to maintain.

  • Using assumeTrue. This is OK for (A), but gives the false impression that everything is OK in (B). Both approaches are sketched below.
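
For illustration, here is a minimal sketch of both options, assuming a hypothetical impl.under.test system property that selects the implementation (class and method names are placeholders):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class ComparisonTest {

        // Hypothetical switch telling the tests which implementation runs.
        private static final boolean REFERENCE =
                "reference".equals(System.getProperty("impl.under.test"));

        @Test
        public void commonBehaviour() {
            assertEquals(4, 2 + 2); // placeholder for assertions valid in both implementations

            // Option 1, for (B): guard the extra assertions with an if-statement.
            if (REFERENCE) {
                // assertions on values only the reference implementation reports
            }
        }

        @Test
        public void referenceOnlyBehaviour() {
            // Option 2, for (A): a failed assumption aborts the test, but the
            // default JUnit 4 runners report it as passed, which is what makes
            // this misleading for (B) and (C).
            assumeTrue(REFERENCE);
            // assertions that only make sense for the reference implementation
        }
    }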

What I'd like to have is:

  • Being able to skip certain tests based on a runtime condition, as with assumeTrue, but having them reported as skipped rather than successful, for (C) (see the runner sketch after this list)

  • Having more result states that take into account whether a test case is known to have worked before, giving

    • Success for a test case that worked before and still passes
    • Fixed for a test case that did not work before but passes now
    • Failure for a test case that did not work before and still fails
    • Regression for a test case that worked before but fails now
    • Skipped
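
For the skipped-not-successful part, JUnit 4 lets you build this yourself with a custom runner. A minimal sketch, assuming a hypothetical @NotInProductionYet annotation and the impl.under.test property from above:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.junit.runner.notification.RunNotifier;
    import org.junit.runners.BlockJUnit4ClassRunner;
    import org.junit.runners.model.FrameworkMethod;
    import org.junit.runners.model.InitializationError;

    // Hypothetical marker for category (C): features the production
    // implementation does not support yet.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface NotInProductionYet {}

    // Use with @RunWith(ConditionallySkippingRunner.class) on the test class.
    public class ConditionallySkippingRunner extends BlockJUnit4ClassRunner {

        private final boolean production =
                "production".equals(System.getProperty("impl.under.test"));

        public ConditionallySkippingRunner(Class<?> klass) throws InitializationError {
            super(klass);
        }

        @Override
        protected void runChild(FrameworkMethod method, RunNotifier notifier) {
            if (production && method.getAnnotation(NotInProductionYet.class) != null) {
                // Reported as ignored/skipped in Eclipse and in CI reports,
                // unlike a failed assumeTrue.
                notifier.fireTestIgnored(describeChild(method));
            } else {
                super.runChild(method, notifier);
            }
        }
    }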

Has anyone done something like that before, or is it even possible with JUnit (preferably in conjunction with the Eclipse JUnit plugin)?

To skip a test under a runtime condition, you can use a Filter and decide whether to run a test based on some aspect of it (its name, or better, an annotation such as @Development() or @Version() on the test method).

To use this to solve (B), you would need different test methods for each version: one for 3.1, one for 3.2, and so on. This may seem like it clutters your unit tests, but it actually makes it easier to pick out the tests that apply to 3.1.
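
A sketch of that approach, assuming a hypothetical @Version annotation and an impl.version system property (the test class and its methods are placeholders):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.junit.Test;
    import org.junit.runner.Description;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Request;
    import org.junit.runner.manipulation.Filter;

    // Hypothetical annotation naming the version a test method applies to.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Version {
        String value();
    }

    // JUnit 4 Filter: run a test only if its @Version matches the version
    // under test; methods without the annotation always run.
    class VersionFilter extends Filter {
        private final String version;

        VersionFilter(String version) {
            this.version = version;
        }

        @Override
        public boolean shouldRun(Description description) {
            Version v = description.getAnnotation(Version.class);
            return v == null || v.value().equals(version);
        }

        @Override
        public String describe() {
            return "tests applicable to version " + version;
        }
    }

    // Placeholder test class with one method per version, plus a common one.
    public class MyInterfaceTest {
        @Test @Version("3.1") public void extraValues31() { /* 3.1-specific assertions */ }
        @Test @Version("3.2") public void extraValues32() { /* 3.2-specific assertions */ }
        @Test public void commonBehaviour() { /* assertions valid in every version */ }
    }

    // Wiring it together, e.g. in a small launcher:
    class FilteredRun {
        public static void main(String[] args) {
            Request request = Request.aClass(MyInterfaceTest.class)
                    .filterWith(new VersionFilter(System.getProperty("impl.version", "3.2")));
            new JUnitCore().run(request);
        }
    }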

For the 'time machine' part of your question: it's very hard for JUnit to know whether a test has passed before. You would need to record the old results somewhere.

To analyse which tests have changed status (from passed to failed), run your JUnit tests in a systematic way, for instance via a CI system, and then save the results somewhere they can be post-processed to flag regressions. Surefire XML reports, for instance, are fairly easy to parse.
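
As a sketch of how little is needed, this reads one Surefire report (e.g. target/surefire-reports/TEST-FooTest.xml, passed as the first argument) and prints one status line per test case; diffing the output of two runs reveals regressions and fixes:

    import java.io.File;

    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Reads a Surefire XML report and prints "class#method -> STATUS" lines
    // that can be saved per build and compared across builds.
    public class SurefireReportReader {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File(args[0]));

            NodeList cases = doc.getElementsByTagName("testcase");
            for (int i = 0; i < cases.getLength(); i++) {
                Element tc = (Element) cases.item(i);
                String name = tc.getAttribute("classname") + "#" + tc.getAttribute("name");
                String status;
                if (tc.getElementsByTagName("failure").getLength() > 0
                        || tc.getElementsByTagName("error").getLength() > 0) {
                    status = "FAILED";
                } else if (tc.getElementsByTagName("skipped").getLength() > 0) {
                    status = "SKIPPED";
                } else {
                    status = "PASSED";
                }
                System.out.println(name + " -> " + status);
            }
        }
    }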

I don't know about the first part of your question, but as far as the multiple result states are concerned, you're probably going to need some sort of continuous integration. As far as I know, there is no way for JUnit to "know" whether your test cases have succeeded or failed in the past, so you're going to need to get this information from elsewhere.

Could you create a base class containing all the test cases known to work in both implementations, and extend it with two other classes containing the test cases particular to the production and reference implementations?

If it's more granular than that (for example, a test is quite long and 90% of it works in both the production and reference implementations), you could take the same approach but put the assertions that differ into methods overridden in the subclasses. (You'd have to create a number of abstract methods in the base class to support this.)
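
A sketch of that inheritance idea; Calculator and its two implementations are stand-ins for your actual interface, and in real code each test class would live in its own file and be public:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Stand-in for the shared interface and its two implementations.
    interface Calculator { int add(int a, int b); }
    class ReferenceCalculator implements Calculator { public int add(int a, int b) { return a + b; } }
    class ProductionCalculator implements Calculator { public int add(int a, int b) { return a + b; } }

    // Base class: the shared ~90% of each test, plus abstract hooks
    // for the assertions that differ between implementations.
    public abstract class AbstractCalculatorTest {

        protected abstract Calculator createCalculator();

        protected abstract void assertExtraValues(Calculator c);

        @Test
        public void addition() {
            Calculator c = createCalculator();
            assertEquals(4, c.add(2, 2)); // shared assertion
            assertExtraValues(c);         // implementation-specific assertions
        }
    }

    class ReferenceCalculatorTest extends AbstractCalculatorTest {
        @Override protected Calculator createCalculator() { return new ReferenceCalculator(); }
        @Override protected void assertExtraValues(Calculator c) {
            // assertions on the additional values only the reference reports
        }
    }

    class ProductionCalculatorTest extends AbstractCalculatorTest {
        @Override protected Calculator createCalculator() { return new ProductionCalculator(); }
        @Override protected void assertExtraValues(Calculator c) {
            // intentionally empty until the production implementation catches up
        }
    }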
