
How to recognize a bug in software rather than an error by the tester?

I'm currently setting up an experiment to investigate how modifying a certain testing technique affects the number of bugs found by the tester. I plan to use a library of methods into which I have seeded a specific number of programming mistakes that manifest as bugs when the methods are executed. Through an Ant script I run all the JUnit tests and parse the results from the generated test reports (test-suite info, number of passes/failures/errors, etc.).

My question is: do you have any idea how to recognize whether a failure (in one of the tests) is actually a bug found by the tester, or whether she has made an error in her test?

Example: Let us say that I have a simple method like:

public int multi(int n1, int n2) { return n1 * n2; }

And the tester is presented with the requirement that: "The multi function takes two ints and returns the product of the two".

If she writes a test case like:

assertEquals(8, multi(5,10));

This would result in a test failure, but it is not a bug; rather, the tester has written an incorrect test case.

If I instead had the following method, with a seeded mistake (an addition of 1 in the multiplication statement):

public int multiBug(int n1, int n2) { return (n1 * n2) + 1; }

If the tester writes a test case as:

assertEquals(8, multiBug(2,4));

She would have found the bug, since she expects the correct result but the test still fails.

The reason I want to do this dynamically, rather than analyzing the results after the experiment, is that I want to give the tester feedback during the testing session.

Does anyone have an idea how this problem could be tackled? Could one make a "double call" when the bug-seeded method is invoked, to verify the result against the correct method? So if the tester tests multiBug, I would call the multi function with the same parameters and compare the results.
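The "double call" idea from the question can be sketched as follows; the class and method names here are illustrative, and the sketch assumes the experiment harness knows the correct counterpart of every seeded method. A failing assertion is classified by replaying the same inputs against the reference implementation: if the tester's expected value matches the reference result, the failure points at the seeded bug; otherwise the expectation itself was wrong.

```java
// Hypothetical oracle check for the experiment described above.
public class OracleCheck {

    // Correct reference implementation.
    static int multi(int n1, int n2) { return n1 * n2; }

    // Seeded buggy variant (mistake: +1 in the multiplication statement).
    static int multiBug(int n1, int n2) { return (n1 * n2) + 1; }

    /**
     * Classifies a failing assertEquals(expected, multiBug(n1, n2)):
     * the expectation is replayed against the correct method.
     */
    static String classifyFailure(int expected, int n1, int n2) {
        int reference = multi(n1, n2);
        return (expected == reference) ? "BUG_FOUND" : "TESTER_ERROR";
    }

    public static void main(String[] args) {
        // assertEquals(8, multiBug(2, 4)) fails; expectation matches the
        // reference result 8, so the tester found the seeded bug.
        System.out.println(classifyFailure(8, 2, 4));   // BUG_FOUND
        // assertEquals(8, multiBug(5, 10)) fails; the reference result is 50,
        // so the tester's expectation is wrong.
        System.out.println(classifyFailure(8, 5, 10));  // TESTER_ERROR
    }
}
```

In a real harness this classification would run inside the Ant/JUnit report parsing step, once per failed assertion, rather than in a `main` method.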

Remember that I know exactly which bugs are present in each method.

Ciao!

I think it is impossible to detect a bug in a test if the expectation of the test is wrong. To find such a bug, someone would need a different expectation, which could itself be wrong, leading to an infinite cascade of expectations.

Instead, I suggest treating the tests as the specification / the requirements of the project. What I mean is: remove any redundant source of requirements that the tests are based on, since those may lead to bugs in the tests. Treat the tests as the second most specific specification, after the code itself, which is the most specific specification of an application.

Then apply Test Driven Development at its best:

1. For each requirement the application has to fulfill, write the test first.
2. Generate all classes and methods needed to make the code compile, nothing more.
3. Watch the test fail. This is most important: if it doesn't fail, it is not a test.
4. Provide the most stupid implementation that makes the test succeed. It's very important to keep the implementation simply stupid; otherwise you may later run into tests that succeed without modifying or adding any code, which is a bad sign.

Each test should tackle exactly one aspect, no less and no more, and it should make as few assumptions as possible; otherwise it is testing the implementation rather than the requirements.
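The red-green cycle described above can be sketched without a test framework (the class and method names are illustrative; a plain `AssertionError` stands in for JUnit's `assertEquals`):

```java
// Minimal TDD sketch: the test is written first and acts as the requirement.
public class TddSketch {

    // Step 2 would be a stub that only compiles, e.g.:
    //   static int multi(int n1, int n2) { return 0; }
    // With that stub the test below fails (red).

    // Step 4: the simplest implementation that makes the test pass (green).
    static int multi(int n1, int n2) { return n1 * n2; }

    // Step 1: the test, encoding the requirement
    // "multi takes two ints and returns their product".
    static void testMultiReturnsProduct() {
        if (multi(2, 4) != 8) {
            throw new AssertionError("expected 8, got " + multi(2, 4));
        }
    }

    public static void main(String[] args) {
        testMultiReturnsProduct();
        System.out.println("green");
    }
}
```

Swapping the stub back in for the real body reproduces the "watch the test fail" step; only then is the passing run meaningful.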

I'll stop here, because if I continued, this would turn into a blog post or tutorial :-)
