How do you unit test?

I've read a little bit about unit testing and was wondering how YOU unit test. Apparently unit testing is supposed to break a program down into very small "units" and test functionality from there.

But I'm wondering, is it enough to unit test a class? Or do you take it even further and unit test algorithms, formulas, etc.? Or do you broaden it to unit test ASP pages/functionality? Or do you not unit test at all?

An algorithm should live in a class, and that class should automatically be unit tested. A formula sits inside the class as a function, and it is unit tested too. Unit testing exercises behavior, state, and everything else that can be tested for the smallest unit of development. So yes, the algorithm is tested in all its detail. Later, when you have classes that use that class, you will write integration tests (often driven by the same unit-testing tools). This is the same thing at a higher level.
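For instance, a minimal sketch in C++ of what that looks like (the Projectile class and its formula are hypothetical, just to illustrate the idea):

    #include <cassert>
    #include <cmath>

    // Hypothetical class: the smallest unit under test, with a formula as a member function.
    class Projectile {
    public:
        explicit Projectile(double v0) : v0_(v0) {}
        // Formula: maximum height for launch speed v0 straight up, with g = 9.81.
        double maxHeight() const { return (v0_ * v0_) / (2.0 * 9.81); }
    private:
        double v0_;
    };

    int main() {
        // Unit tests for the formula, pinning down its behavior in detail.
        assert(Projectile(0.0).maxHeight() == 0.0);
        assert(std::fabs(Projectile(9.81).maxHeight() - 9.81 / 2.0) < 1e-9);
        return 0;
    }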

I use unit tests as a tool to measure whether something still works after I've made changes to the code (for example, refactoring, fixing a bug, adding an enhancement). Since I use Java, unit tests are largely automated using JUnit. I just invoke one command-line script and it runs hundreds of tests to verify that the code isn't broken.

I unit test features, not individual methods or classes. The overhead of writing and maintaining unit tests is not insignificant, and in general I don't recommend writing unit tests for every little bit of code. Unit testing features, however, is worthwhile, because features are what the client is paying you for.

These are generic guidelines I find useful for unit testing:

1) Identify Boundary Objects (Win/WebForms, CustomControls etc).

2) Identify Control Objects (Business layer objects)

3) Write unit tests at least for the control objects' public methods that are invoked by boundary objects.

This way you'll be sure you're covering the main functional aspects (features) of your app, and you don't run the risk of micro-testing (unless you want to).

I'm a pretty sloppy unit tester, but then I'm an academic, so most of my programs are scripts for my own use or more heavy-duty programs for my research. A lot of my work is on compilers, so classical unit testing is difficult: for example, it's not so easy to unit test a register allocator unless you have a compiler to wrap around it, and at that point you may as well just go to regression testing.

But there are some big exceptions. I do an enormous amount of scripting in Lua, which is divided into modules (a source file can be, and often is, a module). If I'm working on new stuff, say utilities for interacting with the shell, I'll just drop some unit tests inside the module itself, where they get run every time the module is loaded. Lua is so fast that usually this doesn't matter. Here are some examples:

assert(os.quote [[three]] == [[three]])
assert(os.quote [[three"]] == [['three"']])
assert(os.quote [[your mama]] == [['your mama']])
assert(os.quote [[$i]] == [['$i']])

If I'm a good dog I write some simple tests like these before I write the function.

The other thing I do with unit testing is that if it's anything hard, I test algebraic laws using QuickCheck, which is a random testing tool that has to be seen to be believed. It's the only tool I've ever used that makes unit testing fun. A link there is dangling, but you can find Tom Moertel's story about the ICFP programming contest on his blog.
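QuickCheck itself is a Haskell tool, but the flavor of testing an algebraic law against random inputs can be sketched in a few lines of C++ (a hand-rolled stand-in, not QuickCheck itself):

    #include <algorithm>
    #include <cassert>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> len(0, 100), val(-1000, 1000);

        // Algebraic law: reversing a vector twice yields the original vector.
        for (int trial = 0; trial < 1000; ++trial) {
            std::vector<int> xs(len(rng));
            for (int& x : xs) x = val(rng);

            std::vector<int> ys(xs);
            std::reverse(ys.begin(), ys.end());
            std::reverse(ys.begin(), ys.end());
            assert(ys == xs);  // a failure here hands you a counterexample
        }
        return 0;
    }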

Hope you find this helpful. QuickCheck has saved my bacon many times. Most recently, I tested code for a discrete cosine transform using exact rational arithmetic, then ported it to C!

We mostly write Java libraries for machines. A program is typically composed of over twenty libraries, so what we do is unit test each library. It is not an easy task, since the libraries are often tightly coupled to each other, which many times makes it impossible.

Our code is not as modular as we would like, but we must live with that for compatibility reasons; breaking the coupling would in many cases mean breaking compatibility too.

I test as much of the public interface as I can (I am using C++, but the language doesn't really matter). The most important aspect is writing tests when you write the code (immediately before or after). From experience I assure you that developing this way leads to more reliable code and makes it easier to maintain (changes that break tests will be obvious immediately).

For all projects I recommend you take testing into account from the very beginning - if you write a class that depends on another complicated class, then use an interface so you can 'mock' the more complicated objects when testing (database access, network access, etc.).
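A minimal C++ sketch of that idea (all names here are hypothetical): the class under test depends only on an interface, so the test can substitute a trivial mock for the real database access.

    #include <cassert>
    #include <string>

    // The complicated dependency, hidden behind an interface.
    struct UserStore {
        virtual ~UserStore() = default;
        virtual std::string lookupName(int id) = 0;
    };

    // Class under test: depends on the interface, not on a real database.
    class Greeter {
    public:
        explicit Greeter(UserStore& store) : store_(store) {}
        std::string greet(int id) { return "Hello, " + store_.lookupName(id) + "!"; }
    private:
        UserStore& store_;
    };

    // Mock used by the test: no database, no network, fully predictable.
    struct FakeStore : UserStore {
        std::string lookupName(int) override { return "Ada"; }
    };

    int main() {
        FakeStore fake;
        Greeter greeter(fake);
        assert(greeter.greet(1) == "Hello, Ada!");
        return 0;
    }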

Writing lots of tests will appear to slow you down but in reality, over the lifetime of a project, you'll spend less time fixing bugs.

Test often - if it can break, it will - and better it breaks when you're testing it than when a customer is trying to use it.

It is not enough to just unit test a class. Classes work together, and that has to be tested too.

There are more units than just classes:

  • modules,
  • layers,
  • frameworks.

And there are of course different forms of testing, e.g. integration and acceptance testing.

I test the things I find difficult, the things I think might change, interfaces and the things I've had to fix. And I mostly start with the test, trying to make sure I understand the problem I'm trying to solve.

Just because it compiles doesn't mean it runs! That's the essence of unit testing. Try the code out. Make sure it's doing what you thought it was doing.

Let's face it: if you bring over a matrix transform from MATLAB, it's easy to mess up a plus or minus sign somewhere. That sort of thing is hard to see. Without trying it out, you just don't know whether it will work correctly. Debugging 100 lines of code is a lot easier than debugging 100,000 lines of code.
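To make that concrete, here's a sketch (the rotate function is hypothetical): rotating a known point through 90 degrees and checking the result is exactly the kind of quick trial that catches a sign slip.

    #include <cassert>
    #include <cmath>

    // Rotate (x, y) by angle theta; a sign slip here is easy to make and hard to spot.
    void rotate(double theta, double& x, double& y) {
        const double x0 = x, y0 = y;
        x = std::cos(theta) * x0 - std::sin(theta) * y0;
        y = std::sin(theta) * x0 + std::cos(theta) * y0;
    }

    int main() {
        // Sanity check: rotating (1, 0) by 90 degrees must give (0, 1).
        const double pi = std::acos(-1.0);
        double x = 1.0, y = 0.0;
        rotate(pi / 2.0, x, y);
        assert(std::fabs(x) < 1e-12 && std::fabs(y - 1.0) < 1e-12);
        return 0;
    }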


Some folks take this to extremes. They try to test every conceivable thing. Testing becomes an end unto itself.

That can be useful later on during maintenance phases. You can quickly check to make sure your updates haven't broken anything.

But the overhead involved can cripple product development! And future changes that alter functionality can involve extensive test-updating overhead.

(It can also get messy with respect to multi-threading and arbitrary execution order.)


Ultimately, unless directed otherwise, my tests try to hit the middle ground.

I look to test at larger granularities, providing a means of verifying basic general functionality. I don't worry so much about every possible fencepost scenario. (That's what ASSERT macros are for.)

For example: when I wrote code to send/receive messages over UDP, I threw together a quick test to send/receive data using that class via the loopback interface. Nothing fancy. Quick, fast & dirty code. I just wanted to try it out, to make sure it was actually working before I built something on top of it.
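A quick-and-dirty loopback check in that spirit might look like the sketch below (POSIX sockets; the port is arbitrary, and this is an illustration of the approach, not the original code):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cassert>
    #include <cstring>

    int main() {
        // Receiver bound to the loopback interface on an arbitrary test port.
        int rx = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(12345);                 // hypothetical port
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");
        assert(bind(rx, (sockaddr*)&addr, sizeof(addr)) == 0);

        // Sender fires a datagram at the same address.
        int tx = socket(AF_INET, SOCK_DGRAM, 0);
        const char msg[] = "ping";
        assert(sendto(tx, msg, sizeof(msg), 0, (sockaddr*)&addr, sizeof(addr))
               == (ssize_t)sizeof(msg));

        // Did the same bytes come out the other side?
        char buf[64] = {};
        assert(recvfrom(rx, buf, sizeof(buf), 0, nullptr, nullptr)
               == (ssize_t)sizeof(msg));
        assert(std::strcmp(buf, "ping") == 0);

        close(tx);
        close(rx);
        return 0;
    }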

Another example: Reading in camera images from a Firewire camera. I threw together a quick&dirty GTK app to read the images, process them, and display them in realtime. Other folks call that integration testing. But I can use it to verify my Firewire interface, my Image class, my Bayer RGGB->RGB transform, my image orientation & alignment, even whether the camera was mounted upside down again. More detailed testing would only have been warranted if this had proven insufficient.

On the other hand, even for something as simple as:

template<class TYPE> inline TYPE MIN(const TYPE & x, const TYPE & y) { return x > y ? y : x; }
template<class TYPE> inline TYPE MAX(const TYPE & x, const TYPE & y) { return x < y ? y : x; }

I wrote a 1 line SHOW macro to make sure I hadn't messed up the sign:

  SHOW(MIN(3,4));  SHOW(MAX(3,4));

All I wanted to do was verify that it was doing what it should in the general case. I worry less about how it handles NaN / +-Infinity / (double,int) than about whether one of my colleagues decided to change the argument order and goofed.


Tool wise, there's a lot of unit-testing stuff out there. If it helps you, more power to you. If not, well you don't really need to get too fancy.

I'll often write a test program that dumps data into and out of a class, and then prints it all out with a SHOW macro:

#define SHOW(X)  std::cout << # X " = " << (X) << std::endl

(Alternatively, many of my classes can self-print using a built-in operator<<(ostream&) method. It's an amazingly useful technique for debugging as well as for testing!)
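For example, a minimal sketch (the Point class is hypothetical):

    #include <iostream>

    #define SHOW(X)  std::cout << # X " = " << (X) << std::endl

    // A class that can self-print, which makes it SHOW-able for free.
    struct Point {
        double x, y;
        friend std::ostream& operator<<(std::ostream& os, const Point& p) {
            return os << "(" << p.x << ", " << p.y << ")";
        }
    };

    int main() {
        Point p{3.0, 4.0};
        SHOW(p);           // prints: p = (3, 4)
        SHOW(p.x + p.y);   // prints: p.x + p.y = 7
        return 0;
    }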

Makefiles can be trivially extended to automatically generate output files from test programs, and to automatically compare (diff) these output files against previously known (reviewed) results.

Not fancy, perhaps somewhat less than elegant, but as techniques go this is very effective, fast to implement, and very low overhead. (Which has its advantages when your manager disapproves of wasting time on that testing stuff.)


One last thought I'll leave you with. This is going to get me marked down, so DON'T do it!

Some time ago I needed a testing program. It was a required deliverable. The program itself had to verify that another class was working properly. But it couldn't access external datafiles. (We couldn't rely on where the program would be located relative to anything else. No absolute paths either.) The unit-testing framework for the project was incompatible with the compiler I was required to use. It also had to be in one file. The project makefile system didn't support linking multiple files together for a lowly test program. (Application programs, sure. They could use libraries. But only a single file for each test program.)

So, God forgive me, I "broke the rules" ...

<embarrassed>
I used macros. When a #define macro was set, the data was written into a second .c file as an initializer for a struct array. Subsequently, when the software was recompiled with that macro unset, the second .c file (with the struct array) was #included and the new results were compared against the previously stored data. Yes, I #included a .c file. O' the embarrassment of it all.
</embarrassed>
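For the curious, here is a minimal sketch of that record-or-compare trick (the RECORD_BASELINE macro, baseline.c, and compute are hypothetical names, not the original code):

    #include <cassert>
    #include <cmath>
    #include <cstdio>

    static double compute(int i) { return i * i * 0.5; }  // the code under test

    #ifdef RECORD_BASELINE
    int main() {
        // First build: dump results into baseline.c as struct-array initializers.
        FILE* out = std::fopen("baseline.c", "w");
        for (int i = 0; i < 10; ++i)
            std::fprintf(out, "{ %.17g },\n", compute(i));
        std::fclose(out);
        return 0;
    }
    #else
    struct Expected { double value; };
    static const Expected expected[] = {
    #include "baseline.c"   // yes, #including a .c file
    };
    int main() {
        // Later builds: compare fresh results against the stored baseline.
        for (int i = 0; i < 10; ++i)
            assert(std::fabs(compute(i) - expected[i].value) < 1e-12);
        return 0;
    }
    #endif

Build once with -DRECORD_BASELINE and run it to record the data; rebuild without the flag to compare against it.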

But it can be done...
