
Single vs. Multiple Unit Test Projects per Solution?

I have typically had a 1:1 mapping between my product assemblies and my unit test assemblies. I generally try to keep the overall number of assemblies low, and a typical solution may look something like...

  • Client (Contains Views, Controllers, etc.)
  • Client.Tests
  • Common (Contains Data/Service Contracts, Common Utilities, etc.)
  • Common.Tests
  • Server (Contains Domain, Services, etc.)
  • Server.Tests
  • Server.WebHost
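
For context, a minimal sketch of how such a layout could be created with the dotnet CLI (assuming SDK-style projects and xUnit; the solution name MyProduct is made up, the project names are the ones listed above):

    # Create the solution and a 1:1 test project for each product assembly
    dotnet new sln -n MyProduct
    for p in Client Common Server; do
        dotnet new classlib -n "$p"
        dotnet new xunit -n "$p.Tests"
        dotnet add "$p.Tests/$p.Tests.csproj" reference "$p/$p.csproj"
        dotnet sln MyProduct.sln add "$p/$p.csproj" "$p.Tests/$p.Tests.csproj"
    done
    # Host project with no test project of its own
    dotnet new web -n Server.WebHost
    dotnet sln MyProduct.sln add Server.WebHost/Server.WebHost.csproj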

Lately at work, people have been suggesting having just a single unit test project vs. breaking them down by the assembly they are testing. I know that back in the day this made life easier if you were running NCover etc. as part of your build (which no longer matters, of course).

What is the general rationale behind single vs. multiple unit test projects? Other than reducing the number of projects in a solution, is there a concrete reason to go one way or the other? I get the impression this may be one of those "preference" things, but Googling hasn't turned up much.

There is no definite answer, because it all depends on what you work on as well as personal taste. However, you definitely want to arrange things in a way that lets you work effectively.

For me this means I want to find things quickly, I want to see what tests what, and I want to be able to run smaller pieces so I have better control when I profile or do other work on the tests. That is especially useful when you're debugging failing tests. I don't want to spend extra time figuring anything out; it should be self-evident how things are mapped and what belongs to what.

Another very important thing for me is that I want to isolate as much as possible and have clear boundaries. You want an easy way to refactor parts of your big project out into an independent project.

Personally, I always arrange my tests around how my software is structured, which means a one-to-one mapping between a class and its tests, and between a library and its test executable. This gives you a test structure that mirrors your software structure, which in turn makes things easy to find. It also provides a natural split in case something is later moved out on its own.

This is my personal choice after trying various ways to do things.

In my opinion, grouping things just because there are too many of them is not necessarily a good thing. It can be, but in the context of this discussion I believe it is the wrong argument for a single test project: many test projects, each with many files inside, is no better than one project with a lot of test files. The real problem is that the solution you're working on is getting big. Maybe there are other things you can do to avoid having "one world"? :)

In addition to the other (good) answers, consider that on larger project teams, individual team members may create their own solutions that include only the subset of projects they are working on.

A monolithic approach, with one test project covering everything in the solution, breaks down in that scenario.
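
As a rough illustration (assuming the dotnet CLI and the project names from the question), a client developer could build a personal solution that pulls in only the client-side slice together with its matching test projects, which only works cleanly when the tests are split per assembly:

    # Hypothetical per-developer solution containing only the client-side projects
    dotnet new sln -n ClientWork
    dotnet sln ClientWork.sln add Client/Client.csproj Client.Tests/Client.Tests.csproj
    dotnet sln ClientWork.sln add Common/Common.csproj Common.Tests/Common.Tests.csproj
    # A single monolithic test project could not be included here without also
    # dragging in Server, Server.WebHost and everything else it references.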

Part of the reasoning, I would say, is that it forces you to reference only the assembly you are testing, so your unit tests don't accidentally turn into integration tests, or worse. It helps ensure separation of concerns.
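
A minimal sketch of that constraint, reusing the project names from the question: each per-assembly test project carries exactly one production reference, so any unwanted dependency has to be added explicitly and stands out.

    # Server.Tests references only the assembly it is testing
    dotnet add Server.Tests/Server.Tests.csproj reference Server/Server.csproj
    # Listing the references makes any drift towards integration testing visible
    dotnet list Server.Tests/Server.Tests.csproj reference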

Having one unit test project can slow you down considerably if you test functionality in a lower tier of your architecture; you end up spending more time waiting for the compiler than actually writing code. Imagine a single unit test project that depends on all of the other projects, while those projects also depend on each other. A change in a lower-tier project can then force every project above it to be recompiled whenever you trigger a unit test run.

For example, suppose you have business logic that depends on a model project. You change the model project, and the test project depends on both the business logic and the model:

unit test -> model

unit test -> business logic -> model

But you only want to test the changes in the model. Now both the model and the business logic may get recompiled even though you only changed the model. This happened to me in a more complex project, and switching to one test project per production project helped.
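
A small sketch of the difference, assuming hypothetical Model, BusinessLogic and test projects and the dotnet CLI:

    # One test project per production project: testing the model only has to
    # build Model and Model.Tests.
    dotnet test Model.Tests/Model.Tests.csproj

    # Single shared test project: it references BusinessLogic (which references
    # Model), so a model change rebuilds Model, BusinessLogic and UnitTests
    # before a single test runs.
    dotnet test UnitTests/UnitTests.csproj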

I am not aware of a particular standard, but in my opinion splitting them into separate test projects is the better practice. It is much akin to object-oriented programming at the solution/project level. Suppose one of your projects needs to be reused somewhere else and you want to take its tests with you. If they sit in their own project, exclusive to that particular production project, you simply transfer that test project along as well. Otherwise you have to go fishing through the files of one mass test project.

In addition, separating the projects helps keep things tidy when debugging. Rather than cracking open a single massive project and digging for the test file you need, it is much simpler if each functional project has a matching test project; it dramatically narrows down your search.

Again, I think this is a preference thing; I've never heard a definitive verdict one way or the other, but there is my two cents' worth...

I might actually go to the extreme of saying that if you are considering splitting the tests for a single assembly across several test projects, then that assembly is too large. I generally split my projects into small, reusable components with small, 1:1-mapped tests. This can of course lead to DLL overflow, and it may not always be possible for certain projects (such as a web project, where it isn't as logically clean to split controllers into different assemblies). Just an idea.

We are trying a separate test solution, in which all the test projects map 1:1 to the projects in the product solution.
