
Is NUnit a bad choice for Selenium tests?

I have read umpteen answers on SO while searching for NUnit + dependent methods + order of test execution. Every single answer suggests that forcing any kind of ordering on unit tests is extremely evil.

I am writing Selenium tests using NUnit. So I am trying to write integration tests using a unit testing framework!!!

To cite one example of an integration test: I need to create a valid account before proceeding with other tests. If account creation fails, I would like to abort the entire test execution.

Since I don't want to rely on the alphabetical order of tests, and in the true spirit of NUnit, I decided to create an account before each further test. But that does not look right to me for two core reasons:

  1. Unnecessary code duplication/execution
  2. If the application's account creation is broken, all my tests would still try to create an account again and again, and fail

I am inclined to think that NUnit may not be the right fit for Selenium tests. But if not NUnit, then what should I use?

Selenium Core itself comes with a TestRunner that is written in JavaScript, so you can run your tests directly from the browser.

For more see:

http://www.developerfusion.com/article/84484/light-up-your-development-with-selenium-tests/

Apart from that, tests written in C# with NUnit are much easier to write and maintain. Are you using SetUp and TearDown when writing your tests? That way you can avoid code duplication.

Regarding your second point, you can have a flag that is set on the first setup failure so that the setup is skipped the next time, or the setup itself can track the failure and fail quickly on subsequent runs. And in NUnit, a test doesn't run if its setup fails.
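A minimal sketch of that flag idea, assuming a hypothetical CreateAccount() helper that drives the UI via Selenium (the names here are illustrative, not from the answer above):

```csharp
using NUnit.Framework;

[TestFixture]
public class AccountDependentTests
{
    // Remembers whether account creation has already failed once.
    private static bool _setupFailed;

    [SetUp]
    public void SetUp()
    {
        // Fail fast on later tests instead of retrying a broken step.
        if (_setupFailed)
            Assert.Fail("Skipping: account creation already failed earlier.");

        try
        {
            CreateAccount(); // hypothetical helper using Selenium
        }
        catch
        {
            _setupFailed = true;
            throw; // NUnit marks the test as failed when SetUp throws
        }
    }

    private void CreateAccount()
    {
        // ... Selenium steps to create a valid account would go here ...
    }

    [Test]
    public void SomeTestThatNeedsAnAccount()
    {
        // ... test body ...
    }
}
```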

I run Selenium with NUnit all the time. It just depends on how you write your tests. To avoid code duplication, I make a library of helper functions that do common things, like log in or log out of my site, that the other tests use to get to the page they need to test. (I use the term 'library' in a loose sense; I don't actually split them into their own C# project.)
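A sketch of what such a helper "library" might look like, assuming a shared IWebDriver; the URL and element IDs are placeholders, not real selectors:

```csharp
using OpenQA.Selenium;

// Common actions shared by many tests; kept inside the test project itself.
public static class SiteHelpers
{
    public static void LogIn(IWebDriver driver, string user, string password)
    {
        driver.Navigate().GoToUrl("https://example.com/login"); // placeholder URL
        driver.FindElement(By.Id("username")).SendKeys(user);
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("login-button")).Click();
    }

    public static void LogOut(IWebDriver driver)
    {
        driver.FindElement(By.Id("logout-link")).Click();
    }
}
```

Each test then calls something like SiteHelpers.LogIn(driver, user, password) in its SetUp to get to the page it needs, instead of repeating the navigation steps.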

You are right that if the account creation function is broken, the other tests will fail. But personally, I don't see that as a problem, as the point of unit tests is to make sure that your changes didn't have unintended effects elsewhere in your project. If the account creation broke, clearly that affects a lot of things. Ditto if my login helper method fails: if you can't log in, you can't get to anything in the site. Effectively, the whole site is broken.

If you need to create new accounts for each test, then the approach I would take is to move that code into your SetUp. If some of your tests don't require login, split them out into different files.

Any bits of duplication should be removed; test code should be as clean and robust as production code. Splitting different tests into separate files also helps maintain the idea of Single Responsibility.
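For illustration, a minimal split along those lines; the fixture and method names are invented for this sketch, and the account-creation steps are assumed to live in SetUp:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// Tests that need a fresh account: creation lives in SetUp, not in each test.
[TestFixture]
public class AccountTests
{
    private IWebDriver _driver;

    [SetUp]
    public void CreateFreshAccount()
    {
        _driver = new FirefoxDriver();
        // ... Selenium steps to register a new account ...
    }

    [TearDown]
    public void Quit()
    {
        _driver.Quit();
    }

    [Test]
    public void CanUpdateProfile() { /* ... */ }
}

// Tests that never log in live in their own file/fixture with a lighter setup.
[TestFixture]
public class PublicPageTests
{
    [Test]
    public void HomePageLoads() { /* ... */ }
}
```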

Did you also look at PNunit?

See one of the answers to this question:

Has anyone found a way to run C# Selenium RC tests in parallel?

I'm still not 100% sure how TestNG would work with Grid. Suppose you have a 3-step registration process and you divide it up into 3 tests. Is TestNG with Grid going to help you here? I suppose not, or will it detect that test C needs tests A and B to have run on the same thread?

PNunit looks like it could provide a way to distribute dependent tests to the same machine, although it's probably quite complicated to set up.

Two approaches might help you with the problem you describe in your response to AutomatedTester:

First, NUnit 2.4.4 defines a SuiteAttribute that lets you run tests in the order you want. It's very handy, but it has a major restriction: it is not compatible with TestCaseAttribute. That means all your tests have to be triggered only by TestAttribute, which is very annoying if you aim for coverage of value-based boundary tests (and thus several data-driven test cases). More info at http://www.nunit.org/index.php?p=suite&r=2.5.10
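Roughly, the legacy SuiteAttribute usage follows the pattern below (a sketch based on the NUnit 2.x documentation; the fixtures are invented placeholders, and as noted above, TestCase-driven tests won't fit into this):

```csharp
using System.Collections;
using NUnit.Framework;

// Placeholder fixtures, invented for this sketch.
[TestFixture] public class CreateAccountTests { [Test] public void AccountIsCreated() { /* ... */ } }
[TestFixture] public class LoginTests         { [Test] public void CanLogIn()         { /* ... */ } }

public class OrderedSuite
{
    // A static [Suite] property returns the fixtures to run,
    // in the order they are added here.
    [Suite]
    public static IEnumerable Suite
    {
        get
        {
            ArrayList suite = new ArrayList();
            suite.Add(new CreateAccountTests()); // runs first
            suite.Add(new LoginTests());         // runs second
            return suite;
        }
    }
}
```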

Another approach is to prepare an integration sample database tailored just for your test cases. Say you have a 15-step registration process: create a student record and push it to step one, then another student and push it to step two, and so on. Save your database and restore it in the test fixture setup. Then test each step with a different student.
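As an illustration, restoring such a snapshot could sit in the fixture setup. TestDatabase.RestoreBackup is a hypothetical helper assumed here; how it restores the backup (sqlcmd, SMO, a SQL RESTORE statement, ...) depends on your database:

```csharp
using NUnit.Framework;

// Hypothetical helper, assumed for this sketch.
public static class TestDatabase
{
    public static void RestoreBackup(string backupFile)
    {
        // ... restore the prepared snapshot ...
    }
}

[TestFixture]
public class RegistrationStepTests
{
    [TestFixtureSetUp] // [OneTimeSetUp] in NUnit 3.x
    public void RestoreSampleDatabase()
    {
        // The snapshot contains one student record per registration step,
        // so every test starts from known data.
        TestDatabase.RestoreBackup("registration-sample.bak");
    }

    [Test]
    public void Step2_AcceptsValidAddress()
    {
        // Operates on the student that was saved at step 2.
    }

    [Test]
    public void Step3_RejectsMissingPaymentDetails()
    {
        // Operates on the student that was saved at step 3.
    }
}
```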

It is perfectly valid in most cases to run integration tests on different records for each step, as it provides the same functional and code coverage, and it follows the idea of integration testing because the records in the DB are true records (created by the UI, with all the flaws that come with the UI).

Of course, it needs more time to run and more storage space because of the DB copies you'll have to keep. If your system can't afford that, then you'll probably want to look at the first solution.

It also gives you the advantage of being able to spot bugs in later steps even if earlier steps are unstable: all tests are run in each test campaign, which is not the case with the solution you're asking for.
