
Randomly failing Specflow tests


Although I mentioned this problem earlier in the topic Specflow stability, I would like to bring it up again in a new topic, because the title of the previous topic is misleading (I now think that Specflow stability is not the problem) and here I can describe the problem more precisely.


When I run the complete test set of 50 or so tests, most of the time one or two tests fail at random (sometimes no tests fail). When I slice the complete test set into smaller sets (for instance, a set of 7 or 8 tests for each separate user story) and run these sets separately, all the tests pass. As Luke McGregor stated in Specflow stability, it seems like the tests are sharing data and therefore fail. But why does this only happen when the complete set is run and not when I'm using the smaller sets?


I'm trying to run a set of 50 or so Specflow tests. All of these tests are designed to test the UI of a website. The tests are run in Visual Studio 2010, using MSTest as the test runner. The browser used is Firefox. Right now, the steps taken in testing are (a sketch of these hooks is shown after the list):

  • Before each scenario, a new IIS process and a new BrowserSession are started;
  • The scenario is run;
  • After each scenario, the IIS process and the BrowserSession are terminated.
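
For reference, such per-scenario hooks roughly correspond to a SpecFlow binding like the one below. This is only a minimal sketch: the question does not show how the IIS process or the BrowserSession is actually created, so the iisexpress.exe path, its arguments, and the placeholder bodies are assumptions for illustration.

using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public class PerScenarioHooks
{
    // The BrowserSession type is not shown in the question; whatever
    // browser-automation session object the tests use would live here.
    private static Process _iisProcess;

    [BeforeScenario]
    public static void StartSiteAndBrowser()
    {
        // Start a fresh IIS process hosting the site under test
        // (iisexpress.exe and its arguments are assumptions).
        _iisProcess = Process.Start(
            @"C:\Program Files\IIS Express\iisexpress.exe",
            @"/path:C:\MySite /port:8080");

        // ... start a new BrowserSession here ...
    }

    [AfterScenario]
    public static void StopSiteAndBrowser()
    {
        // ... terminate the BrowserSession here ...
        _iisProcess.Kill();
    }
}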

The reason I'm starting a new IIS process and a new BrowserSession before each individual test scenario is to minimize the risk of the 'data sharing' Luke mentioned. Unfortunately, to no avail.

I'm a bit lost now as to what the problem could be. Am I missing something obvious (or maybe not so obvious) here?

Thank you in advance!

It would help if you could give some examples of the test failures.

Do the tests fail because they can't find some elements on the page? If that's the case, and you are using WebDriver, I suggest you enable implicit waits. This will make your test suite run slower, but you will gain stability.

IWebDriver driver = new FirefoxDriver();
driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10));

Testing through the browser is something that can cause random failures, especially if your tests are not designed to wait for elements to appear on the page. Things can also get really tricky with AJAX calls.
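
When an implicit wait is not enough (for example, waiting for an element that only appears after an AJAX call), an explicit wait on a specific condition is an alternative. A minimal sketch, using Selenium's WebDriverWait from the support library; "searchResults" is a hypothetical element id:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Support.UI;

IWebDriver driver = new FirefoxDriver();

// Poll for up to 10 seconds until the element exists and is visible,
// instead of failing immediately when it has not been rendered yet.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
wait.Until(d => d.FindElement(By.Id("searchResults")).Displayed);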

On a side note, I'd recommend that you avoid starting a new browser and IIS before each test, because that makes your test suite take a really long time to run. Instead I'd suggest the following (sketched in code below):

  • Start IIS just once at the beginning of your test run (you can use the [BeforeTestRun] hook);
  • Open a browser session just once at the beginning of your test run ([BeforeTestRun]);
  • At the end of each test, just log the user out so that all cookies get cleaned ([AfterScenario]).

This will speed up your test suite a lot.
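
A minimal sketch of that arrangement, assuming IIS Express started via System.Diagnostics.Process and Selenium's FirefoxDriver; the iisexpress.exe arguments and the "/logout" URL are assumptions for illustration:

using System.Diagnostics;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class SuiteHooks
{
    public static IWebDriver Browser;
    private static Process _iisProcess;

    [BeforeTestRun]
    public static void StartSiteAndBrowser()
    {
        // Start IIS and the browser only once for the whole test run.
        _iisProcess = Process.Start(
            @"C:\Program Files\IIS Express\iisexpress.exe",
            @"/path:C:\MySite /port:8080");
        Browser = new FirefoxDriver();
    }

    [AfterScenario]
    public static void LogOutUser()
    {
        // Log the user out after every scenario so cookies and session
        // state do not leak into the next one ("/logout" is hypothetical).
        Browser.Navigate().GoToUrl("http://localhost:8080/logout");
    }

    [AfterTestRun]
    public static void StopSiteAndBrowser()
    {
        Browser.Quit();
        _iisProcess.Kill();
    }
}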

Regarding shared state, I'd suggest that you reset all the data your tests use in a [BeforeScenario] hook. For example, if your tests set up data in a database, clean the database before each test.
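
Such a cleanup hook might look roughly like the sketch below; the connection string and the table names are hypothetical and would be replaced by whatever data the UI tests actually create:

using System.Data.SqlClient;
using TechTalk.SpecFlow;

[Binding]
public class DataHooks
{
    [BeforeScenario]
    public static void ResetTestData()
    {
        // Remove the rows the tests create so every scenario starts
        // from the same known database state.
        using (var connection = new SqlConnection(
            "Server=.;Database=MySiteTests;Integrated Security=true"))
        {
            connection.Open();
            using (var command = new SqlCommand(
                "DELETE FROM OrderItems; DELETE FROM Orders;", connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}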

Finally, make sure your tests are self-contained: they should not rely on data created by other tests. Each test needs to start from a clean initial state and create the data it needs.

A static class would be one possibility.
Please take a close look at the failing tests. Can you see a pattern? Maybe some tests never fail while others fail once in a while? Have a closer look at those that do fail.


A static class used in multiple unit tests is a classic case of data sharing or state sharing.

For example, consider this class:

public static class TimeProvider
{
    static TimeProvider()
    {
        CurrentTimeProvider = () => DateTime.Now;
    }

    public static Func<DateTime> CurrentTimeProvider { get; set; }

    public static DateTime Now { get { return CurrentTimeProvider(); } }
}

Now, assume one unit test wants to test something in which the current time is relevant:

public void AddItemSetsOrderDateAsCurrentTime()
{
    // Arrange
    var currentTime = new DateTime(2011, 1, 1, 12, 15, 0);
    TimeProvider.CurrentTimeProvider = () => currentTime;

    // Act
    //...
}

All subsequent unit tests that use TimeProvider.Now will then get 2011-01-01 12:15 instead of the current time. That's one example of how one test can affect another.
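
In a SpecFlow suite, one way to keep such static state from leaking between scenarios is to restore it in a hook. A minimal sketch, assuming the TimeProvider class shown above:

using System;
using TechTalk.SpecFlow;

[Binding]
public class TimeProviderHooks
{
    [AfterScenario]
    public static void RestoreRealTime()
    {
        // Put the static provider back to the real clock so the next
        // scenario is not affected by a frozen test time.
        TimeProvider.CurrentTimeProvider = () => DateTime.Now;
    }
}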
