How Do You Keep Automated Tests in Sync With Test Plans?

Everyone can agree that it's helpful to have many automated tests (system tests and unit tests) and to have written test plans. But how do you keep those tests synchronized with your Test Case Management tool or with your written test plans?

In particular, how do you avoid the awkwardness of needing to update tests in both places?

I always think about the purpose of a document: who will read it, what they will want to learn from it, and how much effort they are prepared to put in to get that information.

For me, a test plan should be read and understood by as many stakeholders as possible. I use it to define and explain the scope of the testing, so I keep it really short, with very little detail. That way I have a better chance of a wide audience actually reading it and, more importantly, understanding the scope of the testing.

In the plan I list the risks, and the approach we will take to test around those risks. I don't list the tests. The tests themselves are all based around these risks, but the coupling between the test plan and the test scripts is very loose. If I decide to add more risks to the plan, then naturally I'll need more tests, but if I keep the level of detail low enough, updates to the plan stay minor.
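One lightweight way to keep that loose coupling visible is to tag each automated test with the ID of the plan risk it covers. This is only a sketch, assuming pytest; the RISK-n IDs and the scenarios in the comments are hypothetical, not something from the plan above:

    # test_checkout.py -- a minimal sketch, assuming pytest.
    # Each test carries the ID of the plan risk it covers, so the only
    # vocabulary shared with the plan is the short list of risks.
    # Register the marker once in pytest.ini to avoid warnings:
    #   [pytest]
    #   markers =
    #       risk(id): risk from the test plan that this test addresses
    import pytest

    @pytest.mark.risk("RISK-7")   # hypothetical risk: payment gateway timeouts
    def test_checkout_retries_on_gateway_timeout():
        ...  # real setup and assertions go here

    @pytest.mark.risk("RISK-12")  # hypothetical risk: concurrent cart edits
    def test_cart_merge_keeps_latest_quantity():
        ...  # real setup and assertions go here

Adding a risk to the plan then just means adding tests tagged with the new ID; nothing in the plan itself has to list them.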

I think it's probably worth clarifying what you mean when you say "test plan". I've seen quite a variety of documents with very different purposes described as a "test plan", so I'm not sure what it means for your project and your company.

1) Who reads it?
2) Who should probably read it, but you suspect doesn't bother? (Do you know why they don't?)
3) What information do they need to get from it? Does it give them that information?
4) How do you currently present that information? Does that work for your readers (and non-readers)?
5) What sort of feedback do you need to get from the readers of your test plan?
6) Do you have any regulatory requirements that you need to satisfy with your test planning?

If your primary purpose with the test plan is to seek feedback and identify important gaps in your testing (there will always be big holes in your tests; the question is whether they are important ones), then (apologies, I can only post one link) there's a useful webinar on the EuroSTAR conference site, from Rikard Edgren, called "More and better test ideas".

You may also find that managing tests through a wiki is worth exploring - here's a blog post discussing the idea: http://marlenacompton.com/?p=1894

I'm not aware of any silver-bullet solution for this. To my knowledge there is no round-trip tool that keeps a test plan and the automated tests synchronized automatically, so I think it will be necessary to update your test plan as you implement new tests.

Another option is to leave the test plan a little less specific and use it primarily as a starting point. Write the test plan and have it reviewed by the relevant people. After that, make sure every test the plan calls for actually gets written, and don't bother updating the plan with the extra tests you add along the way. The point of the test plan is to ensure you don't miss anything; there is less value in using it later to see what was tested.
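If you go this route, a small script can enforce the "don't miss anything" part mechanically. This is a rough sketch with invented conventions: it assumes risk IDs of the form RISK-<number> appear both in a test_plan.md file and in the test sources under tests/.

    # check_plan_coverage.py -- sketch of a one-way coverage check.
    # Assumptions (not a standard): the plan lives in test_plan.md,
    # tests live under tests/, and both reference IDs like RISK-7.
    import re
    from pathlib import Path

    RISK_ID = re.compile(r"RISK-\d+")

    def risk_ids(text: str) -> set[str]:
        return set(RISK_ID.findall(text))

    plan_risks = risk_ids(Path("test_plan.md").read_text())

    test_risks: set[str] = set()
    for path in Path("tests").rglob("test_*.py"):
        test_risks |= risk_ids(path.read_text())

    uncovered = sorted(plan_risks - test_risks)
    if uncovered:
        raise SystemExit(f"Risks in the plan with no tests: {uncovered}")
    print("Every risk in the plan is referenced by at least one test.")

Run something like this in CI and the plan stays a checklist you can trust, without ever copying test details back into it.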

If you are in an industry that must document everything, I think you'll have to do the work in both places, but for most purposes, this isn't necessary.

Actually, I don't think everyone would agree that it's useful to have written test plans. A (much) better approach is to create well-structured tests (both system and unit) that are understandable by all the stakeholders.

Once you have well-defined tests, you don't need written test plans, and there's nothing to keep in sync!
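As an illustration of what "understandable by stakeholders" might look like, here is a sketch in plain pytest style: the test names state the behaviour in business terms, and the bodies stay short. The shipping rule and the function are invented for the example.

    # A sketch: the test names read like the specification.
    # standard_shipping_cost and its rule are hypothetical.
    def standard_shipping_cost(order_total: float) -> float:
        """Invented rule: orders of 50 or more ship free."""
        return 0.0 if order_total >= 50 else 5.0

    def test_orders_of_fifty_or_more_ship_free():
        assert standard_shipping_cost(order_total=50) == 0.0

    def test_smaller_orders_pay_the_flat_rate():
        assert standard_shipping_cost(order_total=20) == 5.0

A stakeholder reading only the test names gets much of the information a brief test plan would have given them.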
