Determining which test cases covered a method

The current project I'm working on requires me to write a tool which runs functional tests on a web application, and outputs method coverage data, recording which test case traversed which method.

Details: The web application under test will be a Java EE application running in a servlet container (e.g. Tomcat). The functional tests will be written in Selenium using JUnit. Some methods will be annotated so that they are instrumented prior to deployment into the test environment. Once the Selenium tests are executed, the execution of annotated methods will be recorded.

Problem: The big obstacle of this project is finding a way to relate the execution of a test case to the traversal of a method, especially since the tests and the application run on different JVMs: there is no way to transmit the name of the test case down to the application, and no way to use thread information to relate a test with code execution.

Proposed solution: My solution consists of using execution times: I extend the JUnit framework to record the time each test case was executed, and I instrument the application so that it saves the time each method was traversed. I then correlate the two timelines to link each test case with the methods it covered.
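The correlation step described above could be sketched like this. This is a minimal, standalone illustration (the class and field names are mine, not part of any framework); it assumes tests run sequentially and both JVMs share a reasonably synchronized clock:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of time-window correlation: each test case owns an interval
 * [start, end], and any instrumented method hit whose timestamp falls
 * inside that interval is attributed to that test.
 */
public class TimeCorrelator {

    /** A test case execution window, recorded by the extended JUnit runner. */
    static class TestWindow {
        final String testName;
        final long startMillis;
        final long endMillis;
        TestWindow(String testName, long startMillis, long endMillis) {
            this.testName = testName;
            this.startMillis = startMillis;
            this.endMillis = endMillis;
        }
    }

    /** A method traversal event, recorded by the instrumented application. */
    static class MethodHit {
        final String methodName;
        final long timeMillis;
        MethodHit(String methodName, long timeMillis) {
            this.methodName = methodName;
            this.timeMillis = timeMillis;
        }
    }

    /** Attributes each method hit to the test whose window contains it. */
    static Map<String, List<String>> correlate(List<TestWindow> tests,
                                               List<MethodHit> hits) {
        Map<String, List<String>> coverage = new LinkedHashMap<>();
        for (TestWindow t : tests) {
            List<String> methods = new ArrayList<>();
            for (MethodHit h : hits) {
                if (h.timeMillis >= t.startMillis && h.timeMillis <= t.endMillis) {
                    methods.add(h.methodName);
                }
            }
            coverage.put(t.testName, methods);
        }
        return coverage;
    }
}
```

Note that any clock skew between the two JVMs directly translates into misattributed method hits, which is exactly the weakness the answers below point out.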

Expected problems: This solution assumes that test cases are executed sequentially, and that a test case ends before the next one starts. Is this assumption reasonable with JUnit?

Question: Simply, can I have your input on the proposed solution, and perhaps suggestions on how to improve it and make it more robust and functional on most Java EE applications? Or leads to already implemented solutions?

Thank you

Edit: To add more requirements: the tool should work on any Java EE application and require the least amount of configuration or change to the application. While I know a zero-configuration tool isn't realistic, it should at least not require any major modification of the application itself, such as adding classes or lines of code.

Have you looked at existing coverage tools (Cobertura, Clover, Emma, ...)? I'm not sure if any of them is able to link coverage data to test cases, but at least with Cobertura, which is open-source, you might be able to do the following:

  • instrument the classes with cobertura
  • deploy the instrumented web app
  • start a test suite
    • after each test, invoke a URL on the web app which saves the coverage data to some file named after the test which has just been run, and resets the coverage data
  • after the test suite, generate a cobertura report for every saved file. Each report will tell which code has been run by the test

If you need a merged report, I guess it shouldn't be too hard to generate it from the set of saved files, using the cobertura API.
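The "invoke a URL after each test" step above could look like this on the test side. This is only a sketch: the /coverage/flush endpoint and its file parameter are hypothetical, and you would have to implement a small servlet behind that URL which saves and resets the Cobertura data:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of a test-side hook: after each test, hit a coverage-flush URL
 * on the web app so coverage data gets written to a file named after the
 * test that just ran. The endpoint itself (/coverage/flush?file=...) is
 * an assumption, not an existing Cobertura feature.
 */
public class CoverageFlusher {

    /** Builds the flush URL, naming the coverage file after the test. */
    static String flushUrl(String baseUrl, String testName) {
        return baseUrl + "/coverage/flush?file="
                + URLEncoder.encode(testName + ".ser", StandardCharsets.UTF_8);
    }

    /** Invokes the flush endpoint; call this from an @After method. */
    static void flush(String baseUrl, String testName) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(flushUrl(baseUrl, testName)).openConnection();
        conn.setRequestMethod("GET");
        if (conn.getResponseCode() != 200) {
            throw new IOException("Coverage flush failed for " + testName);
        }
        conn.disconnect();
    }
}
```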

Your proposed solution seems like a reasonable one, except for relating the test and the request by timing. I've tried to do this sort of thing before, and it works most of the time. Unless you write your JUnit code very carefully, you'll have lots of issues, because of differences in time between the two machines, or, if you've only got one machine, simply from matching one timestamp against another.

A better solution would be to implement a Tomcat Valve which you can insert into the lifecycle in the server.xml for your webapp. Valves have the advantage that you define them in the server.xml, so you're not touching the webapp at all.
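For reference, registering such a valve is a single extra element in server.xml (the class name TestIdValve is hypothetical, standing in for the valve you would write):

```xml
<Host name="localhost" appBase="webapps">
  <!-- Hypothetical custom valve that records coverage per test id -->
  <Valve className="com.example.coverage.TestIdValve" />
</Host>
```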

You will need to implement invoke(). The best place to start is probably with AccessLogValve. This is the implementation in AccessLogValve:

/**
 * Log a message summarizing the specified request and response, according
 * to the format specified by the <code>pattern</code> property.
 *
 * @param request Request being processed
 * @param response Response being processed
 *
 * @exception IOException if an input/output error has occurred
 * @exception ServletException if a servlet error has occurred
 */
public void invoke(Request request, Response response) throws IOException,
        ServletException {

    if (started && getEnabled()) {                
        // Pass this request on to the next valve in our pipeline
        long t1 = System.currentTimeMillis();

        getNext().invoke(request, response);

        long t2 = System.currentTimeMillis();
        long time = t2 - t1;

        if (logElements == null || condition != null
                && null != request.getRequest().getAttribute(condition)) {
            return;
        }

        Date date = getDate();
        StringBuffer result = new StringBuffer(128);

        for (int i = 0; i < logElements.length; i++) {
            logElements[i].addElement(result, date, request, response, time);
        }

        log(result.toString());
    } else
        getNext().invoke(request, response);       
}

All this does is log the fact that you've accessed it.

You would implement a new Valve. For your requests, you pass a unique id as a URL parameter, which is used to identify the test that you're running. Your valve would do all of the heavy lifting before and after the invoke(). You could remove the unique parameter before calling getNext().invoke() if needed.

To measure the coverage, you could use a coverage tool as suggested by JB Nizet, based on the unique id that you're passing over.

So, from junit, if your original call was

@Test void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14");
}

You would change this to be:

@Test void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14&testId=testSomething");
}

Then you'd pick up the parameter testId in your valve.
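Inside a real Valve you would simply call request.getParameter("testId") to do this. As a standalone sketch of that extraction (so it can be run without the Catalina API), the parsing amounts to:

```java
/**
 * Sketch of the testId extraction the valve performs. In a real Valve
 * you'd call request.getParameter("testId"); this standalone version
 * parses a raw query string instead.
 */
public class TestIdExtractor {

    /** Returns the value of the testId parameter, or null if absent. */
    static String extractTestId(String queryString) {
        if (queryString == null) {
            return null;
        }
        for (String pair : queryString.split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0 && pair.substring(0, eq).equals("testId")) {
                return pair.substring(eq + 1);
            }
        }
        return null;
    }
}
```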
