
Googletest: How to run tests asynchronously?

Given a large project with thousands of tests, some of which take multiple minutes to complete: when executed sequentially, the whole set of tests takes more than an hour to finish. The testing time could be reduced by executing tests in parallel.

As far as I know, googletest/googlemock offers no way to do that directly, such as an --async option. Or am I wrong?

One solution is to determine which tests can run in parallel and write a script that starts each in a separate job, e.g.:

./test --gtest_filter=TestSet.test1 &
./test --gtest_filter=TestSet.test2 &
...
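To make this usable, such a script also has to wait for the background jobs and propagate their exit codes. A minimal sketch, assuming POSIX sh and placeholder filter names:

#!/bin/sh
# Run two independent test groups concurrently; fail if either fails.
./test --gtest_filter='TestSet.test1' & pid1=$!
./test --gtest_filter='TestSet.test2' & pid2=$!
status=0
wait "$pid1" || status=1
wait "$pid2" || status=1
exit "$status"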

But this would require additional maintenance effort and introduce another "layer" between the test code and its execution. I'd like a more convenient solution. For example, one could introduce suffixed variants of the TEST and TEST_F macros, say TEST_ASYNC and TEST_F_ASYNC. Tests defined with TEST_ASYNC would then be executed by independent threads, all starting at the same time.

How can this be achieved? Or is there another solution?

Late response, but I'll put it here for anyone searching for a similar answer. Working on WebRTC, I found a similar need to speed up our test execution. Executing all of our tests sequentially took more than 20 minutes, and a bunch of them spent at least some of that time waiting (so they didn't even fully utilize a core).

Even for "proper unit tests" I'd argue this is still relevant, because there's a real difference between your single-threaded tests taking 20 seconds to execute and taking ~1 second (if your workstation is massively parallel, this speedup is not uncommon).

To solve this for us, I developed a script that executes tests in parallel. It is stable enough to run on our continuous integration, and is released here: https://github.com/google/gtest-parallel/

This Python script essentially takes one or more gtest binaries (plus an optional --gtest_filter=Foo that you can specify), splits their tests across several workers, and runs individual tests in parallel. This works fine as long as the tests are independent (don't write to shared files, etc.). The tests that didn't work that way we put in a webrtc_nonparallel_tests binary and ran separately, but the vast majority were already fine, and we fixed several of the rest because we wanted the speedup.
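A typical invocation looks something like this (worker count and filter flags as documented in the project's README; the binary path is a placeholder):

# Run the tests of ./test across 8 parallel workers, optionally filtered.
./gtest-parallel --workers=8 --gtest_filter='Foo.*' ./test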

I would suggest you are solving the wrong problem. You want unit tests to run quickly; in fact, if a test takes several minutes to run, it's not a unit test.

I suggest you split your tests into proper unit tests and integration/regression or other slow-running tests. You can then run the unit tests as you develop, and just run the longer-running ones before a push/commit.

You could even run the two (or more) sets of tests yourself simultaneously.

The docs themselves suggest using filters to solve this.
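For example, with a naming convention that marks slow tests (the "Slow" suffix here is just an assumption about your names), --gtest_filter's negative patterns let you split the runs:

# Fast unit tests, run constantly during development.
./test --gtest_filter='-*Slow*'
# Slow integration/regression tests, run before a push/commit.
./test --gtest_filter='*Slow*'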


Edit, in light of a downvote and a new toy mentioned in the docs:

Since I gave this answer, the docs have been updated and now mention a parallel test runner, which "works by listing the tests of each binary, and then executing them on workers in separate processes" and would solve the problem. When I first wrote the answer, this didn't exist.
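That same mechanism can be sketched with nothing but gtest's own flags: list the tests, reconstruct their full names, and run each as its own process, several at a time. A rough illustration, assuming 8 workers and leaving failure reporting to xargs:

#!/bin/sh
# --gtest_list_tests prints suite lines ("Suite.") followed by
# indented test names; rebuild "Suite.Test" and run up to 8 at once.
./test --gtest_list_tests \
  | awk '/^[^ ]/ { suite = $1 } /^  / { print suite $1 }' \
  | xargs -P 8 -I {} ./test --gtest_filter={}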
