
Unit-testing with dependencies between tests

How do you do unit testing when you have

  • some general unit tests
  • more sophisticated tests checking edge cases, which depend on the general ones

To give an example, imagine testing a CSV reader (the notation below is made up for demonstration):

def test_readCsv(): ...

@dependsOn(test_readCsv)
def test_readCsv_duplicateColumnName(): ...

@dependsOn(test_readCsv)
def test_readCsv_unicodeColumnName(): ...

I expect sub-tests to be run only if their parent test succeeds. The reason behind this is that running these tests takes time, and many failure reports that all trace back to a single cause wouldn't be informative either. Of course, I could shoehorn all the edge cases into the main test, but I wonder whether there is a more structured way to do this.
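For illustration, one way to approximate this behavior without a dedicated framework is a small decorator that skips a test unless its parent recorded a pass. This is only a sketch: the `depends_on` decorator and the module-level `_passed` set are invented here, and it relies on unittest's default alphabetical ordering of test methods (hence the numeric prefixes):

```python
import unittest

_passed = set()  # names of tests that have succeeded so far


def depends_on(name):
    """Skip the decorated test unless the named test passed earlier."""
    def decorator(fn):
        def wrapper(self):
            if name not in _passed:
                self.skipTest(f"dependency {name} did not pass")
            fn(self)
        return wrapper
    return decorator


class TestReadCsv(unittest.TestCase):
    def test_1_readCsv(self):
        # stands in for the general parsing test
        self.assertTrue(True)
        _passed.add("test_1_readCsv")

    @depends_on("test_1_readCsv")
    def test_2_duplicateColumnName(self):
        # edge case; reported as skipped, not failed, if the parent fails
        self.assertTrue(True)
```

If `test_1_readCsv` fails, the dependent test is reported as skipped rather than producing a second, redundant failure.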

I've found these related but different questions.

UPDATE:

I've found TestNG, which has great built-in support for test dependencies. You can write tests like this:

@Test(dependsOnMethods = {"test_readCsv"})
public void test_readCsv_duplicateColumnName() {
   ...
}

Personally, I wouldn't worry about creating dependencies between unit tests. This sounds like a bit of a code smell to me. A few points:

  • If a test fails, let the others fail too, and get a good idea of the scale of the problem that the adverse code change caused.
  • Test failures should be the exception rather than the norm, so why waste effort and create dependencies when the vast majority of the time (hopefully!) no benefit is derived? If failures happen often, your problem is not a lack of unit-test dependencies but frequent test failures.
  • Unit tests should run really fast. If they are running slowly, then focus your efforts on increasing the speed of those tests rather than on preventing subsequent failures. Do this by decoupling your code more and using dependency injection or mocking.
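As a sketch of the decoupling point above (the `ReportBuilder` class and its `read_header` collaborator are invented for illustration), injecting the dependency lets a mock stand in for slow I/O, so each test stays fast and fully independent:

```python
from unittest.mock import Mock


class ReportBuilder:
    def __init__(self, reader):
        self.reader = reader  # dependency is injected, not constructed here

    def column_names(self):
        return list(self.reader.read_header())


def test_column_names_without_real_io():
    # a Mock replaces the slow CSV reader entirely
    reader = Mock()
    reader.read_header.return_value = ["id", "name"]
    builder = ReportBuilder(reader)
    assert builder.column_names() == ["id", "name"]


test_column_names_without_real_io()
```

Because no real file parsing happens, there is little temptation to chain tests together to save time.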

Proboscis is a Python version of TestNG (which is a Java library).

See packages.python.org/proboscis/

It supports dependencies, e.g.

@test(depends_on=[test_readCsv])
def test_readCsv_duplicateColumnName():
    ...

I have implemented a plugin for Nose (Python) which adds support for test dependencies and test prioritization.

As mentioned in the other answers/comments, this is often a bad idea; however, there can be exceptions where you would want to do this (in my case it was performance for integration tests, which had a huge overhead for getting into a testable state: minutes vs. hours).

You can find it here: nosedep.

A minimal example is:

def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass

This ensures that test_b is always run before test_a.

I'm not sure what language you're referring to (you don't specifically mention it in your question), but for something like PHPUnit there is an @depends annotation that will only run a test if the test it depends on has already passed.

Depending on what language or unit-testing framework you use, there may be something similar available.

You may want to use pytest-dependency. According to their documentation, the code looks elegant:

import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency()
def test_b():
    pass

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

@pytest.mark.dependency(depends=["test_b"])
def test_d():
    pass

@pytest.mark.dependency(depends=["test_b", "test_c"])
def test_e():
    pass

Please note that it is a plugin for pytest, not for unittest, which is part of Python itself. So you need two more dependencies (e.g. add them to requirements.txt):

pytest==5.1.1
pytest-dependency==0.4.0

According to best practices and unit-testing principles, a unit test should not depend on other tests.

Each test case should check one concrete, isolated behavior.

Then, if some test case fails, you will know exactly what went wrong in your code.
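To sketch what that looks like for the CSV example from the question (using Python's stdlib csv module as a stand-in reader; the `make_reader` helper is invented here), each edge case becomes its own independent test that shares setup through a helper rather than through another test's outcome:

```python
import csv
import io


def parse_csv(text):
    # stand-in for the CSV reader under test
    return list(csv.reader(io.StringIO(text)))


def make_reader(text):
    # shared setup helper: keeps tests independent without repetition
    return parse_csv(text)


def test_basic():
    assert make_reader("a,b\n1,2") == [["a", "b"], ["1", "2"]]


def test_duplicate_column_name():
    header = make_reader("a,a\n1,2")[0]
    assert header == ["a", "a"]  # duplicates are preserved, not merged


def test_unicode_column_name():
    header = make_reader("α,β\n1,2")[0]
    assert header == ["α", "β"]
```

Each test can fail on its own, and a failure pinpoints exactly one behavior instead of cascading through a dependency chain.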
