Python: run unittests continuously or run each test multiple times

I have written unit test cases to test my application, and they are working as expected without issues.

Below is a sample test case:

import os
import unittest

class CreateUser(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_send_message(self):
        # my script goes here
        print "success"

if __name__ == '__main__':
    unittest.main()

If I run this test, it executes as expected, but I want to run this test case N number of times.

For that I added a for loop in the main block, but it still runs only one time. The code I used is below:

if __name__ == '__main__':
    for i in range(1, 5):
        unittest.main()
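A minimal sketch that keeps such a loop running, assuming Python 2.7 or later, where unittest.main() accepts exit=False so it does not call sys.exit() after the first run:

import unittest

if __name__ == '__main__':
    for i in range(5):
        # exit=False prevents unittest.main() from calling sys.exit(),
        # so the loop can continue to the next run
        unittest.main(exit=False)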

I also used the schedule lib to run the test every 10 minutes, but no luck.

Is there any way to run this test case multiple times, any other workaround that I am missing, or any continuous build tool to achieve this?

Thanks in advance.

First, a little bit of caution.

Why do you want to run the same test five times? I don't want to make assumptions without seeing your code, but this is a pretty serious code smell. A unit test must be repeatable, and if running it five times in a row does not give the same result as running it once, then something in the test is not repeatable. In particular, if early runs of the test are creating side effects used by later runs, or if there is some kind of random number involved, both of those are very bad situations that need to be repaired rather than running the test multiple times. Given just the information that we have here, it seems very likely that the best advice will be: don't run that test multiple times!

Having said that, you have a few different options.

  1. Put the loop inside the test

Assuming there is something meaningful about calling a function five times, it is perfectly reasonable to do something like:

def test_function_calls(self):
    # call f() five times within a single test
    for _ in xrange(5):
        self.assertTrue(f())

  2. Since you mentioned the nose tag, you have some options for parameterized tests, which usually consist of running the same (code) test over different input values. If you used something like https://github.com/wolever/nose-parameterized, then your result might be something like this:

@parameterized.expand([('atest', 'a', 1), ('btest', 'b', 2)])
def test_function_calls(self, name, input, expected):
    self.assertEqual(expected, f(input))

Parameterized tests are, as the name implies, typically for checking one piece of test code against several pieces of data. You can have a list with dummy data if you just want the test to run several times, but that's another suspicious code structure that goes back to my original point; a sketch of that dummy-data variant is shown below.
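A hedged sketch of the dummy-data variant; the import path and the send_message() function here are assumptions for illustration, not part of the original answer:

import unittest
from nose_parameterized import parameterized  # newer releases publish this as just "parameterized"

def send_message():
    # hypothetical stand-in for the real code under test
    return True

class CreateUser(unittest.TestCase):
    # five rows of dummy data cause the same test body to run five times
    @parameterized.expand([("run_%d" % i,) for i in range(5)])
    def test_send_message(self, name):
        self.assertTrue(send_message())

if __name__ == '__main__':
    unittest.main()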

Bonus side note: almost all "continuous" build tools are set up to trigger builds/tests/etc. on specific conditions or events, like when code is submitted to a repository. It is very unusual for them to simply run tests continuously.

I've done my best to answer your question here, but I feel like something is missing. You may want to clarify exactly what you are trying to accomplish in order to get the best answer.

I like to run this kind of thing in a simple bash loop. Of course, this only works if you're using bash:

while true; do python setup.py test; done

Apologies, as a new user I am unable to comment. I just wanted to respond to GrandOpener's concern over your wanting to re-execute tests. I myself am in a similar situation where I have 'unreliable' tests. The problem, as I see it, with unreliable or indeterministic tests is that it's very hard to prove you've fixed them, as they may only fail 1 in 100 times.

My thinking was that I would execute the test in question X times, building up a distribution of how many test iterations happened before failure. With that, I thought I would be able to either a) use the maximum number of iterations before failure as the number of iterations to check after the unreliability 'fix', or b) use some statistical methods to show that the reliability 'fix' is Y% likely to have fixed the issue. A rough sketch of that measurement loop is shown below.
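A minimal sketch of that measurement loop, assuming the flaky test can be addressed by a dotted name; the name 'mymodule.CreateUser.test_send_message' here is only a placeholder:

import unittest

def runs_until_failure(test_name, max_runs=100):
    """Run a single test repeatedly; return the run number of the first
    failure, or None if it never fails within max_runs runs."""
    for run in range(1, max_runs + 1):
        suite = unittest.defaultTestLoader.loadTestsFromName(test_name)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        if not result.wasSuccessful():
            return run
    return None

# Collect a distribution over many trials, e.g.:
# samples = [runs_until_failure('mymodule.CreateUser.test_send_message') for _ in range(50)]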

With no malice or condescension, how does GrandOpener look to prove his imaginary unreliable tests have been fixed?
