Write pytest test function return value to file with pytest.hookimpl
I am looking for a way to access the return value of a test function in order to include that value in a test report file (similar to http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures ).
Code example that I would like to use:
# modified example code from http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures
import pytest
import os.path

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.passed:
        mode = "a" if os.path.exists("return_values.txt") else "w"
        with open("return_values.txt", mode) as f:
            # THE FOLLOWING LINE IS THE ONE I CANNOT FIGURE OUT
            # HOW DO I ACCESS THE TEST FUNCTION RETURN VALUE?
            return_value = item.return_value
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")
I expect the return value to be written to the file "return_values.txt". Instead, I get an AttributeError.
Background (in case you can recommend a totally different approach):
I have a Python library for data analysis on a given problem, and a standard set of test data on which I routinely run the analysis to produce various "benchmark" metrics for the quality of the analysis algorithms. For example, one such metric is the trace of a normalized confusion matrix produced by the analysis code (which I would like to be as close to 1 as possible). Another metric is the CPU time needed to produce an analysis result.
I am looking for a nice way to include these benchmark results in a CI framework (currently Jenkins), so that it becomes easy to see whether a commit improves or degrades the analysis performance. Since I am already running pytest in the CI sequence, and since I would like to use various features of pytest for my benchmarks (fixtures, marks, skipping, cleanup), I thought about simply adding a post-processing hook in pytest (see http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures ) that collects test function run times and return values and reports them (or only those marked as benchmarks) to a file, which will then be collected and archived as a test artifact by my CI framework.
I am open to other ways to solve this problem, but my Google search conclusion is that pytest is the framework that comes closest to already providing what I need.
Sharing the same problem, here is a different solution I came up with: use the record_property fixture in the test:
def test_mytest(record_property):
    record_property("key", 42)
and then in conftest.py we can use the pytest_runtest_teardown hook:
# conftest.py
def pytest_runtest_teardown(item, nextitem):
    results = dict(item.user_properties)
    if not results:
        return
    with open(f'{item.name}_return_values.txt', 'a') as f:
        for key, value in results.items():
            f.write(f'{key} = {value}\n')
and then the content of test_mytest_return_values.txt:
key = 42
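Since the question only wants benchmark metrics recorded, the teardown hook could also be restricted to tests carrying a custom marker. A minimal sketch, assuming a made-up marker name "benchmark" (registered in pytest_configure so pytest does not warn about an unknown marker):

```python
# conftest.py -- sketch: only record properties of tests marked
# @pytest.mark.benchmark ("benchmark" is an arbitrary, made-up marker name)
def pytest_configure(config):
    # register the custom marker to avoid PytestUnknownMarkWarning
    config.addinivalue_line("markers", "benchmark: record this test's properties")


def pytest_runtest_teardown(item, nextitem):
    # skip tests that do not opt in via the marker
    if item.get_closest_marker("benchmark") is None:
        return
    results = dict(item.user_properties)
    if not results:
        return
    with open(f"{item.name}_return_values.txt", "a") as f:
        for key, value in results.items():
            f.write(f"{key} = {value}\n")
```

A test then opts in with @pytest.mark.benchmark and calls record_property as above; unmarked tests leave no file behind.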
This can be combined with hoefling's answer below, using results = dict(item.user_properties) to obtain the keys and values that were added in the test, instead of adding a dict to config and then accessing it in the test.

pytest ignores a test function's return value, as can be seen in the code:
@hookimpl(trylast=True)
def pytest_pyfunc_call(pyfuncitem):
    testfunction = pyfuncitem.obj
    ...
    testfunction(**testargs)
    return True
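Because the default implementation above discards the result, one speculative workaround would be a hookwrapper around pytest_pyfunc_call that swaps pyfuncitem.obj for a wrapper capturing the result before pytest's own implementation calls it. This is a sketch relying on undocumented internals, not an official API; the attribute name _return_value is made up:

```python
# conftest.py -- speculative sketch: capture a test function's return value
# by wrapping the function before pytest's own pytest_pyfunc_call runs it
import functools

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
    testfunction = pyfuncitem.obj

    @functools.wraps(testfunction)
    def wrapper(*args, **kwargs):
        # stash the value on the item ("_return_value" is an arbitrary name)
        pyfuncitem._return_value = testfunction(*args, **kwargs)
        # return None so pytest does not see a non-None test return value
        return None

    pyfuncitem.obj = wrapper
    yield
```

A pytest_runtest_makereport hook like the one in the question could then read getattr(item, '_return_value', None) instead of the non-existent item.return_value.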
You can, however, store anything you need in the test function; I usually use the config object for that. Example: put the following snippet in your conftest.py:
import pathlib
import pytest

def pytest_configure(config):
    # create the dict to store custom data
    config._test_results = dict()

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.passed:
        # get the custom data
        return_value = item.config._test_results.get(item.nodeid, None)
        # write to file
        report = pathlib.Path('return_values.txt')
        with report.open('a') as f:
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")
Now store the data in tests:
def test_fizz(request):
    request.config._test_results[request.node.nodeid] = 'mydata'