
setup and teardown with py.test data driven testing

I have a lot of tests that basically perform the same actions but with different data, so I wanted to implement them using pytest. I managed to do it in a typical JUnit style like this:

import pytest
import random

d = {1: 'Hi', 2: 'How', 3: 'Are', 4: 'You ?'}

def setup_function(function):
    print("setUp", flush=True)

def teardown_function(function):
    print("tearDown", flush=True)

@pytest.mark.parametrize("test_input", [1,2,3,4])
def test_one(test_input):
    print("Test with data " + str(test_input))
    print(d[test_input])
    assert True

Which gives me the following output:

C:\Temp>pytest test_prueba.py -s

============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: C:\Temp, inifile:
collected 4 items

test_prueba.py setUp
Test with data 1
Hi
.tearDown
setUp
Test with data 2
How
.tearDown
setUp
Test with data 3
Are
.tearDown
setUp
Test with data 4
You ?
.tearDown

========================== 4 passed in 0.03 seconds ===========================

The problem now is that I would like to perform some actions in the setup and teardown that need access to the test_input value.

Is there an elegant solution for this? Perhaps I should use the parametrization, or the setup/teardown, in a different way? If that is the case, can someone give an example of data-driven testing with parametrized setup and teardown?

Thanks!

parametrize on a test is more for just specifying raw inputs and expected outputs. If you need access to the parameter in the setup, then it's more part of a fixture than a test.

So you might like to try:

import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture(params=["good", "bad", "error"])
def parameterized_fixture(request):
    param = request.param
    yield from thing_that_uses_param(param)

def test_one(parameterized_fixture):
    assert parameterized_fixture.lower() == "success"

Which outputs:

============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:

collecting ... collected 3 items

a.py::test_one[good] PASSED                                              [ 33%]
a.py::test_one[bad] FAILED                                               [ 66%]
a.py::test_one[error] FAILED                                             [100%]

================================== FAILURES ===================================
________________________________ test_one[bad] ________________________________

parameterized_fixture = 'FAIL'

    def test_one(parameterized_fixture):
>       assert parameterized_fixture.lower() == "success"
E       AssertionError: assert 'fail' == 'success'
E         - fail
E         + success

a.py:28: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_one[error] _______________________________

parameterized_fixture = None

    def test_one(parameterized_fixture):
>       assert parameterized_fixture.lower() == "success"
E       AttributeError: 'NoneType' object has no attribute 'lower'

a.py:28: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================

However, this requires creating a parametrized fixture for each set of parameters you might want to use with a fixture.

You could alternatively mix and match the parametrize mark and a fixture that reads those params, but that requires the test to use specific names for the parameters. You will also need to make sure such names are unique so they don't conflict with any other fixtures trying to do the same thing. For instance:

import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture
def my_fixture(request):
    # fixturenames/getfixturevalue are the current names for the
    # deprecated funcargnames/getfuncargvalue
    if "my_fixture_param" not in request.fixturenames:
        raise ValueError("could use a default instead here...")
    param = request.getfixturevalue("my_fixture_param")
    yield from thing_that_uses_param(param)

@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
    assert my_fixture.lower() == "success"

Which outputs:

============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items

a.py::test_two[good] PASSED                                              [ 33%]
a.py::test_two[bad] FAILED                                               [ 66%]
a.py::test_two[error] FAILED                                             [100%]

================================== FAILURES ===================================
________________________________ test_two[bad] ________________________________

my_fixture = 'FAIL', my_fixture_param = 'bad'

    @pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
    def test_two(my_fixture, my_fixture_param):
>       assert my_fixture.lower() == "success"
E       AssertionError: assert 'fail' == 'success'
E         - fail
E         + success

a.py:25: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_two[error] _______________________________

my_fixture = None, my_fixture_param = 'error'

    @pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
    def test_two(my_fixture, my_fixture_param):
>       assert my_fixture.lower() == "success"
E       AttributeError: 'NoneType' object has no attribute 'lower'

a.py:25: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================

I think what you are looking for is yield fixtures. You can make an autouse fixture that runs something before and after every test, and inside it you can access all the test metadata (marks, parameters, etc.); you can read about it in the pytest fixture documentation.

Access to the parameters is via the fixture argument called request.
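A minimal sketch of such an autouse yield fixture. The helper `params_of` and the fixture name `around_each` are illustrative; the fixture relies on `request.node` (the test item), whose `callspec.params` holds the parametrize values when the test is parametrized:

```python
import pytest

def params_of(node):
    # parametrized values, if any, live on the test item's callspec
    callspec = getattr(node, "callspec", None)
    return callspec.params if callspec is not None else {}

@pytest.fixture(autouse=True)
def around_each(request):
    # runs before every test in the module
    print("setUp for", request.node.name, "with", params_of(request.node))
    yield  # the test body runs here
    # runs after every test, even if it failed
    print("tearDown for", request.node.name)

@pytest.mark.parametrize("test_input", [1, 2, 3, 4])
def test_demo(test_input):
    assert test_input in (1, 2, 3, 4)
```

Because the fixture is autouse, no test in the module needs to request it explicitly.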

IMO, setup and teardown should not access test_input values. If you want it to work that way, there is probably a problem in your test logic.

setup and teardown should be independent of the values used by the test. However, you can use another fixture to get the task done.
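A sketch of that "another fixture" idea applied to the original question: the per-test setUp/tearDown moves into a parametrized fixture, where both sides can see the current value via `request.param`, while `setup_function`/`teardown_function` (if kept) stay value-independent. The fixture name `test_data` is illustrative:

```python
import pytest

d = {1: 'Hi', 2: 'How', 3: 'Are', 4: 'You ?'}

@pytest.fixture(params=[1, 2, 3, 4])
def test_data(request):
    # setup sees the current parameter
    print("setUp for", request.param, flush=True)
    yield request.param, d[request.param]
    # teardown sees it as well
    print("tearDown for", request.param, flush=True)

def test_one(test_data):
    number, word = test_data
    print("Test with data", number)
    print(word)
    assert word == d[number]
```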

I have a lot of tests that basically do the same actions but with different data

In addition to Dunes' answer, which relies solely on pytest, this part of your question makes me think that pytest-cases could be useful to you too, especially if some test data should be parametrized while other data is not.

See this other post for an example, and also the documentation of course. I'm the author by the way ;)
