
Pytest: Parameterize unit test using a fixture that uses another fixture as input

I am new to parameterization and fixtures and still learning. I found a few posts that use indirect parameterization, but it is difficult for me to implement based on what I have in my code. I would appreciate any ideas on how I could achieve this.

I have a couple of fixtures in my conftest.py that supply input files to a function "get_fus_output()" in my test file. That function processes the input and generates two DataFrames to compare in my testing. Further, I am subsetting those two DataFrames based on a common value ('Fus_id') to test them individually. So the output of this function would be [(Truth_df1, test_df1), (Truth_df2, test_df2), ...], just so I can parameterize the testing of each of these truth/test pairs. Unfortunately I am not able to use this in my test function "test_annotation_match", since that output comes from a fixture.

I am not able to feed one fixture as input to another fixture in order to parameterize. I know this is not supported in pytest, but I am not able to figure out a workaround with indirect parameterization.

#fixtures from conftest.py

@pytest.fixture(scope="session")
def test_input_df(fixture_path):
    fus_bkpt_file = os.path.join(fixture_path, 'test_bkpt.tsv')
    test_input_df= pd.read_csv(fus_bkpt_file, sep='\t')
    return test_input_df


@pytest.fixture
def test_truth_df(fixture_path):
    test_fus_out_file = os.path.join(fixture_path, 'test_expected_output.tsv')
    test_truth_df = pd.read_csv(test_fus_out_file, sep='\t')
    return test_truth_df

@pytest.fixture
def res_path():
    return utils.get_res_path()

#test script

@pytest.fixture
def get_fus_output(test_input_df, test_truth_df, res_path):
    param_list = []
    # get output from script
    script_out = ex_annot.run(test_input_df, res_path)

    for index, row in test_input_df.iterrows():
        fus_id = row['Fus_id']
        param_list.append((get_frame(test_truth_df, fus_id), get_frame(script_out, fus_id)))
    
    # param_list eg : [(Truth_df1, test_df1),(Truth_df2, test_df2)...]
    print(param_list)
    return param_list


@pytest.mark.parametrize("get_fus_output", [test_input_df, test_truth_df, res_path], indirect=True)
def test_annotation_match(get_fus_output):
    test, expected = get_fusion_output
    assert_frame_equal(test, expected, check_dtype=False, check_like=True)

#OUTPUT
================================================================================ ERRORS ================================================================================
_______________________________________________________ ERROR collecting test_annotations.py _______________________________________________________
test_annotations.py:51: in <module>
    @pytest.mark.parametrize("get_fus_output", [test_input_df, test_truth_df, res_path], indirect=True)
E   NameError: name 'test_input_df' is not defined
======================================================================= short test summary info ========================================================================
ERROR test_annotations.py - NameError: name 'test_input_df' is not defined
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================================================== 1 error in 1.46s ===========================================================================

I'm not 100% sure I understand what you are trying to do here, but I think your understanding of parameterization and the role of fixtures is incorrect. It seems like you are trying to use the fixtures to create the parameter lists for your tests, which isn't really the right way to go about it (and the way you are doing it certainly won't work, as you are seeing).

To fully explain how to fix this, first, let me give a little background about how parameterization and fixtures are meant to be used.

Parameterization

I don't think anything here should be new, but just to make sure we are on the same page:

Normally, in Pytest, one test_* function is one test case:

def test_square():
    assert square(3) == 9

If you want to do the same test but with different data, you can write separate tests:

def test_square_pos():
    assert square(3) == 9

def test_square_frac():
    assert square(0.5) == 0.25

def test_square_zero():
    assert square(0) == 0

def test_square_neg():
    assert square(-3) == 9

This isn't great, because it violates the DRY principle. Parameterization is the solution to this. You turn one test case into several by providing a list of test parameters:

@pytest.mark.parametrize('test_input,expected',
                         [(3, 9), (0.5, 0.25), (0, 0), (-3, 9)])
def test_square(test_input, expected):
    assert square(test_input) == expected

Fixtures

Fixtures are also about DRY code, but in a different way.

Suppose you are writing a web app. You might have several tests that need a connection to the database. You can add the same code to each test to open and set up a test database, but that's definitely repeating yourself. If you, say, switch databases, that's a lot of test code to update.

Fixtures are functions that allow you to do some setup (and potentially teardown) that can be used for multiple tests:

@pytest.fixture
def db_connection():
    # Open a temporary database in memory
    db = sqlite3.connect(':memory:')
    # Create a table of test orders to use
    db.execute('CREATE TABLE orders (id, customer, item)')
    db.executemany('INSERT INTO orders (id, customer, item) VALUES (?, ?, ?)',
                   [(1, 'Max', 'Pens'),
                    (2, 'Rachel', 'Binders'),
                    (3, 'Max', 'White out'),
                    (4, 'Alice', 'Highlighters')])
    return db      

def test_get_orders_by_name(db_connection):
    orders = get_orders_by_name(db_connection, 'Max')
    assert orders == [(1, 'Max', 'Pens'),
                      (3, 'Max', 'White out')]

def test_get_orders_by_name_nonexistent(db_connection):
    orders = get_orders_by_name(db_connection, 'John')
    assert orders == []

Fixing Your Code

Ok, so with that background out of the way, let's dig into your code.

The first problem is with your @pytest.mark.parametrize decorator:

@pytest.mark.parametrize("get_fus_output", [test_input_df, test_truth_df, res_path], indirect=True)

This isn't the right situation to use indirect. Just like tests can be parameterized, fixtures can be parameterized, too. It's not very clear from the docs (in my opinion), but indirect is just an alternative way to parameterize fixtures. That's totally different from using a fixture in another fixture, which is what you want.
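
Just to show what indirect is actually for (a minimal sketch, unrelated to your code): each value in the parametrize list is handed to the fixture as request.param, and the test receives whatever the fixture returns:

import pytest

@pytest.fixture
def squared(request):
    # request.param is the value supplied by @pytest.mark.parametrize
    return request.param ** 2

@pytest.mark.parametrize("squared", [2, 3, 4], indirect=True)
def test_squared_is_positive(squared):
    assert squared > 0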

In fact, for get_fus_output to use the test_input_df, test_truth_df, and res_path fixtures, you don't need the @pytest.mark.parametrize line at all. In general, any argument to a test function or fixture is automatically assumed to be a fixture if it's not otherwise used (e.g. by the @pytest.mark.parametrize decorator).
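
For example (another minimal sketch, unrelated to your code), one fixture can simply name another fixture as an argument and pytest resolves it automatically:

import pytest

@pytest.fixture
def numbers():
    return [1, 2, 3]

@pytest.fixture
def doubled(numbers):
    # pytest sees the `numbers` argument and injects that fixture's return value
    return [n * 2 for n in numbers]

def test_doubled(doubled):
    assert doubled == [2, 4, 6]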

So, your existing @pytest.mark.parametrize isn't doing what you expect. How do you parameterize your test then? This is getting into the bigger problem: you are trying to use the get_fus_output fixture to create the parameters for test_annotation_match. That isn't the sort of thing you can do with a fixture.

When Pytest runs, first it collects all the test cases, then it runs them one by one. Test parameters have to be ready during the collection stage, but fixtures don't run until the testing stage. There is no way for code inside a fixture to help with parameterization. You can still generate your parameters programmatically, but fixtures aren't the way to do it.

You'll need to do a few things:

First, convert get_fus_output from a fixture to a regular function. That means removing the @pytest.fixture decorator, but you've also got to update it not to use the test_input_df, test_truth_df, and res_path fixtures. (If nothing else needs them as fixtures, you can convert them all to regular functions too; in that case, you probably want to put them in their own module outside of conftest.py, or just move them into the same test script.) A rough sketch of that is shown below.
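
Here is a rough sketch of the plain-function version, reusing the file names from your conftest.py. Note that I'm assuming you can locate your fixture directory without the fixture_path fixture; the utils.get_fixture_path() call below is hypothetical and stands in for however you actually do that:

def get_fus_output():
    # Plain function: it can run at collection time because it uses no fixtures.
    fixture_path = utils.get_fixture_path()  # hypothetical helper; replace with your real path lookup
    test_input_df = pd.read_csv(os.path.join(fixture_path, 'test_bkpt.tsv'), sep='\t')
    test_truth_df = pd.read_csv(os.path.join(fixture_path, 'test_expected_output.tsv'), sep='\t')
    res_path = utils.get_res_path()

    # get output from script
    script_out = ex_annot.run(test_input_df, res_path)

    param_list = []
    for _, row in test_input_df.iterrows():
        fus_id = row['Fus_id']
        param_list.append((get_frame(test_truth_df, fus_id),
                           get_frame(script_out, fus_id)))
    return param_list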

Then, @pytest.mark.parametrize needs to use that function to get a list of parameters:

@pytest.mark.parametrize("expected,test", get_fus_output())
def test_annotation_match(expected, test):
    assert_frame_equal(test, expected, check_dtype=False, check_like=True)
