
parametrize and running a single test in pytest

How can I run a single test out of a set configured with parametrize? Let's say I have the following test method:

@pytest.mark.parametrize(PARAMETERS_LIST, PARAMETERS_VALUES)
def test_my_feature(self, param1, param2, param3):
    """
    test doc
    """
    if param1 == 'value':
        assert True
    else:
        print('not value')
        assert False

I have 3 parameters, and I generate a list of 15 different value combinations for them to test the function with.
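
For concreteness, a minimal sketch of what those placeholders might hold (the names and values here are invented):

# Hypothetical placeholder contents: three parameter names plus a
# list of 15 value triples, as pytest.mark.parametrize expects
PARAMETERS_LIST = "param1,param2,param3"
PARAMETERS_VALUES = [("value", i, i + 1) for i in range(15)]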

How can I run just one of them, other than the obvious way of giving a single combination instead of 15?
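
For reference, pytest itself can select a single parametrised case by its node ID, which is the test name followed by the generated parameter values in square brackets, or by substring with -k. A sketch, assuming the test lives in test_feature.py inside a class TestFeature (both names are invented) and that one of the generated IDs is value-2-3:

# List the generated node IDs without running anything
pytest test_feature.py --collect-only -q

# Run exactly one case by node ID (quote it so the shell
# doesn't interpret the square brackets)
pytest "test_feature.py::TestFeature::test_my_feature[value-2-3]"

# Or select cases whose ID contains a substring (may match several)
pytest test_feature.py -k "value-2"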

I know this question is answered, but I wasn't satisfied with the answer for my use case.

I have some parametrised tests that take longer than I'd like to run frequently. It would be useful to pass a parameter to pytest on the command line that sets a maximum number of runs for each parametrised test. That way I get reassurance that my code works for some data sets without having to wait for every data set to be checked, while still preserving the code that tests every data set (to be run less frequently).

I have achieved this by adding the following to my conftest.py:

def pytest_addoption(parser):
    parser.addoption(
        "--limit",
        action="store",
        default=-1,
        type=int,
        help="Maximum number of permutations of parametrised tests to run",
    )


def pytest_collection_modifyitems(session, config, items):
    def get_base_name(test_name):
        """
        Get the name of a test without its parameters.

        Parametrised tests have a [ character after the base test name,
        followed by the parameter values. This strips the [ and everything
        after it from the test name.
        """
        try:
            return test_name[: test_name.index("[")]
        except ValueError:
            # No [ in the name: the test is not parametrised
            return test_name

    limit = config.getoption("--limit")
    if limit >= 0:
        # Group the collected items by their base (unparametrised) name,
        # then mark everything past the limit in each group as skipped
        tests_by_name = {item.name: item for item in items}
        test_base_names = set(get_base_name(name) for name in tests_by_name)

        for base_name in test_base_names:
            to_skip = [
                t for n, t in tests_by_name.items() if get_base_name(n) == base_name
            ][limit:]
            for t in to_skip:
                t.add_marker("skip")
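
With that in conftest.py, capping the permutations is just a flag on the command line (the file name here is invented for illustration):

# Run at most 3 permutations of each parametrised test;
# the rest are still collected but marked as skipped
pytest test_feature.py --limit 3

# The default of -1 disables the limit
pytest test_feature.py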

