@pytest.mark.parametrize('feed', ['C', 'D'])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
def test_1(feed: str, file: str):
    assert (Path(feed) / file).is_file(), 'Not a file'
@pytest.mark.parametrize('feed_C, feed_D', [('C', 'D')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
@pytest.mark.parametrize('column', ['name', 'surname'])
def test_2(feed_C: str, feed_D: str, file: str, column: str):
    df1 = pd.read_csv(Path(feed_C) / file, sep="\t")
    df2 = pd.read_csv(Path(feed_D) / file, sep="\t")
    assert df1[column].equals(df2[column]), 'data frames are not equal.'
I have two test functions, test_1 and test_2. test_2 should be dependent on test_1, but the iterations of the two tests are different.

test_1 iterations => foo.txt_C, foo.txt_D, boo.txt_C, boo.txt_D, doo.txt_C, doo.txt_D

test_2 iterations => name_foo.txt_C_D, surname_foo.txt_C_D, name_boo.txt_C_D, surname_boo.txt_C_D, name_doo.txt_C_D, surname_doo.txt_C_D

I want, for example, the iteration name_foo.txt_C_D in test_2 to be dependent on the results of the test_1 iterations foo.txt_C and foo.txt_D. If either of them fails (even just one), then the name_foo.txt_C_D iteration in test_2 should be SKIPPED. The same goes for surname_foo.txt_C_D.
You don't need a separate test just to check whether the paths are valid; you can do it in the same test. You also shouldn't create dependencies between tests.
@pytest.mark.parametrize('feed_C, feed_D', [('C:', 'D:')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
def test(feed_C: str, feed_D: str, file: str):
    path_c = Path(feed_C) / file
    assert path_c.is_file(), 'Not a file'
    path_d = Path(feed_D) / file
    assert path_d.is_file(), 'Not a file'
    df1 = pd.read_csv(path_c, sep="\t")
    df2 = pd.read_csv(path_d, sep="\t")
    assert df1.equals(df2), 'data frames are not equal.'
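If you want the comparison iteration to be reported as skipped rather than failed when an input file is missing (closer to the behaviour asked for in the question), the same single-test approach works with `pytest.skip()` instead of an assertion on the paths. A minimal sketch, assuming `pytest` and `pandas` are installed and using placeholder feed directories:

```python
import pytest
import pandas as pd
from pathlib import Path

@pytest.mark.parametrize('feed_C, feed_D', [('C:', 'D:')])
@pytest.mark.parametrize('file', ['foo.txt', 'boo.txt', 'doo.txt'])
def test(feed_C: str, feed_D: str, file: str):
    path_c = Path(feed_C) / file
    path_d = Path(feed_D) / file
    # Skip (rather than fail) this iteration if either input file is missing
    if not path_c.is_file() or not path_d.is_file():
        pytest.skip(f'missing input file: {file}')
    df1 = pd.read_csv(path_c, sep="\t")
    df2 = pd.read_csv(path_d, sep="\t")
    assert df1.equals(df2), 'data frames are not equal.'
```

With this variant, a run against a feed directory that lacks `foo.txt` shows that iteration as `SKIPPED` in the report instead of `FAILED`, and the remaining iterations still run independently.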