How to accumulate state across tests in py.test

I currently have a project and tests similar to these.

class mylib:
    @classmethod
    def get_a(cls):
        return 'a'

    @classmethod
    def convert_a_to_b(cls, a):
        return 'b'

    @classmethod
    def works_with(cls, a, b):
        return True

class TestMyStuff(object):
    def test_first(self):
        self.a = mylib.get_a()

    def test_conversion(self):
        self.b = mylib.convert_a_to_b(self.a)

    def test_a_works_with_b(self):
        assert mylib.works_with(self.a, self.b)

With py.test 0.9.2, these tests (or similar ones) pass. With later versions of py.test, test_conversion and test_a_works_with_b fail with 'TestMyStuff has no attribute a'.

I am guessing this is because with later builds of py.test, a separate instance of TestMyStuff is created for each method that is tested.

What is the proper way to write these tests such that results can be given for each of the steps in the sequence, but the state from a previous (successful) test can (must) be used to perform subsequent tests?

Good unit test practice is to avoid state accumulated across tests. Most unit test frameworks go to great lengths to prevent you from accumulating state. The reason is that you want each test to stand on its own. This lets you run arbitrary subsets of your tests, and ensures that your system is in a clean state for each test.
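For example, a fully independent version of the tests above simply repeats the (cheap) setup inside each test, so any subset can run in any order. This is a sketch using the question's own mylib:

```python
class mylib:
    @classmethod
    def get_a(cls):
        return 'a'

    @classmethod
    def convert_a_to_b(cls, a):
        return 'b'

    @classmethod
    def works_with(cls, a, b):
        return True


def test_get_a():
    assert mylib.get_a() == 'a'


def test_conversion():
    # Repeat the setup instead of depending on test_get_a's result.
    a = mylib.get_a()
    assert mylib.convert_a_to_b(a) == 'b'


def test_a_works_with_b():
    # Rebuild everything this test needs from scratch.
    a = mylib.get_a()
    b = mylib.convert_a_to_b(a)
    assert mylib.works_with(a, b)
```

Each test now stands on its own; `py.test -k test_conversion` passes without the other tests having run.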

I partly agree with Ned in that it's good to avoid somewhat random sharing of test state. But I also think it is sometimes useful to accumulate state incrementally during tests.

With py.test you can actually do that by making it explicit that you want to share test state. Your example rewritten to work:

class State:
    """ holding (incremental) test state """

def pytest_funcarg__state(request):
    return request.cached_setup(
        setup=lambda: State(),
        scope="module"
    )

class mylib:
    @classmethod
    def get_a(cls):
        return 'a'

    @classmethod
    def convert_a_to_b(cls, a):
        return 'b'

    @classmethod
    def works_with(cls, a, b):
        return True

class TestMyStuff(object):
    def test_first(self, state):
        state.a = mylib.get_a()

    def test_conversion(self, state):
        state.b = mylib.convert_a_to_b(state.a)

    def test_a_works_with_b(self, state):
        assert mylib.works_with(state.a, state.b)

You can run this with recent py.test versions. Each test function receives a "state" object; the "funcarg" factory creates it on first use and caches it over the module scope. Together with py.test's guarantee that tests run in file order, the test functions will work incrementally on the shared test "state".

However, it is a bit fragile: if you select only "test_conversion", e.g. via "py.test -k test_conversion", your test will fail because the first test hasn't run. I think some way to do incremental tests would be nice, so maybe we can eventually find a totally robust solution.
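One recipe in this direction comes from the pytest documentation's "incremental testing" example: a pair of hooks in conftest.py that, once one test in a class marked @pytest.mark.incremental fails, xfails the remaining tests in that class instead of letting them error confusingly. A sketch of that recipe (a conftest.py fragment, so it only does something when run under pytest):

```python
# conftest.py -- sketch of the "incremental testing" recipe from the
# pytest documentation. Tests in a class marked @pytest.mark.incremental
# are xfailed as soon as an earlier test in the same class fails.
import pytest


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords and call.excinfo is not None:
        # Remember which test in this class failed.
        item.parent._previousfailed = item


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
```

With this in place, decorating TestMyStuff with @pytest.mark.incremental makes the dependency between the steps explicit to the runner.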

HTH, holger

This is surely a job for pytest fixtures: https://docs.pytest.org/en/latest/fixture.html

Fixtures allow test functions to easily receive and work against specific pre-initialized application objects without having to care about import/setup/cleanup details. It's a prime example of dependency injection where fixture functions take the role of the injector and test functions are the consumers of fixture objects.

So an example of setting up a fixture to hold state would be as follows:

import pytest


class State:

    def __init__(self):
        self.state = {}


@pytest.fixture(scope='session')
def state() -> State:
    state = State()
    state.state['from_fixture'] = 0
    return state


def test_1(state: State) -> None:
    state.state['from_test_1'] = 1
    assert state.state['from_fixture'] == 0
    assert state.state['from_test_1'] == 1


def test_2(state: State) -> None:
    state.state['from_test_2'] = 2
    assert state.state['from_fixture'] == 0
    assert state.state['from_test_1'] == 1
    assert state.state['from_test_2'] == 2

Note that you can specify the scope for the dependency injection (and hence the state). In this case I have set it to session; the other option would be module (scope='function' wouldn't work for your use case, as you would lose state between functions).

Obviously you can extend this pattern to hold other types of objects in the state, such as comparing outcomes from different tests.

As a word of warning: you still want to be able to run your tests in any order (my example breaches this; swapping the order of test_1 and test_2 results in failure). However, I have not illustrated that, for the sake of simplicity.

To complement hpk42's answer, you can also use pytest-steps to perform incremental testing; this can help in particular if you wish to share some kind of incremental state/intermediate results between the steps.

With this package you do not need to put all the steps in a class (you can, but it is not required); simply decorate your "test suite" function with @test_steps.

EDIT: there is a new 'generator' mode to make it even easier:

from pytest_steps import test_steps

@test_steps('step_first', 'step_conversion', 'step_a_works_with_b')
def test_suite_with_shared_results():
    a = mylib.get_a()
    yield

    b = mylib.convert_a_to_b(a)
    yield

    assert mylib.works_with(a, b)
    yield

LEGACY answer:

You can add a steps_data parameter to your test function if you wish to share a StepsDataHolder object between your steps.

Your example would then read:

from pytest_steps import test_steps, StepsDataHolder

def step_first(steps_data):
    steps_data.a = mylib.get_a()


def step_conversion(steps_data):
    steps_data.b = mylib.convert_a_to_b(steps_data.a)


def step_a_works_with_b(steps_data):
    assert mylib.works_with(steps_data.a, steps_data.b)


@test_steps(step_first, step_conversion, step_a_works_with_b)
def test_suite_with_shared_results(test_step, steps_data: StepsDataHolder):

    # Execute the step with access to the steps_data holder
    test_step(steps_data)

Finally, note that you can automatically skip or fail a step if another has failed using @depends_on; check the documentation for details.

(I'm the author of this package by the way ;) )

As I spent more time with this problem, I realized there was an implicit aspect to my question that I neglected to specify. In most scenarios, I found that I wanted to accumulate state within a single class, but discard it when the test class had completed.

For some of my classes, where the class itself represented a process that accumulated state, I ended up storing the accumulated state in the class object itself.

class mylib:
    @classmethod
    def get_a(cls):
        return 'a'

    @classmethod
    def convert_a_to_b(cls, a):
        return 'b'

    @classmethod
    def works_with(cls, a, b):
        return True

class TestMyStuff(object):
    def test_first(self):
        self.__class__.a = mylib.get_a()

    def test_conversion(self):
        self.__class__.b = mylib.convert_a_to_b(self.a)

    def test_a_works_with_b(self):
        assert mylib.works_with(self.a, self.b)

The advantage of this approach is that it keeps the state encapsulated within the test class (there are no auxiliary functions that have to be present for the test to run), and it would be suitably awkward for a different class to expect the TestMyStuff state to be present when that other class runs.
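Why this works can be shown without pytest at all: writing through self.__class__ stores the attribute on the class, so a fresh instance (like the one pytest creates for each test method) still finds it via normal attribute lookup. A minimal sketch:

```python
class mylib:
    @classmethod
    def get_a(cls):
        return 'a'


class TestMyStuff:
    def test_first(self):
        # Stored on the class, not on this particular instance.
        self.__class__.a = mylib.get_a()


first = TestMyStuff()
first.test_first()

second = TestMyStuff()          # fresh instance, as pytest would create
assert second.a == 'a'          # found on the class via attribute lookup
assert 'a' not in vars(second)  # not an instance attribute
```

When pytest finishes the class, the class object (and the state hanging off it) is simply no longer used, which matches the "discard when the class completes" requirement.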

I think each of the approaches discussed thus far has its merits, and I intend to use each one where it fits best.
