
Python Dynamic Test Plan generation

I am using Sphinx for documentation and pytest for testing. I need a test plan, but I really don't want to write it by hand.

It occurred to me that a neat solution would be to embed test metadata in the tests themselves, within their respective docstrings. This metadata would include things like percentage complete, time remaining, etc. I could then run through all of the tests (which at this point would mostly be placeholders) and generate a test plan from them. This would guarantee that the test plan and the tests themselves stay in sync.

I was thinking of making either a pytest plugin or a Sphinx plugin to handle this.

Using pytest, the closest hook I can see is pytest_collection_modifyitems, which is called after all of the tests have been collected.
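I imagine the skeleton would be something like this (untested sketch; the metadata parsing is the part I'd still have to write):

def pytest_collection_modifyitems(session, config, items):
    # 'items' is the full list of collected tests; each one exposes the
    # underlying function, and therefore its docstring, via item.function
    for item in items:
        doc = item.function.__doc__ or ''
        # ...extract the :plan_...: fields from doc here...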

Alternatively, I was thinking of using Sphinx, perhaps copying/modifying the todolist extension, as it seems like the closest match to this idea. Its output would be more useful, since it would slot nicely into my existing Sphinx-based docs, but there is a lot going on in that extension and I don't really have the time to invest in understanding it.

The docstrings could have something like this within them:

:plan_complete: 50 #percentage indicating how complete this test is
:plan_remaining: 2 #estimated hours remaining to complete this test
:plan_focus: something #what the test is focused on testing

The idea is then to generate a simple markdown/rst (or similar) table based on each function's name, docstring, and embedded plan info, and use that as the test plan.
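For example, a mostly-placeholder test carrying this metadata might look like this (the name and focus are purely illustrative):

def test_dll_load():
    """Verify that the device DLL loads and exposes the expected entry points.

    :plan_complete: 50 #half of the checks are written
    :plan_remaining: 2 #estimated hours to finish
    :plan_focus: DLL loading
    """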

Does something like this already exist?

In the end I went with a pytest-based plugin, as it was just so much simpler to code.

If anyone else is interested, below is the plugin:

"""Module to generate a test plan table based upon metadata extracted from test
docstrings. The test description is extracted from the first sentence or up to
the first blank line. The data which is extracted from the docstrings are of the
format:

    :test_remaining: 10 #number of hours remaining for this test to be complete. If
                     not present, assumed to be 0
    :test_complete: #the percentage of the test that is complete. If not
                    present, assumed to be 100
    :test_focus: The item the test is focusing on such as a DLL call.

"""
import pytest
import re
from functools import partial
from operator import itemgetter
from pathlib import Path

whitespace_re = re.compile(r'\s+')
cut_whitespace = partial(whitespace_re.sub, ' ')
plan_re = re.compile(r':plan_(\w+?):')
plan_handlers = {
    'remaining': lambda x: int(x.split('#')[0]),
    'complete': lambda x: int(x.strip().split('#')[0]),
    'focus': lambda x: x.strip().split('#')[0],
}
csv_template = """.. csv-table:: Test Plan
   :header: "Name", "Focus", "% Complete", "Hours Remaining", "Description", "Path"
   :widths: 20, 20, 10, 10, 60, 100

{tests}

Overall hours remaining: {hours_remaining:.2f}
Overall % complete: {complete:.2f}

"""

class GeneratePlan:
    def __init__(self, output_file=Path('test_plan.rst')):
        self.output_file = output_file

    def pytest_collection_modifyitems(self, session, config, items):
        # Key by nodeid stripped of any parametrization suffix so that
        # parametrized tests only appear once in the plan
        items_to_parse = {i.nodeid.split('[')[0]: i for i in self.item_filter(items)}
        parsed = [self.parse_item(n, i) for (n, i) in items_to_parse.items()]

        complete, hours_remaining = self.get_summary_data(parsed)

        self.output_file.write_text(csv_template.format(
                    tests = '\n'.join(self.generate_rst_table(parsed)),
                    complete=complete,
                    hours_remaining=hours_remaining))

    def item_filter(self, items):
        return items  # override in a subclass to restrict which tests appear in the plan

    def get_summary_data(self, parsed):
        completes = [p['complete'] for p in parsed]
        # Guard against ZeroDivisionError when no tests match the filter
        overall_complete = sum(completes) / len(completes) if completes else 100
        overall_hours_remaining = sum(p['remaining'] for p in parsed)
        return overall_complete, overall_hours_remaining


    def generate_rst_table(self, items):
        "Yield quoted CSV rows, indented so they sit inside the csv-table directive"
        sorted_items = sorted(items, key=lambda x: x['name'])
        quoter = '"{}"'.format
        getter = itemgetter(*'name focus complete remaining description path'.split())
        for item in sorted_items:
            yield 3 * ' ' + ', '.join(map(quoter, getter(item)))

    def parse_item(self, path, item):
        "Process a pytest provided item"

        data = {
            'name': item.name.split('[')[0],
            'path': path.split('::')[0],
            'description': '',
            'remaining': 0,
            'complete': 100,
            'focus': ''
        }

        doc = item.function.__doc__
        if doc:
            desc = self.extract_description(doc)
            data['description'] = desc
            plan_info = self.extract_info(doc)
            data.update(plan_info)

        return data

    def extract_description(self, doc):
        first_sentence = doc.split('\n\n')[0].replace('\n',' ')
        return cut_whitespace(first_sentence)

    def extract_info(self, doc):
        plan_info = {}
        for sub_str in doc.split('\n\n'):
            cleaned = cut_whitespace(sub_str.replace('\n', ' '))
            splitted = plan_re.split(cleaned)
            if len(splitted) > 1:
                # re.split with a capture group alternates key/value pairs;
                # index 0 is the text before the first :plan_...: field
                i = iter(splitted[1:])
                while True:
                    try:
                        key = next(i)
                        val = next(i)
                    except StopIteration:
                        break
                    assert key
                    if key in plan_handlers:
                        plan_info[key] = plan_handlers[key](val)
        return plan_info
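To make the output concrete: for a single placeholder test like the one sketched in the question, the generated test_plan.rst comes out roughly like this (the test name and path are illustrative):

.. csv-table:: Test Plan
   :header: "Name", "Focus", "% Complete", "Hours Remaining", "Description", "Path"
   :widths: 20, 20, 10, 10, 60, 100

   "test_dll_load", "DLL loading", "50", "2", "Verify that the device DLL loads and exposes the expected entry points.", "tests/hw_regression_tests/test_dll.py"

Overall hours remaining: 2.00
Overall % complete: 50.00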


From my conftest.py file, I have a command line argument configured within a pytest_addoption function:

def pytest_addoption(parser):
    parser.addoption('--generate_test_plan', action='store_true',
                     default=False, help="Generate test plan")

And I then configure the plugin within this function:

def pytest_configure(config):
    output_test_plan_file = Path('docs/source/test_plan.rst')

    class CustomPlan(GeneratePlan):
        def item_filter(self, items):
            # Only include the hardware regression tests in the plan
            return (i for i in items if 'tests/hw_regression_tests' in i.nodeid)

    if config.getoption('generate_test_plan'):
        config.pluginmanager.register(CustomPlan(output_file=output_test_plan_file))
        # or, to include every collected test:
        # config.pluginmanager.register(GeneratePlan())

Finally, in one of my Sphinx documentation source files, I simply include the output rst file:

Autogenerated test_plan
=======================

The table below is extracted from the individual tests in the suite.

.. include:: test_plan.rst
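With that in place, regenerating the plan is a single command. Since the plugin does all of its work in pytest_collection_modifyitems, adding --collect-only regenerates the plan without actually running the suite:

pytest --generate_test_plan --collect-only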


We have done something similar at our company using Sphinx-Needs and Sphinx-Test-Reports.

Inside a test file, we use the docstring to store our test case, including metadata:

def test_my_case():
    """
    .. test:: My test case
       :id: TEST_001
       :status: in progress
       :author: me

       This test case checks for **awesome** stuff.
    """
    a = 2
    b = 5
    # ToDo: check if a + b == 7

Then we document the test cases by using autodoc.

My tests
========

.. automodule:: test.my_tests
   :members:

This results in some nice test-case objects in Sphinx, which we can filter, link, and present in tables and flowcharts. See Sphinx-Needs.

With Sphinx-Test-Reports we are loading the results into the docs as well:

.. test-report:: My Test Report
   :id: REPORT_1
   :file: ../pytest_junit_results.xml
   :links: [[tr_link('case_name', 'signature')]]

This will create objects for each test case, which we can also filter and link. Thanks to tr_link, the result objects get automatically linked to the test-case objects.

After that, we have all the needed information in Sphinx and can use e.g. .. needtable:: to get custom views on it.
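A minimal example (the 'test' type and the id/title/status columns come from the docstring metadata shown above):

.. needtable:: Test case overview
   :types: test
   :columns: id, title, status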
