.. _paramexamples:

Parametrizing tests
=================================================

.. currentmodule:: _pytest.python

py.test allows you to easily parametrize test functions.
In the following we provide some examples using
the builtin mechanisms.

.. _parametrizemark:

Simple "decorator" parametrization of a test function
----------------------------------------------------------------------------

.. versionadded:: 2.2

The builtin ``pytest.mark.parametrize`` decorator directly enables
parametrization of arguments for a test function. Here is an example
of a test function that checks that processing some input
produces the expected output::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize(("input", "expected"), [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected

Here we parametrize two arguments of the test function so that the test
function is called three times. Let's run it::

    $ py.test -q
    collecting ... collected 3 items
    ..F
    ================================= FAILURES =================================
    ____________________________ test_eval[6*9-42] _____________________________

    input = '6*9', expected = 42

        @pytest.mark.parametrize(("input", "expected"), [
            ("3+5", 8),
            ("2+4", 6),
            ("6*9", 42),
        ])
        def test_eval(input, expected):
    >       assert eval(input) == expected
    E       assert 54 == 42
    E        +  where 54 = eval('6*9')

    test_expectation.py:8: AssertionError
    1 failed, 2 passed in 0.01 seconds

As expected, only one pair of input/output values fails the simple test function.

Note that there are various ways in which you can mark groups of functions;
see :ref:`mark`.

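If you only need to parametrize a single argument, you can pass the
argument name and a plain list of values. A minimal sketch (the test
name ``test_valid_syntax`` is ours, not part of the example file above)::

    # hypothetical addition to test_expectation.py
    import pytest

    @pytest.mark.parametrize("input", ["3+5", "2+4", "6*9"])
    def test_valid_syntax(input):
        # each input must at least compile as a valid expression
        compile(input, "<string>", "eval")
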
Generating parameters combinations, depending on command line
----------------------------------------------------------------------------

.. regendoc:wipe

Let's say we want to execute a test with different computation
parameters and the parameter range shall be determined by a command
line argument. Let's first write a simple (do-nothing) computation test::

    # content of test_compute.py

    def test_compute(param1):
        assert param1 < 4

Now we add a test configuration like this::

    # content of conftest.py

    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
            help="run all combinations")

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.funcargnames:
            if metafunc.config.option.all:
                end = 5
            else:
                end = 2
            metafunc.parametrize("param1", range(end))

This means that we only run 2 tests if we do not pass ``--all``::

    $ py.test -q test_compute.py
    collecting ... collected 2 items
    ..
    2 passed in 0.01 seconds

We run only two computations, so we see two dots.
Let's run the full monty::

    $ py.test -q --all
    collecting ... collected 5 items
    ....F
    ================================= FAILURES =================================
    _____________________________ test_compute[4] ______________________________

    param1 = 4

        def test_compute(param1):
    >       assert param1 < 4
    E       assert 4 < 4

    test_compute.py:3: AssertionError
    1 failed, 4 passed in 0.02 seconds

As expected, when running the full range of ``param1`` values
we'll get an error on the last one.

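The option handling can be generalized; here is a hedged sketch (the
``--end`` option name is ours) that lets the user choose the upper
bound directly instead of toggling between two fixed values::

    # hypothetical conftest.py variant
    def pytest_addoption(parser):
        parser.addoption("--end", action="store", type="int", default=2,
            help="end of the param1 value range")

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.funcargnames:
            # parametrize directly from the option value
            metafunc.parametrize("param1",
                                 range(metafunc.config.option.end))
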
A quick port of "testscenarios"
------------------------------------

.. _`test scenarios`: http://bazaar.launchpad.net/~lifeless/testscenarios/trunk/annotate/head%3A/doc/example.py

Here is a quick port to run tests configured with `test scenarios`_,
an add-on from Robert Collins for the standard unittest framework. We
only have to work a bit to construct the correct arguments for pytest's
:py:func:`Metafunc.parametrize`::

    # content of test_scenarios.py

    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist)

    scenario1 = ('basic', {'attribute': 'value'})
    scenario2 = ('advanced', {'attribute': 'value2'})

    class TestSampleWithScenarios:
        scenarios = [scenario1, scenario2]

        def test_demo(self, attribute):
            assert isinstance(attribute, str)

This is a fully self-contained example which you can run with::

    $ py.test test_scenarios.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.1 -- pytest-2.2.4
    collecting ... collected 2 items

    test_scenarios.py ..

    ========================= 2 passed in 0.01 seconds =========================

If you just collect tests you'll also nicely see 'advanced' and 'basic'
as variants for the test function::

    $ py.test --collectonly test_scenarios.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.1 -- pytest-2.2.4
    collecting ... collected 2 items
    <Module 'test_scenarios.py'>
      <Class 'TestSampleWithScenarios'>
        <Instance '()'>
          <Function 'test_demo[basic]'>
          <Function 'test_demo[advanced]'>

    ============================= in 0.00 seconds =============================

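For a flat scenario list like this one, you could get a similar effect
with the decorator from the first section; a sketch (whether the
decorator accepts the ``ids`` keyword should be verified for your
pytest version)::

    # hypothetical decorator-based equivalent
    import pytest

    class TestSampleWithScenarios:
        @pytest.mark.parametrize("attribute", ["value", "value2"],
                                 ids=["basic", "advanced"])
        def test_demo(self, attribute):
            assert isinstance(attribute, str)

The generator-based version above scales better when scenarios carry
multiple keys or are shared across many test methods.
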
Deferring the setup of parametrized resources
---------------------------------------------------

.. regendoc:wipe

The parametrization of test functions happens at collection
time. It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that. First,
the actual test requiring a ``db`` object::

    # content of test_backends.py

    import pytest
    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")

We can now add a test configuration that generates two invocations of
the ``test_db_initialized`` function and also implements a factory that
creates a database object for the actual test invocations::

    # content of conftest.py

    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.funcargnames:
            metafunc.parametrize("db", ['d1', 'd2'], indirect=True)

    class DB1:
        "one database object"
    class DB2:
        "alternative database object"

    def pytest_funcarg__db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")

Let's first see how it looks at collection time::

    $ py.test test_backends.py --collectonly
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.1 -- pytest-2.2.4
    collecting ... collected 2 items
    <Module 'test_backends.py'>
      <Function 'test_db_initialized[d1]'>
      <Function 'test_db_initialized[d2]'>

    ============================= in 0.00 seconds =============================

And then when we run the test::

    $ py.test -q test_backends.py
    collecting ... collected 2 items
    .F
    ================================= FAILURES =================================
    _________________________ test_db_initialized[d2] __________________________

    db = <conftest.DB2 instance at 0x1d4eb00>

        def test_db_initialized(db):
            # a dummy test
            if db.__class__.__name__ == "DB2":
    >           pytest.fail("deliberately failing for demo purposes")
    E           Failed: deliberately failing for demo purposes

    test_backends.py:6: Failed
    1 failed, 1 passed in 0.01 seconds

The first invocation with ``db == "DB1"`` passed while the second with
``db == "DB2"`` failed. Our ``pytest_funcarg__db`` factory instantiated
each of the DB objects during the setup phase, while
``pytest_generate_tests`` generated two corresponding calls to
``test_db_initialized`` during the collection phase.

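If creating the database object is expensive, the factory can also
cache it across tests. A hedged sketch using ``request.cached_setup``
(check that this pre-fixture caching API exists in your pytest version;
the ``extrakey`` makes sure each parameter value gets its own cached
instance)::

    # hypothetical conftest.py variant
    def pytest_funcarg__db(request):
        def setup():
            # instantiate the class matching the current parameter
            return {"d1": DB1, "d2": DB2}[request.param]()
        return request.cached_setup(setup=setup, scope="module",
                                    extrakey=request.param)
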
.. regendoc:wipe

Parametrizing test methods through per-class configuration
--------------------------------------------------------------

.. _`unittest parameterizer`: http://code.google.com/p/unittest-ext/source/browse/trunk/params.py

Here is an example ``pytest_generate_tests`` function implementing a
parametrization scheme similar to Michael Foord's `unittest
parameterizer`_ but in a lot less code::

    # content of ./test_parametrize.py
    import pytest

    def pytest_generate_tests(metafunc):
        # called once per each test function
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = list(funcarglist[0])
        metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
                                        for funcargs in funcarglist])

    class TestClass:
        # a map specifying multiple argument sets for a test method
        params = {
            'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
            'test_zerodivision': [dict(a=1, b=0), ],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            pytest.raises(ZeroDivisionError, "a/b")

Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function. Let's run it::

    $ py.test -q
    collecting ... collected 3 items
    F..
    ================================= FAILURES =================================
    ________________________ TestClass.test_equals[1-2] ________________________

    self = <test_parametrize.TestClass instance at 0x10d2e18>, a = 1, b = 2

        def test_equals(self, a, b):
    >       assert a == b
    E       assert 1 == 2

    test_parametrize.py:18: AssertionError
    1 failed, 2 passed in 0.01 seconds

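The default test ids are derived from the argument values, as in
``test_equals[1-2]`` above. If you prefer ids that also name the
arguments, you can compute them in the generator; a sketch (the id
scheme is ours)::

    # hypothetical variant of pytest_generate_tests
    def pytest_generate_tests(metafunc):
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = list(funcarglist[0])
        metafunc.parametrize(
            argnames,
            [[funcargs[name] for name in argnames]
             for funcargs in funcarglist],
            # e.g. dict(a=1, b=2) becomes the id "a=1-b=2"
            ids=["-".join("%s=%s" % (k, v) for k, v in sorted(f.items()))
                 for f in funcarglist])

This would produce ids like ``a=1-b=2`` instead of ``1-2``.
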
Indirect parametrization with multiple resources
--------------------------------------------------------------

Here is a stripped-down real-life example of using parametrized
testing to test serialization between different python interpreters.
We define a ``test_basic_objects`` function which is to be run
with different sets of arguments for its three arguments:

* ``python1``: first python interpreter, run to pickle-dump an object to a file
* ``python2``: second interpreter, run to pickle-load an object from a file
* ``obj``: object to be dumped/loaded

.. literalinclude:: multipython.py

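The generation part of such a file might look roughly like the sketch
below; this is illustrative, not the literal content of
``multipython.py``, and it assumes ``metafunc.parametrize`` may be
called more than once to combine argument sets (the concrete ``obj``
values are ours)::

    # hypothetical sketch of the parametrization logic
    import itertools

    pythonlist = ['python2.4', 'python2.5', 'python2.6',
                  'python2.7', 'python2.8']

    def pytest_generate_tests(metafunc):
        if 'python1' in metafunc.funcargnames:
            # pair every dumping interpreter with every loading one
            # (5 x 5 pairs), deferring actual setup via indirect=True
            pairs = list(itertools.product(pythonlist, pythonlist))
            metafunc.parametrize(['python1', 'python2'], pairs,
                                 indirect=True)
        if 'obj' in metafunc.funcargnames:
            # three sample objects to round-trip through pickle
            metafunc.parametrize('obj', [42, {}, {1: 3}])
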
Running it results in some skips if we don't have all the python
interpreters installed, and otherwise runs all combinations
(5 interpreters times 5 interpreters times 3 objects to
serialize/deserialize)::

    $ py.test -rs -q multipython.py
    collecting ... collected 75 items
    ............sss............sss............sss............ssssssssssssssssss
    ========================= short test summary info ==========================
    SKIP [27] /home/hpk/p/pytest/doc/example/multipython.py:36: 'python2.8' not found
    48 passed, 27 skipped in 1.71 seconds
|