
.. _paramexamples:

Parametrizing tests
=================================================

.. currentmodule:: _pytest.python

py.test allows you to easily parametrize test functions.
In the following we provide some examples using
the builtin mechanisms.

.. _parametrizemark:

simple "decorator" parametrization of a test function
----------------------------------------------------------------------------

.. versionadded:: 2.2

The builtin ``parametrize`` marker allows you to easily write generic
test functions that will be invoked with multiple input/output values::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize(("input", "expected"), [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected

Here we parametrize two arguments of the test function so that the test
function is called three times. Let's run it::

    $ py.test -q
    collecting ... collected 3 items
    ..F
    =================================== FAILURES ===================================
    ______________________________ test_eval[6*9-42] _______________________________

    input = '6*9', expected = 42

        @pytest.mark.parametrize(("input", "expected"), [
            ("3+5", 8),
            ("2+4", 6),
            ("6*9", 42),
        ])
        def test_eval(input, expected):
    >       assert eval(input) == expected
    E       assert 54 == 42
    E        +  where 54 = eval('6*9')

    test_expectation.py:9: AssertionError
    1 failed, 2 passed in 0.03 seconds

As expected, only one pair of input/output values fails the simple test function.
Note that there are various ways to mark groups of functions, see :ref:`mark`.
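
One such way is to apply a marker at class level, which marks all test
methods of that class. Here is a minimal sketch, assuming a custom marker
name ``webtest`` (both the marker and the file name are illustrative, not
part of the example above)::

    # content of test_mark_classlevel.py (hypothetical)
    import pytest

    @pytest.mark.webtest  # applies to every test method in the class
    class TestWebGroup:
        def test_send(self):
            assert True

        def test_receive(self):
            assert True

You could then run just the marked group with ``py.test -m webtest``.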

Generating parameter combinations, depending on the command line
----------------------------------------------------------------------------

.. regendoc:wipe

Let's say we want to execute a test with different computation
parameters, where the parameter range is determined by a command
line argument. Let's first write a simple (do-nothing) computation test::

    # content of test_compute.py
    def test_compute(param1):
        assert param1 < 4

Now we add a test configuration like this::

    # content of conftest.py
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
            help="run all combinations")

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.funcargnames:
            if metafunc.config.option.all:
                end = 5
            else:
                end = 2
            metafunc.parametrize("param1", range(end))

This means that we only run 2 tests if we do not pass ``--all``::

    $ py.test -q test_compute.py
    collecting ... collected 2 items
    ..
    2 passed in 0.02 seconds

We run only two computations, so we see two dots.
Let's run the full monty::

    $ py.test -q --all
    collecting ... collected 5 items
    ....F
    =================================== FAILURES ===================================
    _______________________________ test_compute[4] ________________________________

    param1 = 4

        def test_compute(param1):
    >       assert param1 < 4
    E       assert 4 < 4

    test_compute.py:3: AssertionError
    1 failed, 4 passed in 0.03 seconds

As expected, when running the full range of ``param1`` values,
we'll get an error on the last one.
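
The same pattern works with any source of configuration, not just command
line options. As a minimal variant sketch, assuming an environment variable
``ALL_PARAMS`` (a name invented for this illustration) instead of the
``--all`` option::

    # content of conftest.py (variant sketch, not the example above)
    import os

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.funcargnames:
            # run the full range only when ALL_PARAMS is set
            end = 5 if os.environ.get("ALL_PARAMS") else 2
            metafunc.parametrize("param1", range(end))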

a quick port of "testscenarios"
------------------------------------

.. _`test scenarios`: http://bazaar.launchpad.net/~lifeless/testscenarios/trunk/annotate/head%3A/doc/example.py

Here is a quick port to run tests configured with `test scenarios`_,
an add-on from Robert Collins for the standard unittest framework. We
only have to work a bit to construct the correct arguments for pytest's
:py:func:`Metafunc.parametrize`::

    # content of test_scenarios.py
    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist)

    scenario1 = ('basic', {'attribute': 'value'})
    scenario2 = ('advanced', {'attribute': 'value2'})

    class TestSampleWithScenarios:
        scenarios = [scenario1, scenario2]

        def test_demo(self, attribute):
            assert isinstance(attribute, str)
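
For the two scenarios above the loop boils down to a single call like
this (a worked-out sketch of what the generic code computes)::

    metafunc.parametrize(["attribute"], [["value"], ["value2"]],
                         ids=["basic", "advanced"])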

This is a fully self-contained example which you can run with::

    $ py.test test_scenarios.py
    ============================= test session starts ==============================
    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
    collecting ... collected 2 items

    test_scenarios.py ..

    =========================== 2 passed in 0.02 seconds ===========================

If you just collect tests you'll also nicely see 'advanced' and 'basic'
as variants for the test function::

    $ py.test --collectonly test_scenarios.py
    ============================= test session starts ==============================
    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
    collecting ... collected 2 items
    <Module 'test_scenarios.py'>
      <Class 'TestSampleWithScenarios'>
        <Instance '()'>
          <Function 'test_demo[basic]'>
          <Function 'test_demo[advanced]'>

    =============================== in 0.01 seconds ===============================

Deferring the setup of parametrized resources
---------------------------------------------------

.. regendoc:wipe

The parametrization of test functions happens at collection
time. It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that. First,
the actual test, requiring a ``db`` object::

    # content of test_backends.py
    import pytest

    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")

We can now add a test configuration that generates two invocations of
the ``test_db_initialized`` function and also implements a factory that
creates a database object for the actual test invocations::

    # content of conftest.py
    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.funcargnames:
            metafunc.parametrize("db", ['d1', 'd2'], indirect=True)

    class DB1:
        "one database object"

    class DB2:
        "alternative database object"

    def pytest_funcarg__db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")
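
The ``indirect=True`` flag routes each parameter value into the factory
as ``request.param`` instead of handing it to the test function directly.
As a contrasting sketch (not part of the example above), without
``indirect`` the test would simply receive the raw strings::

    # sketch: without indirect=True the test sees the values themselves
    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.funcargnames:
            metafunc.parametrize("db", ['d1', 'd2'])  # no indirect

    def test_db_value(db):
        assert db in ('d1', 'd2')  # db is the plain string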

Let's first see how it looks at collection time::

    $ py.test test_backends.py --collectonly
    ============================= test session starts ==============================
    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
    collecting ... collected 2 items
    <Module 'test_backends.py'>
      <Function 'test_db_initialized[d1]'>
      <Function 'test_db_initialized[d2]'>

    =============================== in 0.01 seconds ===============================

And then when we run the test::

    $ py.test -q test_backends.py
    collecting ... collected 2 items
    .F
    =================================== FAILURES ===================================
    ___________________________ test_db_initialized[d2] ____________________________

    db = <conftest.DB2 instance at 0x1013195f0>

        def test_db_initialized(db):
            # a dummy test
            if db.__class__.__name__ == "DB2":
    >           pytest.fail("deliberately failing for demo purposes")
    E           Failed: deliberately failing for demo purposes

    test_backends.py:6: Failed
    1 failed, 1 passed in 0.02 seconds

The first invocation with ``db == "DB1"`` passed while the second with
``db == "DB2"`` failed. Our ``pytest_funcarg__db`` factory instantiated
each of the DB objects during the setup phase, while ``pytest_generate_tests``
generated two corresponding calls to ``test_db_initialized`` during the
collection phase.
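
To make the two phases visible you can add print statements to both hooks
and run with ``-s`` (capture disabled). This is only a quick diagnostic
sketch, not part of the example::

    # sketch: observing collection time vs. setup time
    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.funcargnames:
            print "parametrizing at collection time"
            metafunc.parametrize("db", ['d1', 'd2'], indirect=True)

    def pytest_funcarg__db(request):
        print "creating db object at setup time:", request.param
        return DB1() if request.param == "d1" else DB2()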

.. regendoc:wipe

Parametrizing test methods through per-class configuration
--------------------------------------------------------------

.. _`unittest parameterizer`: http://code.google.com/p/unittest-ext/source/browse/trunk/params.py

Here is an example ``pytest_generate_tests`` function implementing a
parametrization scheme similar to Michael Foord's `unittest
parameterizer`_ but in a lot less code::

    # content of ./test_parametrize.py
    import pytest

    def pytest_generate_tests(metafunc):
        # called once per each test function
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = list(funcarglist[0])
        metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
                                        for funcargs in funcarglist])

    class TestClass:
        # a map specifying multiple argument sets for a test method
        params = {
            'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
            'test_zerodivision': [dict(a=1, b=0), ],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            pytest.raises(ZeroDivisionError, "a/b")
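
Written out, the generator effectively issues these two calls, assuming
the dictionary keys happen to iterate as ``['a', 'b']`` (key order is not
guaranteed on the Python versions shown here)::

    metafunc.parametrize(['a', 'b'], [[1, 2], [3, 3]])  # for test_equals
    metafunc.parametrize(['a', 'b'], [[1, 0]])          # for test_zerodivision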

Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function. Let's run it::

    $ py.test -q
    collecting ... collected 3 items
    F..
    =================================== FAILURES ===================================
    __________________________ TestClass.test_equals[1-2] __________________________

    self = <test_parametrize.TestClass instance at 0x1013158c0>, a = 1, b = 2

        def test_equals(self, a, b):
    >       assert a == b
    E       assert 1 == 2

    test_parametrize.py:18: AssertionError
    1 failed, 2 passed in 0.03 seconds

Checking serialization between Python interpreters
--------------------------------------------------------------

Here is a stripped-down real-life example of using parametrized
testing for testing serialization, invoking different Python interpreters.
We define a ``test_basic_objects`` function which is to be run
with different sets of arguments for its three arguments:

* ``python1``: first Python interpreter, run to pickle-dump an object to a file
* ``python2``: second interpreter, run to pickle-load an object from a file
* ``obj``: object to be dumped/loaded

.. literalinclude:: multipython.py

Running it (with Python 2.4 through Python 2.7 installed)::

    $ py.test -q multipython.py
    collecting ... collected 75 items
    ssssssssssssssssss.........ssssss.........ssssss.........ssssssssssssssssss
    27 passed, 48 skipped in 4.87 seconds
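
The combinations come from calling ``metafunc.parametrize`` once per
argument name, which produces the cross product of all values. The
following is only a rough sketch of that pattern; the interpreter list
and objects are illustrative assumptions, not the actual content of
``multipython.py``::

    # sketch of a multipython-style test module (hypothetical values)
    pythonlist = ["python2.4", "python2.5", "python2.6", "python2.7"]

    def pytest_generate_tests(metafunc):
        if "python1" in metafunc.funcargnames:
            metafunc.parametrize("python1", pythonlist)
            metafunc.parametrize("python2", pythonlist)
            metafunc.parametrize("obj", [42, {}, {1: 3}])

The ``s`` characters in the output above are skips, presumably for
interpreter combinations not available on the machine.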