add an example for postprocessing a test failure

holger krekel 2012-11-08 23:36:16 +01:00
parent 664b01ca42
commit c790490387
2 changed files with 105 additions and 21 deletions


@@ -17,7 +17,7 @@
#
# The full version, including alpha/beta/rc tags.
# The short X.Y version.
version = release = "2.3.3.2"
version = release = "2.3.3.3"
import sys, os


@@ -106,7 +106,7 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
collected 0 items
============================= in 0.00 seconds =============================
@@ -150,12 +150,12 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
collected 2 items
test_module.py .s
========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-140/conftest.py:9: need --runslow option to run
+SKIP [1] /tmp/doc-exec-156/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ====================
@@ -163,7 +163,7 @@ Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
collected 2 items
test_module.py ..
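
The ``conftest.py`` implementing ``--runslow`` sits above this output in
the full document, outside the diffed lines.  As a rough sketch (the
option name and skip message are taken from the output above, the rest
is assumed), it combines the ``pytest_addoption`` and
``pytest_runtest_setup`` hooks::

    # sketch of a conftest.py providing the --runslow option
    import pytest

    def pytest_addoption(parser):
        # register the command line option used in the runs above
        parser.addoption("--runslow", action="store_true",
                         help="run tests marked as slow")

    def pytest_runtest_setup(item):
        # skip ``slow``-marked tests unless --runslow was given
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")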
@@ -253,7 +253,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
project deps: mylib-1.1
collected 0 items
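
The hook producing that ``project deps`` line lives outside the diffed
region; it is pytest's ``pytest_report_header`` hook, roughly like this
sketch (the string is taken from the output above, the rest is assumed)::

    # sketch of the conftest.py hook behind the extra header line
    def pytest_report_header(config):
        # the returned string is added to the session header
        return "project deps: mylib-1.1"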
@@ -276,7 +276,7 @@ which will add info only when run with "-v"::
$ py.test -v
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1 -- /home/hpk/venv/0/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
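
The ``info1 ...`` and ``did you?`` lines come from a verbosity-aware
variant of the same hook; a sketch, assuming ``config.option.verbose``
carries the ``-v`` flag::

    # sketch of a conftest.py hook reporting only under -v
    def pytest_report_header(config):
        if config.option.verbose > 0:
            # a list of strings yields one header line per element
            return ["info1: did you know that ...", "did you?"]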
@@ -287,7 +287,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
collected 0 items
============================= in 0.00 seconds =============================
@@ -319,7 +319,7 @@ Now we can profile which test functions execute the slowest::
$ py.test --durations=3
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
collected 3 items
test_some_are_slow.py ...
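
The module being profiled is not part of this diff; as a reconstruction
sketch (the filename comes from the output above, the test names and
sleep times are assumptions)::

    # content of test_some_are_slow.py (sketch)
    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)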
@@ -380,7 +380,7 @@ If we run this::
$ py.test -rx
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
collected 4 items
test_step.py .Fx.
@@ -388,7 +388,7 @@ If we run this::
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
-self = <test_step.TestUserHandling instance at 0x19e8b48>
+self = <test_step.TestUserHandling instance at 0x2c23878>
def test_modification(self):
> assert 0
@@ -398,7 +398,7 @@ If we run this::
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
-============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds ===============
+============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
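
The ``incremental`` marker behind this behavior is defined earlier in
the chapter, outside the diffed lines.  Its supporting ``conftest.py``
hooks look roughly like this sketch (the hook names are real pytest
hooks, the bookkeeping details are assumed)::

    # sketch of conftest.py support for an ``incremental`` marker
    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                # remember the failing test on its parent (class/module)
                item.parent._previousfailed = item

    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                # produces the xfail reason shown in the summary above
                pytest.xfail("previous test failed (%s)" % previousfailed.name)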
@@ -450,42 +450,52 @@ We can run this::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.3.3
-collected 3 items
+platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
+collected 7 items
+test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
-file /tmp/doc-exec-133/b/test_error.py, line 1
+file /tmp/doc-exec-156/b/test_error.py, line 1
def test_root(db): # no db here, will error out
fixture 'db' not found
available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
-/tmp/doc-exec-133/b/test_error.py:1
+/tmp/doc-exec-156/b/test_error.py:1
================================= FAILURES =================================
+____________________ TestUserHandling.test_modification ____________________
+self = <test_step.TestUserHandling instance at 0x170fc68>
+def test_modification(self):
+> assert 0
+E assert 0
+test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
-db = <conftest.DB instance at 0x13d22d8>
+db = <conftest.DB instance at 0x17a5368>
def test_a1(db):
> assert 0, db # to show value
-E AssertionError: <conftest.DB instance at 0x13d22d8>
+E AssertionError: <conftest.DB instance at 0x17a5368>
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
-db = <conftest.DB instance at 0x13d22d8>
+db = <conftest.DB instance at 0x17a5368>
def test_a2(db):
> assert 0, db # to show value
-E AssertionError: <conftest.DB instance at 0x13d22d8>
+E AssertionError: <conftest.DB instance at 0x17a5368>
a/test_db2.py:2: AssertionError
-==================== 2 failed, 1 error in 0.02 seconds =====================
+========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -495,3 +505,77 @@ it (unless you use "autouse" fixture which are always executed ahead of the first test
executing).
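
One option would be to give the ``b`` directory its own ``db`` fixture;
a hypothetical sketch (the ``DB`` class is assumed to mirror the one
defined in ``a/conftest.py``)::

    # content of b/conftest.py (hypothetical sketch)
    import pytest

    class DB:
        pass

    @pytest.fixture(scope="session")
    def db():
        # tests under b/ would now receive this instance
        return DB()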

post-process test reports / failures
---------------------------------------

If you want to post-process test reports and need access to the executing
environment at that point, you can implement a hook that gets called when
the test "report" object is about to be created.  Here we write out all
failing test calls and also access a fixture (if it was used by the test)
in case you want to query/look at it during post-processing.  In our case
we just write some information out to a ``failures`` file::

    # content of conftest.py

    import pytest
    import os.path

    @pytest.mark.tryfirst
    def pytest_runtest_makereport(item, call, __multicall__):
        # execute all other hooks to obtain the report object
        rep = __multicall__.execute()

        # we only look at actual failing test calls, not setup/teardown
        if rep.when == "call" and rep.failed:
            mode = "a" if os.path.exists("failures") else "w"
            with open("failures", mode) as f:
                # let's also access a fixture for the fun of it
                if "tmpdir" in item.funcargs:
                    extra = " (%s)" % item.funcargs["tmpdir"]
                else:
                    extra = ""
                f.write(rep.nodeid + extra + "\n")
        return rep

If you then have failing tests::

    # content of test_module.py

    def test_fail1(tmpdir):
        assert 0

    def test_fail2():
        assert 0

and run them::

    $ py.test test_module.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.4.dev1
    collected 2 items

    test_module.py FF

    ================================= FAILURES =================================
    ________________________________ test_fail1 ________________________________

    tmpdir = local('/tmp/pytest-3/test_fail10')

        def test_fail1(tmpdir):
    >       assert 0
    E       assert 0

    test_module.py:2: AssertionError
    ________________________________ test_fail2 ________________________________

        def test_fail2():
    >       assert 0
    E       assert 0

    test_module.py:4: AssertionError
    ========================= 2 failed in 0.01 seconds =========================

You will then have a ``failures`` file which contains the failing test ids::

    $ cat failures
    test_module.py::test_fail1 (/tmp/pytest-3/test_fail10)
    test_module.py::test_fail2
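
The example stops at writing the file; purely as a hypothetical
follow-up, a script post-processing those ids might look like::

    # hypothetical consumer of the ``failures`` file (not part of
    # the example above)
    with open("failures") as f:
        for line in f:
            print("failed: " + line.strip())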