add release announcement, bump version to 2.5.2,

add links to plugins index, regenerate doc examples.
This commit is contained in:
holger krekel 2014-01-29 13:47:11 +01:00
parent 8a3b4b9c37
commit 25ab906b8b
27 changed files with 539 additions and 469 deletions

View File

@@ -1,4 +1,4 @@
-UNRELEASED
+2.5.2
 -----------------------------------
 - fix issue409 -- better interoperate with cx_freeze by not

View File

@@ -1,2 +1,2 @@
 #
-__version__ = '2.5.2.dev1'
+__version__ = '2.5.2'

View File

@@ -4,6 +4,7 @@
 <li><a href="{{ pathto('contributing') }}">Contribution Guide</a></li>
 <li><a href="https://pypi.python.org/pypi/pytest">pytest @ PyPI</a></li>
 <li><a href="https://bitbucket.org/hpk42/pytest/">pytest @ Bitbucket</a></li>
+<li><a href="http://pytest.org/latest/plugins_index/index.html">3rd party plugins (beta)</a></li>
 <li><a href="https://bitbucket.org/hpk42/pytest/issues?status=new&status=open">Issue Tracker</a></li>
 <li><a href="http://pytest.org/latest/pytest.pdf">PDF Documentation</a>
 </ul>

View File

@@ -5,6 +5,7 @@ Release announcements
 .. toctree::
    :maxdepth: 2
 
+   release-2.5.2
    release-2.5.1
    release-2.5.0
    release-2.4.2

View File

@@ -0,0 +1,64 @@
pytest-2.5.2: fixes
===========================================================================

pytest is a mature Python testing tool with more than 1000 tests
against itself, passing on many different interpreters and platforms.

The 2.5.2 release fixes a few bugs with two maybe-bugs remaining and
actively being worked on (and waiting for the bug reporter's input).
We also have a new contribution guide thanks to Piotr Banaszkiewicz
and others.

See docs at:

    http://pytest.org

As usual, you can upgrade from pypi via::

    pip install -U pytest

Thanks to the following people who contributed to this release:

    Anatoly Bubenkov
    Ronny Pfannschmidt
    Floris Bruynooghe
    Bruno Oliveira
    Andreas Pelme
    Jurko Gospodnetić
    Piotr Banaszkiewicz
    Simon Liedtke
    lakka
    Lukasz Balcerzak
    Philippe Muller
    Daniel Hahler

have fun,
holger krekel

2.5.2
-----------------------------------

- fix issue409 -- better interoperate with cx_freeze by not
  trying to import from collections.abc, which causes problems
  for py27/cx_freeze. Thanks Wolfgang L. for reporting and tracking it down.

- fixed docs and code to use "pytest" instead of "py.test" almost everywhere.
  Thanks Jurko Gospodnetic for the complete PR.

- fix issue425: mention at end of "py.test -h" that --markers
  and --fixtures work according to specified test path (or current dir)

- fix issue413: exceptions with unicode attributes are now printed
  correctly also on python2 and with pytest-xdist runs. (the fix
  requires py-1.4.20)

- copy, cleanup and integrate py.io capture
  from pylib 1.4.20.dev2 (rev 13d9af95547e)

- address issue416: clarify docs as to conftest.py loading semantics

- fix issue429: comparing byte strings with non-ascii chars in assert
  expressions now works better. Thanks Floris Bruynooghe.

- make capfd/capsys.capture private; it's unused and shouldn't be exposed
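The issue409 entry above concerns imports from ``collections.abc`` breaking under py27/cx_freeze. A minimal sketch of the guarded-import pattern that sidesteps this (an illustration of the general technique, not pytest's actual code; the ``Bag`` class is hypothetical):

```python
# Import ABCs from wherever the running interpreter provides them, instead
# of unconditionally importing collections.abc (absent on Python 2 and
# problematic under py27/cx_freeze).
try:
    from collections.abc import MutableMapping  # Python 3.3+
except ImportError:
    from collections import MutableMapping      # Python 2 fallback

class Bag(MutableMapping):
    """Minimal dict-backed mapping, showing the imported ABC in use."""
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value
    def __delitem__(self, key):
        del self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

bag = Bag()
bag["answer"] = 42
```

Frozen-app toolchains like cx_freeze only bundle modules they can statically detect, which is why a single unconditional import path matters here.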

View File

@@ -26,19 +26,19 @@ you will see the return value of the function call::
 $ py.test test_assert1.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 1 items
 test_assert1.py F
 ================================= FAILURES =================================
 ______________________________ test_function _______________________________
     def test_function():
 >       assert f() == 4
 E       assert 3 == 4
 E        +  where 3 = f()
 test_assert1.py:5: AssertionError
 ========================= 1 failed in 0.01 seconds =========================
@@ -116,14 +116,14 @@ if you run this module::
 $ py.test test_assert2.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 1 items
 test_assert2.py F
 ================================= FAILURES =================================
 ___________________________ test_set_comparison ____________________________
     def test_set_comparison():
         set1 = set("1308")
         set2 = set("8035")
@@ -133,7 +133,7 @@ if you run this module::
 E       '1'
 E       Extra items in the right set:
 E       '5'
 test_assert2.py:5: AssertionError
 ========================= 1 failed in 0.01 seconds =========================

View File

@@ -64,21 +64,21 @@ of the failing function and hide the other one::
 $ py.test
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 2 items
 test_module.py .F
 ================================= FAILURES =================================
 ________________________________ test_func2 ________________________________
     def test_func2():
 >       assert False
 E       assert False
 test_module.py:9: AssertionError
 ----------------------------- Captured stdout ------------------------------
-setting up <function test_func2 at 0x1eb37d0>
+setting up <function test_func2 at 0x1ec25f0>
 ==================== 1 failed, 1 passed in 0.01 seconds ====================
 Accessing captured output from a test function

View File

@@ -17,8 +17,8 @@
 #
 # The full version, including alpha/beta/rc tags.
 # The short X.Y version.
-version = "2.5.1"
-release = "2.5.1"
+version = "2.5.2"
+release = "2.5.2"
 import sys, os

View File

@@ -14,6 +14,7 @@ Full pytest documentation
 overview
 apiref
 plugins
+plugins_index/index
 example/index
 talks
 contributing

View File

@@ -44,7 +44,7 @@ then you can just invoke ``py.test`` without command line options::
 $ py.test
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 1 items
 mymodule.py .

View File

@@ -28,11 +28,11 @@ You can then restrict a test run to only run tests marked with ``webtest``::
 $ py.test -v -m webtest
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
 collecting ... collected 3 items
 test_server.py:3: test_send_http PASSED
 =================== 2 tests deselected by "-m 'webtest'" ===================
 ================== 1 passed, 2 deselected in 0.01 seconds ==================
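The ``test_server.py`` used in these runs can be reconstructed from the output: one test carries the custom ``webtest`` marker, two do not, so ``-m webtest`` selects one and deselects two. A hedged sketch (test bodies are placeholders):

```python
import pytest

# Sketch of test_server.py: test_send_http carries the "webtest" marker;
# the other two tests are plain, so "-m webtest" deselects them.
@pytest.mark.webtest
def test_send_http():
    pass  # placeholder for an actual web test

def test_something_quick():
    pass

def test_another():
    pass
```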
@@ -40,12 +40,12 @@ Or the inverse, running all tests except the webtest ones::
 $ py.test -v -m "not webtest"
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
 collecting ... collected 3 items
 test_server.py:6: test_something_quick PASSED
 test_server.py:8: test_another PASSED
 ================= 1 tests deselected by "-m 'not webtest'" =================
 ================== 2 passed, 1 deselected in 0.01 seconds ==================
@@ -61,11 +61,11 @@ select tests based on their names::
 $ py.test -v -k http # running with the above defined example module
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
 collecting ... collected 3 items
 test_server.py:3: test_send_http PASSED
 ====================== 2 tests deselected by '-khttp' ======================
 ================== 1 passed, 2 deselected in 0.01 seconds ==================
@@ -73,12 +73,12 @@ And you can also run all tests except the ones that match the keyword::
 $ py.test -k "not send_http" -v
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
 collecting ... collected 3 items
 test_server.py:6: test_something_quick PASSED
 test_server.py:8: test_another PASSED
 ================= 1 tests deselected by '-knot send_http' ==================
 ================== 2 passed, 1 deselected in 0.01 seconds ==================
@@ -86,12 +86,12 @@ Or to select "http" and "quick" tests::
 $ py.test -k "http or quick" -v
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
 collecting ... collected 3 items
 test_server.py:3: test_send_http PASSED
 test_server.py:6: test_something_quick PASSED
 ================= 1 tests deselected by '-khttp or quick' ==================
 ================== 2 passed, 1 deselected in 0.01 seconds ==================
@@ -124,19 +124,19 @@ You can ask which markers exist for your test suite - the list includes our just
 $ py.test --markers
 @pytest.mark.webtest: mark a test as a webtest.
 @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
 @pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
 @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
 @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
 @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
 @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
 For an example on how to add and work with markers from a plugin, see
 :ref:`adding a custom marker from a plugin`.
@@ -266,41 +266,41 @@ the test needs::
 $ py.test -E stage2
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 1 items
 test_someenv.py s
 ======================== 1 skipped in 0.01 seconds =========================
 and here is one that specifies exactly the environment needed::
 $ py.test -E stage1
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 1 items
 test_someenv.py .
 ========================= 1 passed in 0.01 seconds =========================
 The ``--markers`` option always gives you a list of available markers::
 $ py.test --markers
 @pytest.mark.env(name): mark test to run only on named environment
 @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
 @pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
 @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
 @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
 @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
 @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
 Reading markers which were set from multiple places
 ----------------------------------------------------
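The ``-E`` option and the ``env`` marker in the runs above come from a ``conftest.py``. A hedged sketch following the pytest 2.x-era docs example (``item.get_marker`` is the 2.x API; details may differ from the actual doc source):

```python
import pytest

def pytest_addoption(parser):
    parser.addoption("-E", dest="env", action="store", metavar="NAME",
                     help="only run tests matching the environment NAME.")

def pytest_configure(config):
    # register the marker so "py.test --markers" lists it
    config.addinivalue_line(
        "markers", "env(name): mark test to run only on named environment")

def pytest_runtest_setup(item):
    # skip any test whose env marker does not match the -E option
    envmarker = item.get_marker("env")
    if envmarker is not None:
        envname = envmarker.args[0]
        if envname != item.config.getoption("-E"):
            pytest.skip("test requires env %r" % envname)
```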
@@ -395,24 +395,24 @@ then you will see two test skipped and two executed tests as expected::
 $ py.test -rs # this option reports skip reasons
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 4 items
 test_plat.py s.s.
 ========================= short test summary info ==========================
-SKIP [2] /tmp/doc-exec-63/conftest.py:12: cannot run on platform linux2
+SKIP [2] /tmp/doc-exec-65/conftest.py:12: cannot run on platform linux2
 =================== 2 passed, 2 skipped in 0.01 seconds ====================
 Note that if you specify a platform via the marker-command line option like this::
 $ py.test -m linux2
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 4 items
 test_plat.py .
 =================== 3 tests deselected by "-m 'linux2'" ====================
 ================== 1 passed, 3 deselected in 0.01 seconds ==================
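The "cannot run on platform" skips above are produced by a ``conftest.py`` hook that compares platform markers against ``sys.platform``. A hedged sketch following the 2.x-era docs example (``item.get_marker`` and ``linux2`` are period-accurate; the actual doc source may differ):

```python
import sys
import pytest

# the platform names our suite knows about
ALL = set("darwin linux2 win32".split())

def pytest_runtest_setup(item):
    # if a test is marked with any platform name, skip it unless the
    # current sys.platform matches one of its markers
    if isinstance(item, pytest.Function):
        plat = sys.platform
        if not item.get_marker(plat):
            if ALL.intersection(item.keywords):
                pytest.skip("cannot run on platform %s" % plat)
```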
@@ -459,11 +459,11 @@ We can now use the ``-m option`` to select one set::
 $ py.test -m interface --tb=short
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 4 items
 test_module.py FF
 ================================= FAILURES =================================
 __________________________ test_interface_simple ___________________________
 test_module.py:3: in test_interface_simple
@@ -480,11 +480,11 @@ or to select both "event" and "interface" tests::
 $ py.test -m "interface or event" --tb=short
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 4 items
 test_module.py FFF
 ================================= FAILURES =================================
 __________________________ test_interface_simple ___________________________
 test_module.py:3: in test_interface_simple

View File

@@ -27,10 +27,10 @@ now execute the test specification::
 nonpython $ py.test test_simple.yml
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 2 items
-test_simple.yml .F
+test_simple.yml F.
 ================================= FAILURES =================================
 ______________________________ usecase: hello ______________________________
@@ -56,11 +56,11 @@ consulted when reporting in ``verbose`` mode::
 nonpython $ py.test -v
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
 collecting ... collected 2 items
-test_simple.yml:1: usecase: ok PASSED
 test_simple.yml:1: usecase: hello FAILED
+test_simple.yml:1: usecase: ok PASSED
 ================================= FAILURES =================================
 ______________________________ usecase: hello ______________________________
@@ -74,10 +74,10 @@ interesting to just look at the collection tree::
 nonpython $ py.test --collect-only
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 2 items
 <YamlFile 'test_simple.yml'>
-<YamlItem 'ok'>
 <YamlItem 'hello'>
+<YamlItem 'ok'>
 ============================= in 0.02 seconds =============================
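The ``YamlFile``/``YamlItem`` nodes in the collection tree come from a custom collector in the example's ``conftest.py``. A hedged sketch using the 2.x-era collection hooks (the ``path``-based ``pytest_collect_file`` signature and direct constructors reflect that era; modern pytest uses ``from_parent`` and ``file_path``):

```python
import pytest

def pytest_collect_file(parent, path):
    # collect test*.yml files as our own YamlFile nodes
    if path.ext == ".yml" and path.basename.startswith("test"):
        return YamlFile(path, parent)

class YamlFile(pytest.File):
    def collect(self):
        import yaml  # third-party: PyYAML, imported lazily
        raw = yaml.safe_load(self.fspath.open())
        for name, spec in sorted(raw.items()):
            yield YamlItem(name, self, spec)

class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super(YamlItem, self).__init__(name, parent)
        self.spec = spec

    def runtest(self):
        # a deliberately simple check so the "hello" usecase fails
        for name, value in sorted(self.spec.items()):
            assert value == name
```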

View File

@@ -106,11 +106,11 @@ this is a fully self-contained example which you can run with::
 $ py.test test_scenarios.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 4 items
 test_scenarios.py ....
 ========================= 4 passed in 0.01 seconds =========================
 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
@@ -118,7 +118,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
 $ py.test --collect-only test_scenarios.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 4 items
 <Module 'test_scenarios.py'>
 <Class 'TestSampleWithScenarios'>
@@ -127,7 +127,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
 <Function 'test_demo2[basic]'>
 <Function 'test_demo1[advanced]'>
 <Function 'test_demo2[advanced]'>
 ============================= in 0.01 seconds =============================
 Note that we told ``metafunc.parametrize()`` that your scenario values
@@ -182,12 +182,12 @@ Let's first see how it looks like at collection time::
 $ py.test test_backends.py --collect-only
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 2 items
 <Module 'test_backends.py'>
 <Function 'test_db_initialized[d1]'>
 <Function 'test_db_initialized[d2]'>
 ============================= in 0.00 seconds =============================
 And then when we run the test::
@@ -196,15 +196,15 @@ And then when we run the test::
 .F
 ================================= FAILURES =================================
 _________________________ test_db_initialized[d2] __________________________
-db = <conftest.DB2 instance at 0x12d4128>
+db = <conftest.DB2 instance at 0x1e5f050>
     def test_db_initialized(db):
         # a dummy test
         if db.__class__.__name__ == "DB2":
 >           pytest.fail("deliberately failing for demo purposes")
 E           Failed: deliberately failing for demo purposes
 test_backends.py:6: Failed
 1 failed, 1 passed in 0.01 seconds
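The ``[d1]``/``[d2]`` variants above come from a parametrized ``db`` fixture in ``conftest.py``. A hedged sketch reconstructed from the traceback (class bodies are placeholders; the real doc source may differ):

```python
import pytest

class DB1:
    "one database backend object"

class DB2:
    "an alternative database backend object"

# the fixture runs once per param, producing test ids [d1] and [d2]
@pytest.fixture(params=["d1", "d2"])
def db(request):
    if request.param == "d1":
        return DB1()
    elif request.param == "d2":
        return DB2()
```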
@@ -251,14 +251,14 @@ argument sets to use for each test function. Let's run it::
 $ py.test -q
 F..
 ================================= FAILURES =================================
-________________________ TestClass.test_equals[2-1] ________________________
-self = <test_parametrize.TestClass instance at 0x14493f8>, a = 1, b = 2
+________________________ TestClass.test_equals[1-2] ________________________
+self = <test_parametrize.TestClass instance at 0x246c4d0>, a = 1, b = 2
     def test_equals(self, a, b):
 >       assert a == b
 E       assert 1 == 2
 test_parametrize.py:18: AssertionError
 1 failed, 2 passed in 0.01 seconds
@@ -281,8 +281,8 @@ Running it results in some skips if we don't have all the python interpreters in
 . $ py.test -rs -q multipython.py
 ............sss............sss............sss............ssssssssssssssssss
 ========================= short test summary info ==========================
-SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
+SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:22: 'python2.8' not found
-48 passed, 27 skipped in 1.34 seconds
+48 passed, 27 skipped in 1.30 seconds
 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -329,13 +329,13 @@ If you run this with reporting for skips enabled::
 $ py.test -rs test_module.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 2 items
 test_module.py .s
 ========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-65/conftest.py:10: could not import 'opt2'
+SKIP [1] /tmp/doc-exec-67/conftest.py:10: could not import 'opt2'
 =================== 1 passed, 1 skipped in 0.01 seconds ====================
 You'll see that we don't have a ``opt2`` module and thus the second test run
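The skip in this run comes from indirect parametrization over module names: the conftest.py imports each candidate module and skips the test when the import fails, yielding the `could not import 'opt2'` summary line. A sketch in that style (the fixture name `basemod` is an assumption):

```python
# conftest.py -- hypothetical sketch of importorskip-based parametrization.
import pytest

@pytest.fixture(params=["opt1", "opt2"])
def basemod(request):
    # importorskip() returns the imported module, or skips the
    # requesting test if the module cannot be imported.
    return pytest.importorskip(request.param)
```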
View File
@@ -43,14 +43,14 @@ then the test collection looks like this::
 $ py.test --collect-only
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 2 items
 <Module 'check_myapp.py'>
 <Class 'CheckMyApp'>
 <Instance '()'>
 <Function 'check_simple'>
 <Function 'check_complex'>
 ============================= in 0.01 seconds =============================
 .. note::
@@ -88,7 +88,7 @@ You can always peek at the collection tree without running tests like this::
 . $ py.test --collect-only pythoncollection.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 3 items
 <Module 'pythoncollection.py'>
 <Function 'test_function'>
@@ -96,7 +96,7 @@ You can always peek at the collection tree without running tests like this::
 <Instance '()'>
 <Function 'test_method'>
 <Function 'test_anothermethod'>
 ============================= in 0.01 seconds =============================
 customizing test collection to find all .py files
@@ -141,11 +141,11 @@ interpreters and will leave out the setup.py file::
 $ py.test --collect-only
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 1 items
 <Module 'pkg/module_py2.py'>
 <Function 'test_only_on_python2'>
 ============================= in 0.01 seconds =============================
 If you run with a Python3 interpreter the module added through the conftest.py file will not be considered for test collection.
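The collection above is driven by a `collect_ignore` list in conftest.py: files named there are left out during collection. A sketch matching this output (filenames taken from the collected items shown; the exact condition is an assumption):

```python
# conftest.py -- sketch: entries in collect_ignore are skipped at collection.
import sys

collect_ignore = ["setup.py"]  # never collect setup.py
if sys.version_info[0] > 2:
    # On Python 3, also ignore the py2-only example module,
    # so only Python 2 collects test_only_on_python2.
    collect_ignore.append("pkg/module_py2.py")
```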
View File
@@ -13,84 +13,84 @@ get on the terminal - we are working on that):
 assertion $ py.test failure_demo.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
 collected 39 items
 failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
 ================================= FAILURES =================================
 ____________________________ test_generative[0] ____________________________
 param1 = 3, param2 = 6
 def test_generative(param1, param2):
 > assert param1 * 2 < param2
 E assert (3 * 2) < 6
 failure_demo.py:15: AssertionError
 _________________________ TestFailing.test_simple __________________________
-self = <failure_demo.TestFailing object at 0x12d9250>
+self = <failure_demo.TestFailing object at 0x29e5210>
 def test_simple(self):
 def f():
 return 42
 def g():
 return 43
 > assert f() == g()
 E assert 42 == 43
-E + where 42 = <function f at 0x1278b90>()
+E + where 42 = <function f at 0x296a9b0>()
-E + and 43 = <function g at 0x1278c08>()
+E + and 43 = <function g at 0x296aa28>()
 failure_demo.py:28: AssertionError
 ____________________ TestFailing.test_simple_multiline _____________________
-self = <failure_demo.TestFailing object at 0x1287210>
+self = <failure_demo.TestFailing object at 0x29cef50>
 def test_simple_multiline(self):
 otherfunc_multi(
 42,
 > 6*9)
 failure_demo.py:33:
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 a = 42, b = 54
 def otherfunc_multi(a,b):
 > assert (a ==
 b)
 E assert 42 == 54
 failure_demo.py:11: AssertionError
 ___________________________ TestFailing.test_not ___________________________
-self = <failure_demo.TestFailing object at 0x12c6e10>
+self = <failure_demo.TestFailing object at 0x29be250>
 def test_not(self):
 def f():
 return 42
 > assert not f()
 E assert not 42
-E + where 42 = <function f at 0x12861b8>()
+E + where 42 = <function f at 0x296ac08>()
 failure_demo.py:38: AssertionError
 _________________ TestSpecialisedExplanations.test_eq_text _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x1290c50>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29c3990>
 def test_eq_text(self):
 > assert 'spam' == 'eggs'
 E assert 'spam' == 'eggs'
 E - spam
 E + eggs
 failure_demo.py:42: AssertionError
 _____________ TestSpecialisedExplanations.test_eq_similar_text _____________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12877d0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x2acef90>
 def test_eq_similar_text(self):
 > assert 'foo 1 bar' == 'foo 2 bar'
 E assert 'foo 1 bar' == 'foo 2 bar'
@@ -98,12 +98,12 @@ get on the terminal - we are working on that):
 E ? ^
 E + foo 2 bar
 E ? ^
 failure_demo.py:45: AssertionError
 ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12de1d0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29f1f50>
 def test_eq_multiline_text(self):
 > assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
 E assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -111,12 +111,12 @@ get on the terminal - we are working on that):
 E - spam
 E + eggs
 E bar
 failure_demo.py:48: AssertionError
 ______________ TestSpecialisedExplanations.test_eq_long_text _______________
-self = <failure_demo.TestSpecialisedExplanations object at 0x143b5d0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29e58d0>
 def test_eq_long_text(self):
 a = '1'*100 + 'a' + '2'*100
 b = '1'*100 + 'b' + '2'*100
@@ -128,12 +128,12 @@ get on the terminal - we are working on that):
 E ? ^
 E + 1111111111b222222222
 E ? ^
 failure_demo.py:53: AssertionError
 _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
-self = <failure_demo.TestSpecialisedExplanations object at 0x1287810>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29cee50>
 def test_eq_long_text_multiline(self):
 a = '1\n'*100 + 'a' + '2\n'*100
 b = '1\n'*100 + 'b' + '2\n'*100
@@ -152,34 +152,34 @@ get on the terminal - we are working on that):
 E 2
 E 2
 E 2
 failure_demo.py:58: AssertionError
 _________________ TestSpecialisedExplanations.test_eq_list _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12900d0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29c3810>
 def test_eq_list(self):
 > assert [0, 1, 2] == [0, 1, 3]
 E assert [0, 1, 2] == [0, 1, 3]
 E At index 2 diff: 2 != 3
 failure_demo.py:61: AssertionError
 ______________ TestSpecialisedExplanations.test_eq_list_long _______________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12c62d0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29e50d0>
 def test_eq_list_long(self):
 a = [0]*100 + [1] + [3]*100
 b = [0]*100 + [2] + [3]*100
 > assert a == b
 E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
 E At index 100 diff: 1 != 2
 failure_demo.py:66: AssertionError
 _________________ TestSpecialisedExplanations.test_eq_dict _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12deb50>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29c5dd0>
 def test_eq_dict(self):
 > assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
 E assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@@ -190,12 +190,12 @@ get on the terminal - we are working on that):
 E {'c': 0}
 E Right contains more items:
 E {'d': 0}
 failure_demo.py:69: AssertionError
 _________________ TestSpecialisedExplanations.test_eq_set __________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x128b4d0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29e2690>
 def test_eq_set(self):
 > assert set([0, 10, 11, 12]) == set([0, 20, 21])
 E assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -206,31 +206,31 @@ get on the terminal - we are working on that):
 E Extra items in the right set:
 E 20
 E 21
 failure_demo.py:72: AssertionError
 _____________ TestSpecialisedExplanations.test_eq_longer_list ______________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12c6b10>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29ceb50>
 def test_eq_longer_list(self):
 > assert [1,2] == [1,2,3]
 E assert [1, 2] == [1, 2, 3]
 E Right contains more items, first extra item: 3
 failure_demo.py:75: AssertionError
 _________________ TestSpecialisedExplanations.test_in_list _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x143b650>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29c3050>
 def test_in_list(self):
 > assert 1 in [0, 2, 3, 4, 5]
 E assert 1 in [0, 2, 3, 4, 5]
 failure_demo.py:78: AssertionError
 __________ TestSpecialisedExplanations.test_not_in_text_multiline __________
-self = <failure_demo.TestSpecialisedExplanations object at 0x128be10>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29e5b10>
 def test_not_in_text_multiline(self):
 text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
 > assert 'foo' not in text
@@ -243,12 +243,12 @@ get on the terminal - we are working on that):
 E ? +++
 E and a
 E tail
 failure_demo.py:82: AssertionError
 ___________ TestSpecialisedExplanations.test_not_in_text_single ____________
-self = <failure_demo.TestSpecialisedExplanations object at 0x12d9fd0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29f1610>
 def test_not_in_text_single(self):
 text = 'single foo line'
 > assert 'foo' not in text
@@ -256,58 +256,58 @@ get on the terminal - we are working on that):
 E 'foo' is contained here:
 E single foo line
 E ? +++
 failure_demo.py:86: AssertionError
 _________ TestSpecialisedExplanations.test_not_in_text_single_long _________
-self = <failure_demo.TestSpecialisedExplanations object at 0x143bdd0>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29cea50>
 def test_not_in_text_single_long(self):
 text = 'head ' * 50 + 'foo ' + 'tail ' * 20
 > assert 'foo' not in text
 E assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
 E 'foo' is contained here:
 E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
 E ? +++
 failure_demo.py:90: AssertionError
 ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
-self = <failure_demo.TestSpecialisedExplanations object at 0x12c6390>
+self = <failure_demo.TestSpecialisedExplanations object at 0x29e2a10>
 def test_not_in_text_single_long_term(self):
 text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
 > assert 'f'*70 not in text
 E assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
 E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
 E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
 E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 failure_demo.py:94: AssertionError
 ______________________________ test_attribute ______________________________
 def test_attribute():
 class Foo(object):
 b = 1
 i = Foo()
 > assert i.b == 2
 E assert 1 == 2
-E + where 1 = <failure_demo.Foo object at 0x1287790>.b
+E + where 1 = <failure_demo.Foo object at 0x29c77d0>.b
 failure_demo.py:101: AssertionError
 _________________________ test_attribute_instance __________________________
 def test_attribute_instance():
 class Foo(object):
 b = 1
 > assert Foo().b == 2
 E assert 1 == 2
-E + where 1 = <failure_demo.Foo object at 0x12c6bd0>.b
+E + where 1 = <failure_demo.Foo object at 0x29e5f10>.b
-E + where <failure_demo.Foo object at 0x12c6bd0> = <class 'failure_demo.Foo'>()
+E + where <failure_demo.Foo object at 0x29e5f10> = <class 'failure_demo.Foo'>()
 failure_demo.py:107: AssertionError
 __________________________ test_attribute_failure __________________________
 def test_attribute_failure():
 class Foo(object):
 def _get_b(self):
@@ -315,19 +315,19 @@ get on the terminal - we are working on that):
 b = property(_get_b)
 i = Foo()
 > assert i.b == 2
 failure_demo.py:116:
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
-self = <failure_demo.Foo object at 0x12daed0>
+self = <failure_demo.Foo object at 0x29e6b10>
 def _get_b(self):
 > raise Exception('Failed to get attrib')
 E Exception: Failed to get attrib
 failure_demo.py:113: Exception
 _________________________ test_attribute_multiple __________________________
 def test_attribute_multiple():
 class Foo(object):
 b = 1
@@ -335,78 +335,78 @@ get on the terminal - we are working on that):
 b = 2
 > assert Foo().b == Bar().b
 E assert 1 == 2
-E + where 1 = <failure_demo.Foo object at 0x128bcd0>.b
+E + where 1 = <failure_demo.Foo object at 0x29c3b10>.b
-E + where <failure_demo.Foo object at 0x128bcd0> = <class 'failure_demo.Foo'>()
+E + where <failure_demo.Foo object at 0x29c3b10> = <class 'failure_demo.Foo'>()
-E + and 2 = <failure_demo.Bar object at 0x128b050>.b
+E + and 2 = <failure_demo.Bar object at 0x29c3350>.b
-E + where <failure_demo.Bar object at 0x128b050> = <class 'failure_demo.Bar'>()
+E + where <failure_demo.Bar object at 0x29c3350> = <class 'failure_demo.Bar'>()
 failure_demo.py:124: AssertionError
 __________________________ TestRaises.test_raises __________________________
-self = <failure_demo.TestRaises instance at 0x145c7e8>
+self = <failure_demo.TestRaises instance at 0x2aec878>
 def test_raises(self):
 s = 'qwe'
 > raises(TypeError, "int(s)")
 failure_demo.py:133:
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 > int(s)
 E ValueError: invalid literal for int() with base 10: 'qwe'
-<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:983>:1: ValueError
+<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:999>:1: ValueError
 ______________________ TestRaises.test_raises_doesnt _______________________
-self = <failure_demo.TestRaises instance at 0x1455f38>
+self = <failure_demo.TestRaises instance at 0x2aafef0>
 def test_raises_doesnt(self):
 > raises(IOError, "int('3')")
 E Failed: DID NOT RAISE
 failure_demo.py:136: Failed
 __________________________ TestRaises.test_raise ___________________________
-self = <failure_demo.TestRaises instance at 0x1453998>
+self = <failure_demo.TestRaises instance at 0x2ae5758>
 def test_raise(self):
 > raise ValueError("demo error")
 E ValueError: demo error
 failure_demo.py:139: ValueError
 ________________________ TestRaises.test_tupleerror ________________________
-self = <failure_demo.TestRaises instance at 0x1465560>
+self = <failure_demo.TestRaises instance at 0x29cf4d0>
 def test_tupleerror(self):
 > a,b = [1]
 E ValueError: need more than 1 value to unpack
 failure_demo.py:142: ValueError
 ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
-self = <failure_demo.TestRaises instance at 0x1465758>
+self = <failure_demo.TestRaises instance at 0x29cf9e0>
 def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
 l = [1,2,3]
 print ("l is %r" % l)
 > a,b = l.pop()
 E TypeError: 'int' object is not iterable
 failure_demo.py:147: TypeError
 ----------------------------- Captured stdout ------------------------------
 l is [1, 2, 3]
 ________________________ TestRaises.test_some_error ________________________
-self = <failure_demo.TestRaises instance at 0x1468ab8>
+self = <failure_demo.TestRaises instance at 0x29d9ea8>
 def test_some_error(self):
 > if namenotexi:
 E NameError: global name 'namenotexi' is not defined
 failure_demo.py:150: NameError
 ____________________ test_dynamic_compile_shows_nicely _____________________
 def test_dynamic_compile_shows_nicely():
 src = 'def foo():\n assert 1 == 0\n'
 name = 'abc-123'
@ -415,132 +415,132 @@ get on the terminal - we are working on that):
py.builtin.exec_(code, module.__dict__) py.builtin.exec_(code, module.__dict__)
py.std.sys.modules[name] = module py.std.sys.modules[name] = module
> module.foo() > module.foo()
failure_demo.py:165: failure_demo.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo(): def foo():
> assert 1 == 0 > assert 1 == 0
E assert 1 == 0 E assert 1 == 0
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError <2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________ ____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1442908> self = <failure_demo.TestMoreErrors instance at 0x29ca8c0>
def test_complex_error(self): def test_complex_error(self):
def f(): def f():
return 44 return 44
def g(): def g():
return 43 return 43
> somefunc(f(), g()) > somefunc(f(), g())
failure_demo.py:175: failure_demo.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
x = 44, y = 43 x = 44, y = 43
def somefunc(x,y): def somefunc(x,y):
> otherfunc(x,y) > otherfunc(x,y)
failure_demo.py:8: failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43 a = 44, b = 43
def otherfunc(a,b): def otherfunc(a,b):
> assert a==b > assert a==b
E assert 44 == 43 E assert 44 == 43
failure_demo.py:5: AssertionError failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________ ___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x145bab8> self = <failure_demo.TestMoreErrors instance at 0x2ae2ea8>
def test_z1_unpack_error(self): def test_z1_unpack_error(self):
l = [] l = []
> a,b = l > a,b = l
E ValueError: need more than 0 values to unpack E ValueError: need more than 0 values to unpack
failure_demo.py:179: ValueError failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________ ____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1444368> self = <failure_demo.TestMoreErrors instance at 0x29da518>
def test_z2_type_error(self): def test_z2_type_error(self):
l = 3 l = 3
> a,b = l > a,b = l
E TypeError: 'int' object is not iterable E TypeError: 'int' object is not iterable
failure_demo.py:183: TypeError failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
-self = <failure_demo.TestMoreErrors instance at 0x146e4d0>
+self = <failure_demo.TestMoreErrors instance at 0x29b8440>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
-E assert <built-in method startswith of str object at 0x12dfa58>('456')
+E assert <built-in method startswith of str object at 0x29ea328>('456')
-E + where <built-in method startswith of str object at 0x12dfa58> = '123'.startswith
+E + where <built-in method startswith of str object at 0x29ea328> = '123'.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
-self = <failure_demo.TestMoreErrors instance at 0x143ed40>
+self = <failure_demo.TestMoreErrors instance at 0x2ae4e18>
def test_startswith_nested(self):
def f():
return "123"
def g():
return "456"
> assert f().startswith(g())
-E assert <built-in method startswith of str object at 0x12dfa58>('456')
+E assert <built-in method startswith of str object at 0x29ea328>('456')
-E + where <built-in method startswith of str object at 0x12dfa58> = '123'.startswith
+E + where <built-in method startswith of str object at 0x29ea328> = '123'.startswith
-E + where '123' = <function f at 0x1286500>()
+E + where '123' = <function f at 0x29595f0>()
-E + and '456' = <function g at 0x126db18>()
+E + and '456' = <function g at 0x2ab5320>()
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
-self = <failure_demo.TestMoreErrors instance at 0x1453b90>
+self = <failure_demo.TestMoreErrors instance at 0x2abf320>
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
-self = <failure_demo.TestMoreErrors instance at 0x146b128>
+self = <failure_demo.TestMoreErrors instance at 0x2aaf050>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
-E + where 42 = <failure_demo.TestMoreErrors instance at 0x146b128>.x
+E + where 42 = <failure_demo.TestMoreErrors instance at 0x2aaf050>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
-self = <failure_demo.TestMoreErrors instance at 0x1469368>
+self = <failure_demo.TestMoreErrors instance at 0x2aedbd8>
def test_compare(self):
> assert globf(10) < 5
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
-self = <failure_demo.TestMoreErrors instance at 0x12c4098>
+self = <failure_demo.TestMoreErrors instance at 0x29f2098>
def test_try_finally(self):
x = 1
try:
> assert x == 0
E assert 1 == 0
failure_demo.py:210: AssertionError
======================== 39 failed in 0.20 seconds =========================
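The `globf` helper behind `test_global_func` and `test_compare` is not shown in this hunk; from the values the report introspects (`globf(42)` is `43`, `globf(10)` is `11`) it is presumably a simple increment, which is enough to reproduce both failures:

```python
def globf(x):
    # inferred from the report above: globf(42) -> 43, globf(10) -> 11
    return x + 1

# why test_global_func fails: the result is an int, not a float
assert globf(42) == 43
assert not isinstance(globf(42), float)

# why test_compare fails: 11 < 5 is False
assert globf(10) == 11
assert not (globf(10) < 5)
```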
@@ -108,9 +108,9 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 0 items
============================= in 0.00 seconds =============================
.. _`excontrolskip`:
@@ -152,24 +152,24 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_module.py .s
========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-68/conftest.py:9: need --runslow option to run
+SKIP [1] /tmp/doc-exec-70/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ====================
Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_module.py ..
========================= 2 passed in 0.01 seconds =========================
Writing well integrated assertion helpers
@@ -256,10 +256,10 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
project deps: mylib-1.1
collected 0 items
============================= in 0.00 seconds =============================
.. regendoc:wipe
@@ -279,20 +279,20 @@ which will add info only when run with "--v"::
$ py.test -v
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
============================= in 0.00 seconds =============================
and nothing when run plainly::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 0 items
============================= in 0.00 seconds =============================
profiling test duration
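The SKIP line above ("need --runslow option to run", `conftest.py:9`) comes from a command-line option plus a setup hook. The conftest itself is not part of this hunk, so the following is a sketch reconstructed from the output; the hook names are pytest's documented plugin hooks, but details like the help text are assumed:

```python
import pytest

def pytest_addoption(parser):
    # adds the --runslow flag used in the second invocation above
    parser.addoption("--runslow", action="store_true",
                     help="run tests marked as slow")

def pytest_runtest_setup(item):
    # skip @pytest.mark.slow tests unless --runslow was given
    if "slow" in item.keywords and not item.config.getoption("--runslow"):
        pytest.skip("need --runslow option to run")
```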
@@ -322,11 +322,11 @@ Now we can profile which test functions execute the slowest::
$ py.test --durations=3
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 3 items
test_some_are_slow.py ...
========================= slowest 3 test durations =========================
0.20s call test_some_are_slow.py::test_funcslow2
0.10s call test_some_are_slow.py::test_funcslow1
@@ -383,20 +383,20 @@ If we run this::
$ py.test -rx
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 4 items
test_step.py .Fx.
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
-self = <test_step.TestUserHandling instance at 0x2758c20>
+self = <test_step.TestUserHandling instance at 0x2768dd0>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
@@ -453,50 +453,50 @@ We can run this::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 7 items
test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
-file /tmp/doc-exec-68/b/test_error.py, line 1
+file /tmp/doc-exec-70/b/test_error.py, line 1
def test_root(db): # no db here, will error out
fixture 'db' not found
-available fixtures: recwarn, capfd, pytestconfig, capsys, tmpdir, monkeypatch
+available fixtures: pytestconfig, capfd, monkeypatch, capsys, recwarn, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
-/tmp/doc-exec-68/b/test_error.py:1
+/tmp/doc-exec-70/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
-self = <test_step.TestUserHandling instance at 0x131fc20>
+self = <test_step.TestUserHandling instance at 0x238fdd0>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
-db = <conftest.DB instance at 0x1328878>
+db = <conftest.DB instance at 0x23f9998>
def test_a1(db):
> assert 0, db # to show value
-E AssertionError: <conftest.DB instance at 0x1328878>
+E AssertionError: <conftest.DB instance at 0x23f9998>
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
-db = <conftest.DB instance at 0x1328878>
+db = <conftest.DB instance at 0x23f9998>
def test_a2(db):
> assert 0, db # to show value
-E AssertionError: <conftest.DB instance at 0x1328878>
+E AssertionError: <conftest.DB instance at 0x23f9998>
a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
@@ -553,34 +553,34 @@ and run them::
$ py.test test_module.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_module.py FF
================================= FAILURES =================================
________________________________ test_fail1 ________________________________
-tmpdir = local('/tmp/pytest-42/test_fail10')
+tmpdir = local('/tmp/pytest-1012/test_fail10')
def test_fail1(tmpdir):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:4: AssertionError
========================= 2 failed in 0.01 seconds =========================
you will have a "failures" file which contains the failing test ids::
$ cat failures
-test_module.py::test_fail1 (/tmp/pytest-42/test_fail10)
+test_module.py::test_fail1 (/tmp/pytest-1012/test_fail10)
test_module.py::test_fail2
Making test result information available in fixtures
@@ -643,38 +643,38 @@ and run it::
$ py.test -s test_module.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.01 seconds =====================
@@ -76,23 +76,23 @@ marked ``smtp`` fixture function. Running the test looks like this::
$ py.test test_smtpsimple.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_smtpsimple.py F
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
-smtp = <smtplib.SMTP instance at 0x2ae3469203f8>
+smtp = <smtplib.SMTP instance at 0x15cc0e0>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert "merlinux" in msg
> assert 0 # for demo purposes
E assert 0
test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.21 seconds =========================
@@ -194,36 +194,36 @@ inspect what is going on and can now run the tests::
$ py.test test_module.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_module.py FF
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
-smtp = <smtplib.SMTP instance at 0x1af5440>
+smtp = <smtplib.SMTP instance at 0x237b638>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
-smtp = <smtplib.SMTP instance at 0x1af5440>
+smtp = <smtplib.SMTP instance at 0x237b638>
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
-========================= 2 failed in 0.17 seconds =========================
+========================= 2 failed in 0.23 seconds =========================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp`` object was passed into the two
@@ -270,8 +270,8 @@ Let's execute it::
$ py.test -s -q --tb=no
FFteardown smtp
-2 failed in 0.17 seconds
+2 failed in 0.21 seconds
We see that the ``smtp`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
@@ -312,7 +312,7 @@ again, nothing much has changed::
$ py.test -s -q --tb=no
FF
-2 failed in 0.21 seconds
+2 failed in 0.59 seconds
Let's quickly create another test module that actually sets the
server URL in its module namespace::
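The "FFteardown smtp" line indicates a finalizer registered through the fixture's `request` object; a minimal sketch (the `object()` stand-in replaces the real `smtplib.SMTP` connection):

```python
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    server = object()  # stand-in for smtplib.SMTP(...)

    def fin():
        # runs once, after the last test using the fixture has finished
        print("teardown smtp")

    request.addfinalizer(fin)
    return server
```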
FFFF
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________
-smtp = <smtplib.SMTP instance at 0x100ac20>
+smtp = <smtplib.SMTP instance at 0x21f3e60>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________
-smtp = <smtplib.SMTP instance at 0x100ac20>
+smtp = <smtplib.SMTP instance at 0x21f3e60>
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
-smtp = <smtplib.SMTP instance at 0x105b638>
+smtp = <smtplib.SMTP instance at 0x22047e8>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
> assert "merlinux" in response[1]
E assert 'merlinux' in 'mail.python.org\nSIZE 25600000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
test_module.py:5: AssertionError
----------------------------- Captured stdout ------------------------------
-finalizing <smtplib.SMTP instance at 0x100ac20>
+finalizing <smtplib.SMTP instance at 0x21f3e60>
________________________ test_noop[mail.python.org] ________________________
-smtp = <smtplib.SMTP instance at 0x105b638>
+smtp = <smtplib.SMTP instance at 0x22047e8>
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
-4 failed in 6.58 seconds
+4 failed in 6.06 seconds
We see that our two test functions each ran twice, against the different
``smtp`` instances. Note also, that with the ``mail.python.org``
@@ -464,13 +464,13 @@ Here we declare an ``app`` fixture which receives the previously defined
$ py.test -v test_appsetup.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED
-========================= 2 passed in 5.95 seconds =========================
+========================= 2 passed in 6.42 seconds =========================
Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@@ -528,9 +528,9 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ py.test -v -s test_module.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 8 items
test_module.py:15: test_0[1] test0 1
PASSED
test_module.py:15: test_0[2] test0 2
@@ -549,7 +549,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
PASSED
test_module.py:19: test_2[2-mod2] test2 2 mod2
PASSED
========================= 8 passed in 0.01 seconds =========================
You can see that the parametrized module-scoped ``modarg`` resource caused
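The four failures (two tests, each run against `merlinux.eu` and `mail.python.org`) come from parametrizing the fixture itself rather than the tests. A sketch of that shape, returning the parameter directly instead of opening a real connection:

```python
import pytest

@pytest.fixture(scope="module",
                params=["merlinux.eu", "mail.python.org"])
def smtp(request):
    # one module-scoped instance per parameter; request.param is the
    # current server name (a real fixture would connect to it instead)
    return request.param
```

Every test that asks for `smtp` is then collected once per parameter, which is why the session above reports four items instead of two.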
@@ -23,7 +23,7 @@ Installation options::
To check your installation has installed the correct version::
$ py.test --version
-This is pytest version 2.5.1, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
+This is pytest version 2.5.2, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
If you get an error checkout :ref:`installation issues`.
@@ -45,19 +45,19 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_sample.py F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
@@ -93,7 +93,7 @@ Running it with, this time in "quiet" reporting mode::
$ py.test -q test_sysexit.py
.
-1 passed in 0.00 seconds
+1 passed in 0.01 seconds
.. todo:: For further ways to assert exceptions see the `raises`
@@ -122,14 +122,14 @@ run the module by passing its filename::
.F
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
-self = <test_class.TestClass instance at 0x2b57dd0>
+self = <test_class.TestClass instance at 0x255a0e0>
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.01 seconds
@@ -158,18 +158,18 @@ before performing the test function call. Let's just run it::
F
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
-tmpdir = local('/tmp/pytest-38/test_needsfiles0')
+tmpdir = local('/tmp/pytest-1008/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
-/tmp/pytest-38/test_needsfiles0
+/tmp/pytest-1008/test_needsfiles0
-1 failed in 0.04 seconds
+1 failed in 0.01 seconds
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.
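The `test_sysexit.py` module run in quiet mode above checks that an exception is raised; the standard shape for that, using `pytest.raises` as the todo note mentions, is roughly:

```python
import pytest

def f():
    raise SystemExit(1)

def test_mytest():
    # passes only if f() raises SystemExit, matching the "." above
    with pytest.raises(SystemExit):
        f()
```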
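The per-invocation directory (`/tmp/pytest-1008/test_needsfiles0`) is created fresh for each test. Outside pytest, the behaviour can be imitated with the standard library; this is an analogy, not pytest's implementation:

```python
import os
import tempfile

# two "invocations" each get their own fresh, existing directory
d1 = tempfile.mkdtemp(prefix="test_needsfiles")
d2 = tempfile.mkdtemp(prefix="test_needsfiles")
assert d1 != d2
assert os.path.isdir(d1) and os.path.isdir(d2)
```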
@@ -53,16 +53,16 @@ them in turn::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 3 items
test_expectation.py ..F
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
input = '6*9', expected = 42
@pytest.mark.parametrize("input,expected", [
("3+5", 8),
("2+4", 6),
@@ -72,7 +72,7 @@ them in turn::
> assert eval(input) == expected
E assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.01 seconds ====================
@@ -100,11 +100,11 @@ Let's run this::
$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 3 items
test_expectation.py ..x
=================== 2 passed, 1 xfailed in 0.01 seconds ====================
The one parameter set which caused a failure previously now
@@ -165,14 +165,14 @@ Let's also run with a stringinput that will lead to a failing test::
F
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________
stringinput = '!'
def test_valid_string(stringinput):
> assert stringinput.isalpha()
-E assert <built-in method isalpha of str object at 0x2b72934ca198>()
+E assert <built-in method isalpha of str object at 0x2b869b32b148>()
-E + where <built-in method isalpha of str object at 0x2b72934ca198> = '!'.isalpha
+E + where <built-in method isalpha of str object at 0x2b869b32b148> = '!'.isalpha
test_strings.py:3: AssertionError
1 failed in 0.01 seconds
@@ -185,7 +185,7 @@ listlist::
$ py.test -q -rs test_strings.py
s
========================= short test summary info ==========================
-SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1094: got empty parameter set, function test_valid_string at /tmp/doc-exec-24/test_strings.py:1
+SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1110: got empty parameter set, function test_valid_string at /tmp/doc-exec-24/test_strings.py:1
1 skipped in 0.01 seconds
For further examples, you might want to look at :ref:`more
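The `..F` outcome matches evaluating the three parameter sets directly: only `("6*9", 42)` fails, because `eval("6*9")` is 54, exactly the value shown in the introspected assertion above:

```python
# the three parameter sets from test_expectation.py
cases = [("3+5", 8), ("2+4", 6), ("6*9", 42)]
outcomes = [eval(inp) == expected for inp, expected in cases]
assert outcomes == [True, True, False]   # i.e. "..F"
assert eval("6*9") == 54                 # the "54 == 42" failure
```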
@@ -64,7 +64,9 @@ tool, for example::
    pip uninstall pytest-NAME
If a plugin is installed, ``pytest`` automatically finds and integrates it,
-there is no need to activate it. Here is a initial list of known plugins:
+there is no need to activate it. We have a :doc:`beta page listing
+all 3rd party plugins and their status <plugins_index/index>` and here
+is a little annotated list for some popular plugins:
.. _`django`: https://www.djangoproject.com/
@@ -1,5 +1,5 @@
"""
-Script to generate the file `plugins_index.txt` with information about
+Script to generate the file `index.txt` with information about
pytest plugins taken directly from a live PyPI server.
Also includes plugin compatibility between different python and pytest versions,
@@ -34,9 +34,9 @@ def get_proxy(url):
def iter_plugins(client, search='pytest-'):
    """
    Returns an iterator of (name, version) from PyPI.
    :param client: ServerProxy
    :param search: package names to search for
    """
    for plug_data in client.search({'name': search}):
        yield plug_data['name'], plug_data['version']
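``iter_plugins`` above only needs an object with a PyPI-style ``search`` method, so it can be exercised offline with a stub client (the stub below is hypothetical, for illustration; the real script passes an ``xmlrpclib.ServerProxy``):

```python
class FakeClient:
    """Hypothetical stand-in for the xmlrpclib.ServerProxy the script uses."""
    def search(self, spec):
        # Mimics the list-of-dicts shape of PyPI's XML-RPC search results.
        return [
            {'name': 'pytest-cov', 'version': '1.6'},
            {'name': 'pytest-xdist', 'version': '1.9'},
        ]

def iter_plugins(client, search='pytest-'):
    # Same logic as the script above: yield (name, version) pairs.
    for plug_data in client.search({'name': search}):
        yield plug_data['name'], plug_data['version']

plugins = list(iter_plugins(FakeClient()))
```

Keeping the client behind this small interface is what lets the script swap a live server for a stub in tests.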
@@ -58,11 +58,11 @@ def obtain_plugins_table(plugins, client):
    """
    Returns information to populate a table of plugins, their versions,
    authors, etc.
    The returned information is a list of columns of `ColumnData`
    namedtuples(text, link). Link can be None if the text for that column
    should not be linked to anything.
    :param plugins: list of (name, version)
    :param client: ServerProxy
    """
@@ -141,7 +141,7 @@ def obtain_override_repositories():
def generate_plugins_index_from_table(filename, headers, rows):
    """
    Generates a RST file with the table data given.
    :param filename: output filename
    :param headers: see `obtain_plugins_table`
    :param rows: see `obtain_plugins_table`
@@ -168,14 +168,14 @@ def generate_plugins_index_from_table(filename, headers, rows):
        return ' '.join(char * length for length in column_lengths)
    with open(filename, 'w') as f:
        # write welcome
        print('.. _plugins_index:', file=f)
        print(file=f)
        print('List of Third-Party Plugins', file=f)
        print('===========================', file=f)
        print(file=f)
        # table
        print(get_row_limiter('='), file=f)
        formatted_headers = [
            '{0:^{fill}}'.format(header, fill=column_lengths[i])
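The RST table formatting in the script boils down to ``str.format`` centering and space-joined runs of a fill character; a minimal sketch of the same idea (the column widths here are example values, not what the script computes):

```python
# Example column widths; in the script these come from the table data.
column_lengths = [12, 8]

def get_row_limiter(char):
    # One run of `char` per column, space-separated, as in the script above.
    return ' '.join(char * length for length in column_lengths)

headers = ['Name', 'Version']
formatted_headers = [
    # '^' centers the header within its column width.
    '{0:^{fill}}'.format(header, fill=column_lengths[i])
    for i, header in enumerate(headers)
]
```

With these widths, ``get_row_limiter('=')`` produces the ``====`` separator rows that delimit a simple RST table.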
@@ -200,7 +200,7 @@ def generate_plugins_index(client, filename):
    """
    Generates an RST file with a table of the latest pytest plugins found in
    PyPI.
    :param client: ServerProxy
    :param filename: output filename
    """
@@ -214,7 +214,7 @@ def main(argv):
    Script entry point. Configures an option parser and calls the appropriate
    internal function.
    """
-    filename = os.path.join(os.path.dirname(__file__), 'plugins_index.txt')
+    filename = os.path.join(os.path.dirname(__file__), 'index.txt')
    url = 'http://pypi.python.org/pypi'
    parser = OptionParser(
@@ -159,14 +159,14 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 6 items
xfail_demo.py xxxxxx
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
  reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
  condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
@@ -175,7 +175,7 @@ Running it with the report-on-xfail option gives this output::
  condition: pytest.__version__[0] != "17"
XFAIL xfail_demo.py::test_hello6
  reason: reason
======================== 6 xfailed in 0.04 seconds =========================
.. _`skip/xfail with parametrize`:
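The ``condition:`` lines in the summary above come from string conditions on ``pytest.mark.xfail``, which pytest evaluates as Python expressions in a namespace that includes modules such as ``os``. A rough stdlib-only sketch of that evaluation (the namespace here is deliberately simplified, not pytest's exact one):

```python
import os

# Simplified version of how a string condition such as
# "hasattr(os, 'sep')" is evaluated: as an expression in a namespace.
namespace = {'os': os}
condition = "hasattr(os, 'sep')"
result = eval(condition, namespace)
```

Since ``os.sep`` exists on every supported platform, this condition is true and the corresponding test is expected to fail, matching the ``XFAIL`` lines in the report.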
@@ -29,7 +29,7 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 1 items
test_tmpdir.py F
@@ -37,7 +37,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
-tmpdir = local('/tmp/pytest-39/test_create_file0')
+tmpdir = local('/tmp/pytest-1009/test_create_file0')
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
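The ``tmpdir`` fixture shown above hands each test a fresh ``py.path.local`` directory; outside pytest, the same steps look like this stdlib sketch:

```python
import os
import tempfile

# Fresh base directory, as the tmpdir fixture would provide per test.
base = tempfile.mkdtemp()

# Equivalent of tmpdir.mkdir("sub").join("hello.txt") plus a write.
sub = os.path.join(base, "sub")
os.mkdir(sub)
hello = os.path.join(sub, "hello.txt")
with open(hello, "w") as f:
    f.write("content")
```

The fixture's advantage over raw ``tempfile`` is that pytest numbers and rotates these directories per test run, which is where paths like ``/tmp/pytest-1009/test_create_file0`` in the output come from.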
@@ -88,7 +88,7 @@ the ``self.db`` values in the traceback::
$ py.test test_unittest_db.py
=========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.5.1
+platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
collected 2 items
test_unittest_db.py FF
@@ -101,7 +101,7 @@ the ``self.db`` values in the traceback::
    def test_method1(self):
        assert hasattr(self, "db")
>       assert 0, self.db # fail for demo purposes
-E       AssertionError: <conftest.DummyDB instance at 0x11e23f8>
+E       AssertionError: <conftest.DummyDB instance at 0x12124d0>
test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
@@ -110,7 +110,7 @@ the ``self.db`` values in the traceback::
    def test_method2(self):
>       assert 0, self.db # fail for demo purposes
-E       AssertionError: <conftest.DummyDB instance at 0x11e23f8>
+E       AssertionError: <conftest.DummyDB instance at 0x12124d0>
test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.01 seconds =========================
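The ``MyTest`` methods above rely on a ``db`` attribute injected by an autouse fixture in ``conftest.py``. The closest plain-``unittest`` equivalent is a ``setUp`` method; this sketch (with a hypothetical ``DummyDB``) makes the tests pass instead of failing for demo purposes:

```python
import unittest

class DummyDB:
    """Hypothetical stand-in for the DummyDB defined in conftest.py."""

class MyTest(unittest.TestCase):
    def setUp(self):
        # The docs inject self.db via an autouse pytest fixture; plain
        # unittest setUp is the stdlib way to get the same effect.
        self.db = DummyDB()

    def test_method1(self):
        assert hasattr(self, "db")

# Run the TestCase programmatically, without any test runner CLI.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyTest)
result = unittest.TestResult()
suite.run(result)
```

The fixture-based approach in the docs has the advantage that the database setup lives in one ``conftest.py`` and applies to many test classes without each defining its own ``setUp``.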
@@ -27,7 +27,7 @@ def main():
        name='pytest',
        description='pytest: simple powerful testing with Python',
        long_description = long_description,
-       version='2.5.2.dev1',
+       version='2.5.2',
        url='http://pytest.org',
        license='MIT license',
        platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],
@@ -545,6 +545,7 @@ def test_capture_early_option_parsing(testdir):
@pytest.mark.xfail(sys.version_info >= (3, 0), reason='encoding issues')
+@pytest.mark.xfail(sys.version_info < (2, 6), reason='test not run on py25')
def test_capture_binary_output(testdir):
    testdir.makepyfile(r"""
        import pytest