bump version to 2.4.2, regen docs

holger krekel 2013-10-03 19:09:18 +02:00
parent cec7d47c1f
commit 0780f2573f
22 changed files with 230 additions and 185 deletions

View File

@ -8,9 +8,6 @@ Changes between 2.4.1 and 2.4.2
cause wrong matches because of an internal implementation quirk
(don't ask) which is now properly implemented. fixes issue345.
- avoid "IOError: Bad Filedescriptor" on pytest shutdown by not closing
the internal dupped stdout (fix is slightly hand-wavy but works).
- avoid the tmpdir fixture creating overly long filenames, especially
when parametrization is used (issue354)
@ -25,7 +22,7 @@ Changes between 2.4.1 and 2.4.2
docs.
- remove attempt to "dup" stdout at startup.
- remove attempt to "dup" stdout at startup as it's icky.
the normal capturing should catch enough possibilities
of tests messing up standard FDs.

View File

@ -5,6 +5,8 @@ Release announcements
.. toctree::
:maxdepth: 2
release-2.4.2
release-2.4.1
release-2.4.0
release-2.3.5
release-2.3.4

View File

@ -0,0 +1,32 @@
pytest-2.4.2: colorama on windows, plugin/tmpdir fixes
===========================================================================
pytest-2.4.2 is another bug-fixing release:
- fix "-k" matching of tests where "repr" and "attr" and other names would
cause wrong matches because of an internal implementation quirk
(don't ask) which is now properly implemented. fixes issue345.
- avoid the tmpdir fixture creating overly long filenames, especially
when parametrization is used (issue354)
- fix pytest-pep8 and pytest-flakes / pytest interactions
(the mark plugin's collection code assumed an item always
has a function, which is not true for those plugins).
Thanks Andi Zeidler.
- introduce node.get_marker/node.add_marker API for plugins
like pytest-pep8 and pytest-flakes to avoid the messy
details of the node.keywords pseudo-dicts. Adapted
docs. (A short usage sketch follows this list.)
- remove attempt to "dup" stdout at startup as it's icky.
the normal capturing should catch enough possibilities
of tests messing up standard FDs.
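
For plugin authors, the new ``get_marker``/``add_marker`` API mentioned above can be used roughly like this (a minimal sketch; the hook choice and the ``slow``/``serial`` marker names are illustrative, not part of the release)::

    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            # get_marker returns the marker (or None) without digging
            # through the node.keywords pseudo-dict
            if item.get_marker("slow") is not None:
                # add_marker attaches a further marker to the item
                item.add_marker(pytest.mark.serial)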
as usual, docs at http://pytest.org and upgrades via::
pip install -U pytest
have fun,
holger krekel

View File

@ -26,7 +26,7 @@ you will see the return value of the function call::
$ py.test test_assert1.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_assert1.py F
@ -116,7 +116,7 @@ if you run this module::
$ py.test test_assert2.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_assert2.py F
@ -191,6 +191,7 @@ the conftest file::
E vals: 1 != 2
test_foocompare.py:8: AssertionError
1 failed in 0.01 seconds
.. _assert-details:
.. _`assert introspection`:

View File

@ -129,6 +129,7 @@ Let's run this module without output-capturing::
E NameError: global name 'globresource' is not defined
test_glob.py:5: NameError
2 failed in 0.01 seconds
The two tests see the same global ``globresource`` object.
@ -177,6 +178,7 @@ And then re-run our test module::
E NameError: global name 'globresource' is not defined
test_glob.py:5: NameError
2 failed in 0.01 seconds
We are now running the two tests twice with two different global resource
instances. Note that the tests are ordered such that only

View File

@ -120,3 +120,4 @@ You can ask for available builtin or project-custom
path object.
in 0.00 seconds

View File

@ -64,7 +64,7 @@ of the failing function and hide the other one::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_module.py .F
@ -78,7 +78,7 @@ of the failing function and hide the other one::
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x2d79f50>
setting up <function test_func2 at 0x282d2a8>
==================== 1 failed, 1 passed in 0.01 seconds ====================
Accessing captured output from a test function

View File

@ -44,12 +44,12 @@ then you can just invoke ``py.test`` without command line options::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
mymodule.py .
========================= 1 passed in 0.02 seconds =========================
========================= 1 passed in 0.01 seconds =========================
It is possible to use fixtures using the ``getfixture`` helper::
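
    >>> # a minimal sketch only: pytest makes getfixture available
    >>> # inside doctests; here we request the builtin tmpdir fixture
    >>> tmp = getfixture('tmpdir')
    >>> tmp.ensure('hello.txt').check(file=1)
    True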

View File

@ -28,7 +28,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ py.test -v -m webtest
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
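
The module assumed by these runs looks roughly like this (a sketch reconstructed from the collection output; a third test is collected in the full run but omitted here, and the test bodies are placeholders)::

    # content of test_server.py
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass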
@ -40,7 +40,7 @@ Or the inverse, running all tests except the webtest ones::
$ py.test -v -m "not webtest"
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:6: test_something_quick PASSED
@ -61,7 +61,7 @@ select tests based on their names::
$ py.test -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
@ -73,7 +73,7 @@ And you can also run all tests except the ones that match the keyword::
$ py.test -k "not send_http" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:6: test_something_quick PASSED
@ -86,7 +86,7 @@ Or to select "http" and "quick" tests::
$ py.test -k "http or quick" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
@ -255,7 +255,7 @@ the test needs::
$ py.test -E stage2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_someenv.py s
@ -266,7 +266,7 @@ and here is one that specifies exactly the environment needed::
$ py.test -E stage1
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_someenv.py .
@ -331,6 +331,7 @@ Let's run this without capturing output and see what we get::
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
.
1 passed in 0.01 seconds
marking platform specific tests with pytest
--------------------------------------------------------------
@ -383,12 +384,12 @@ then you will see two test skipped and two executed tests as expected::
$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /tmp/doc-exec-273/conftest.py:12: cannot run on platform linux2
SKIP [2] /tmp/doc-exec-598/conftest.py:12: cannot run on platform linux2
=================== 2 passed, 2 skipped in 0.01 seconds ====================
@ -396,7 +397,7 @@ Note that if you specify a platform via the marker-command line option like this
$ py.test -m linux2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
test_plat.py .
@ -447,7 +448,7 @@ We can now use the ``-m option`` to select one set::
$ py.test -m interface --tb=short
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
test_module.py FF
@ -468,7 +469,7 @@ or to select both "event" and "interface" tests::
$ py.test -m "interface or event" --tb=short
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
test_module.py FFF

View File

@ -27,7 +27,7 @@ now execute the test specification::
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_simple.yml .F
@ -37,7 +37,7 @@ now execute the test specification::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.05 seconds ====================
==================== 1 failed, 1 passed in 0.03 seconds ====================
You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@ -56,7 +56,7 @@ consulted when reporting in ``verbose`` mode::
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_simple.yml:1: usecase: ok PASSED
@ -67,17 +67,17 @@ consulted when reporting in ``verbose`` mode::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.05 seconds ====================
==================== 1 failed, 1 passed in 0.03 seconds ====================
While developing your custom test collection and execution it's also
interesting to just look at the collection tree::
nonpython $ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
<YamlFile 'test_simple.yml'>
<YamlItem 'ok'>
<YamlItem 'hello'>
============================= in 0.05 seconds =============================
============================= in 0.03 seconds =============================

View File

@ -46,6 +46,7 @@ This means that we only run 2 tests if we do not pass ``--all``::
$ py.test -q test_compute.py
..
2 passed in 0.01 seconds
We run only two computations, so we see two dots.
Let's run the full monty::
@ -62,6 +63,7 @@ let's run the full monty::
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.01 seconds
As expected when running the full range of ``param1`` values
we'll get an error on the last one.
@ -104,7 +106,7 @@ this is a fully self-contained example which you can run with::
$ py.test test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
test_scenarios.py ....
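
The scenario support these runs exercise is implemented with ``pytest_generate_tests``, roughly like this (a sketch modeled on the surrounding docs; the scenario data and method names are assumptions)::

    # content of test_scenarios.py
    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist)

    scenario1 = ('basic', {'attribute': 'value'})
    scenario2 = ('advanced', {'attribute': 'value2'})

    class TestSampleWithScenarios:
        scenarios = [scenario1, scenario2]

        def test_demo1(self, attribute):
            assert isinstance(attribute, str)

        def test_demo2(self, attribute):
            assert isinstance(attribute, str)

Two scenarios times two test methods gives the four collected items shown above, with 'basic' and 'advanced' as test ids.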
@ -116,7 +118,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
$ py.test --collect-only test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
@ -180,7 +182,7 @@ Let's first see how it looks like at collection time::
$ py.test test_backends.py --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
@ -195,7 +197,7 @@ And then when we run the test::
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
db = <conftest.DB2 instance at 0x2038f80>
db = <conftest.DB2 instance at 0x2dbd950>
def test_db_initialized(db):
# a dummy test
@ -204,6 +206,7 @@ And then when we run the test::
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
1 failed, 1 passed in 0.01 seconds
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while ``pytest_generate_tests`` generated two corresponding calls to ``test_db_initialized`` during the collection phase.
@ -250,13 +253,14 @@ argument sets to use for each test function. Let's run it::
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
self = <test_parametrize.TestClass instance at 0x1338f80>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x258a6c8>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
E assert 1 == 2
test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.02 seconds
Indirect parametrization with multiple fixtures
--------------------------------------------------------------
@ -278,6 +282,7 @@ Running it results in some skips if we don't have all the python interpreters in
............sss............sss............sss............ssssssssssssssssss
========================= short test summary info ==========================
SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
48 passed, 27 skipped in 1.37 seconds
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@ -324,12 +329,12 @@ If you run this with reporting for skips enabled::
$ py.test -rs test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_module.py .s
test_module.py s.
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-275/conftest.py:10: could not import 'opt2'
SKIP [1] /tmp/doc-exec-600/conftest.py:10: could not import 'opt2'
=================== 1 passed, 1 skipped in 0.01 seconds ====================

View File

@ -43,7 +43,7 @@ then the test collection looks like this::
$ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
<Module 'check_myapp.py'>
<Class 'CheckMyApp'>
@ -82,7 +82,7 @@ You can always peek at the collection tree without running tests like this::
. $ py.test --collect-only pythoncollection.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 3 items
<Module 'pythoncollection.py'>
<Function 'test_function'>
@ -135,7 +135,7 @@ interpreters and will leave out the setup.py file::
$ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
<Module 'pkg/module_py2.py'>
<Function 'test_only_on_python2'>

View File

@ -13,7 +13,7 @@ get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@ -30,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x1445e10>
self = <failure_demo.TestFailing object at 0x26f8f50>
def test_simple(self):
def f():
@ -40,13 +40,13 @@ get on the terminal - we are working on that):
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x137c6e0>()
E + and 43 = <function g at 0x137c758>()
E + where 42 = <function f at 0x269d5f0>()
E + and 43 = <function g at 0x269d6e0>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x135a1d0>
self = <failure_demo.TestFailing object at 0x26ade90>
def test_simple_multiline(self):
otherfunc_multi(
@ -66,19 +66,19 @@ get on the terminal - we are working on that):
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x1458ed0>
self = <failure_demo.TestFailing object at 0x26aac10>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x137caa0>()
E + where 42 = <function f at 0x269d8c0>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x14451d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x2861490>
def test_eq_text(self):
> assert 'spam' == 'eggs'
@ -89,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1458c90>
self = <failure_demo.TestSpecialisedExplanations object at 0x26ade10>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@ -102,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1434390>
self = <failure_demo.TestSpecialisedExplanations object at 0x26f8ad0>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -115,7 +115,7 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1459f50>
self = <failure_demo.TestSpecialisedExplanations object at 0x26aa450>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
@ -132,7 +132,7 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x135a790>
self = <failure_demo.TestSpecialisedExplanations object at 0x26ad7d0>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
@ -156,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x138dfd0>
self = <failure_demo.TestSpecialisedExplanations object at 0x26f8550>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@ -166,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x135a990>
self = <failure_demo.TestSpecialisedExplanations object at 0x26aa310>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@ -178,12 +178,12 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1459310>
self = <failure_demo.TestSpecialisedExplanations object at 0x26a6950>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E Hiding 1 identical items, use -v to show
E Omitting 1 identical items, use -v to show
E Differing items:
E {'b': 1} != {'b': 2}
E Left contains more items:
@ -194,7 +194,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1434310>
self = <failure_demo.TestSpecialisedExplanations object at 0x26e4210>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@ -210,7 +210,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x138ded0>
self = <failure_demo.TestSpecialisedExplanations object at 0x26f9c10>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@ -220,7 +220,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1459e10>
self = <failure_demo.TestSpecialisedExplanations object at 0x26aac50>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@ -229,7 +229,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x1434950>
self = <failure_demo.TestSpecialisedExplanations object at 0x26a6b90>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@ -247,7 +247,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x138dbd0>
self = <failure_demo.TestSpecialisedExplanations object at 0x26f9d90>
def test_not_in_text_single(self):
text = 'single foo line'
@ -260,7 +260,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x14593d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x26f89d0>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@ -273,7 +273,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x1459650>
self = <failure_demo.TestSpecialisedExplanations object at 0x26ad310>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@ -292,7 +292,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1434850>.b
E + where 1 = <failure_demo.Foo object at 0x26e4650>.b
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
@ -302,8 +302,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1459dd0>.b
E + where <failure_demo.Foo object at 0x1459dd0> = <class 'failure_demo.Foo'>()
E + where 1 = <failure_demo.Foo object at 0x26f8c50>.b
E + where <failure_demo.Foo object at 0x26f8c50> = <class 'failure_demo.Foo'>()
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
@ -319,7 +319,7 @@ get on the terminal - we are working on that):
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x1434150>
self = <failure_demo.Foo object at 0x26a65d0>
def _get_b(self):
> raise Exception('Failed to get attrib')
@ -335,15 +335,15 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x14590d0>.b
E + where <failure_demo.Foo object at 0x14590d0> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1459b10>.b
E + where <failure_demo.Bar object at 0x1459b10> = <class 'failure_demo.Bar'>()
E + where 1 = <failure_demo.Foo object at 0x26ad050>.b
E + where <failure_demo.Foo object at 0x26ad050> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x26ad850>.b
E + where <failure_demo.Bar object at 0x26ad850> = <class 'failure_demo.Bar'>()
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x13a0d88>
self = <failure_demo.TestRaises instance at 0x2859e18>
def test_raises(self):
s = 'qwe'
@ -355,10 +355,10 @@ get on the terminal - we are working on that):
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:858>:1: ValueError
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:905>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x145fcf8>
self = <failure_demo.TestRaises instance at 0x27013b0>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@ -367,7 +367,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x13a9ea8>
self = <failure_demo.TestRaises instance at 0x271d9e0>
def test_raise(self):
> raise ValueError("demo error")
@ -376,7 +376,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x13843f8>
self = <failure_demo.TestRaises instance at 0x270b3f8>
def test_tupleerror(self):
> a,b = [1]
@ -385,7 +385,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x14532d8>
self = <failure_demo.TestRaises instance at 0x26ab368>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@ -398,7 +398,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x139d290>
self = <failure_demo.TestRaises instance at 0x271b488>
def test_some_error(self):
> if namenotexi:
@ -426,7 +426,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x137d758>
self = <failure_demo.TestMoreErrors instance at 0x271da28>
def test_complex_error(self):
def f():
@ -455,7 +455,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x13a5200>
self = <failure_demo.TestMoreErrors instance at 0x2716950>
def test_z1_unpack_error(self):
l = []
@ -465,7 +465,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1395290>
self = <failure_demo.TestMoreErrors instance at 0x26f5e18>
def test_z2_type_error(self):
l = 3
@ -475,19 +475,19 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x137f200>
self = <failure_demo.TestMoreErrors instance at 0x27075f0>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x143f288>('456')
E + where <built-in method startswith of str object at 0x143f288> = '123'.startswith
E assert <built-in method startswith of str object at 0x26ff8c8>('456')
E + where <built-in method startswith of str object at 0x26ff8c8> = '123'.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x145fb00>
self = <failure_demo.TestMoreErrors instance at 0x2707ef0>
def test_startswith_nested(self):
def f():
@ -495,15 +495,15 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x143f288>('456')
E + where <built-in method startswith of str object at 0x143f288> = '123'.startswith
E + where '123' = <function f at 0x13abaa0>()
E + and '456' = <function g at 0x13ab578>()
E assert <built-in method startswith of str object at 0x26ff8c8>('456')
E + where <built-in method startswith of str object at 0x26ff8c8> = '123'.startswith
E + where '123' = <function f at 0x269d7d0>()
E + and '456' = <function g at 0x2698ed8>()
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x139cd40>
self = <failure_demo.TestMoreErrors instance at 0x271bef0>
def test_global_func(self):
> assert isinstance(globf(42), float)
@ -513,18 +513,18 @@ get on the terminal - we are working on that):
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x13593b0>
self = <failure_demo.TestMoreErrors instance at 0x271bb90>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x13593b0>.x
E + where 42 = <failure_demo.TestMoreErrors instance at 0x271bb90>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x1465d40>
self = <failure_demo.TestMoreErrors instance at 0x2634170>
def test_compare(self):
> assert globf(10) < 5
@ -534,7 +534,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x1456ea8>
self = <failure_demo.TestMoreErrors instance at 0x2717f80>
def test_try_finally(self):
x = 1
@ -543,4 +543,4 @@ get on the terminal - we are working on that):
E assert 1 == 0
failure_demo.py:210: AssertionError
======================== 39 failed in 0.21 seconds =========================
======================== 39 failed in 0.26 seconds =========================

View File

@ -55,6 +55,7 @@ Let's run this without supplying our new option::
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
first
1 failed in 0.01 seconds
And now with supplying a command line option::
@ -76,6 +77,7 @@ And now with supplying a command line option::
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
second
1 failed in 0.01 seconds
You can see that the command line option arrived in our test. This
completes the basic pattern. However, one often rather wants to process
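
In code, the basic pattern just described is roughly the following (a sketch reconstructed from the captured output; the ``cmdopt`` option and fixture names are assumptions)::

    # content of conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
                         help="my option: type1 or type2")

    @pytest.fixture
    def cmdopt(request):
        return request.config.option.cmdopt

    # content of test_sample.py
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
        assert 0  # to see what was printed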
@ -106,7 +108,7 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 0 items
============================= in 0.00 seconds =============================
@ -150,12 +152,12 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-278/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-603/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ====================
@ -163,7 +165,7 @@ Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_module.py ..
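
The conftest assumed by both runs is roughly (a sketch; the skip reason string is taken from the summary above, the option handling is an assumption)::

    # content of conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run slow tests")

    def pytest_runtest_setup(item):
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")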
@ -206,6 +208,7 @@ Let's run our little function::
E Failed: not configured: 42
test_checkconfig.py:8: Failed
1 failed in 0.01 seconds
Detect if running from within a py.test run
--------------------------------------------------------------
@ -253,7 +256,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
project deps: mylib-1.1
collected 0 items
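
The conftest producing the extra header line is presumably as simple as this sketch::

    # content of conftest.py
    def pytest_report_header(config):
        return "project deps: mylib-1.1"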
@ -276,7 +279,7 @@ which will add info only when run with "--v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
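
For the verbose-only variant, the hook presumably checks the verbosity level (a sketch; the strings are taken from the output above)::

    # content of conftest.py
    def pytest_report_header(config):
        if config.option.verbose > 0:
            return ["info1: did you know that ...", "did you?"]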
@ -287,7 +290,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 0 items
============================= in 0.00 seconds =============================
@ -319,7 +322,7 @@ Now we can profile which test functions execute the slowest::
$ py.test --durations=3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 3 items
test_some_are_slow.py ...
@ -380,7 +383,7 @@ If we run this::
$ py.test -rx
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 4 items
test_step.py .Fx.
@ -388,7 +391,7 @@ If we run this::
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x282b8c0>
self = <test_step.TestUserHandling instance at 0x1c6fb90>
def test_modification(self):
> assert 0
@ -398,7 +401,7 @@ If we run this::
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds ===============
We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
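
The incremental-testing pattern behind this behaviour can be implemented with two hooks, roughly as follows (a sketch; the ``incremental`` keyword and the bookkeeping attribute are modeled on the pytest docs of this era)::

    # content of conftest.py
    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                # remember the failing test on the containing class/module
                item.parent._previousfailed = item

    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)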
@ -450,7 +453,7 @@ We can run this::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 7 items
test_step.py .Fx.
@ -460,17 +463,17 @@ We can run this::
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /tmp/doc-exec-278/b/test_error.py, line 1
file /tmp/doc-exec-603/b/test_error.py, line 1
def test_root(db): # no db here, will error out
fixture 'db' not found
available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
/tmp/doc-exec-278/b/test_error.py:1
/tmp/doc-exec-603/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x26145f0>
self = <test_step.TestUserHandling instance at 0x22f3518>
def test_modification(self):
> assert 0
@ -479,20 +482,20 @@ We can run this::
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
db = <conftest.DB instance at 0x26211b8>
db = <conftest.DB instance at 0x2304248>
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x26211b8>
E AssertionError: <conftest.DB instance at 0x2304248>
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
db = <conftest.DB instance at 0x26211b8>
db = <conftest.DB instance at 0x2304248>
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x26211b8>
E AssertionError: <conftest.DB instance at 0x2304248>
a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
@ -550,7 +553,7 @@ and run them::
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_module.py FF
@ -558,7 +561,7 @@ and run them::
================================= FAILURES =================================
________________________________ test_fail1 ________________________________
tmpdir = local('/tmp/pytest-326/test_fail10')
tmpdir = local('/tmp/pytest-190/test_fail10')
def test_fail1(tmpdir):
> assert 0
@ -572,12 +575,12 @@ and run them::
E assert 0
test_module.py:4: AssertionError
========================= 2 failed in 0.02 seconds =========================
========================= 2 failed in 0.01 seconds =========================
you will have a "failures" file which contains the failing test ids::
$ cat failures
test_module.py::test_fail1 (/tmp/pytest-326/test_fail10)
test_module.py::test_fail1 (/tmp/pytest-190/test_fail10)
test_module.py::test_fail2
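
A hook producing such a file might look roughly like this (a sketch in the 2.4-era ``__multicall__`` hook style; the real example also records the tmpdir path seen in the first id above)::

    # content of conftest.py
    import os.path
    import pytest

    @pytest.mark.tryfirst
    def pytest_runtest_makereport(item, call, __multicall__):
        # run all other hooks first to obtain the report object
        rep = __multicall__.execute()
        # only look at actual failing test calls, not setup/teardown
        if rep.when == "call" and rep.failed:
            mode = "a" if os.path.exists("failures") else "w"
            with open("failures", mode) as f:
                f.write(rep.nodeid + "\n")
        return rep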
Making test result information available in fixtures
@ -640,10 +643,12 @@ and run it::
$ py.test -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 3 items
test_module.py EFF
test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@ -671,9 +676,7 @@ and run it::
E assert 0
test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.01 seconds =====================
setting up a test failed! test_module.py::test_setup_fails
executing test failed test_module.py::test_call_fails
==================== 2 failed, 1 error in 0.02 seconds =====================
You'll see that the fixture finalizers could use the precise reporting
information.

View File

@ -61,12 +61,13 @@ will be called ahead of running any tests::
If you run this without output capturing::
$ py.test -q -s test_module.py
....
callattr_ahead_of_alltests called
callme called!
callme other called
SomeTest callme called
test_method1 called
test_method1 called
test other
test_unit1 method called
.test_method1 called
.test other
.test_unit1 method called
.
4 passed in 0.02 seconds

View File

@ -76,8 +76,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
$ py.test test_smtpsimple.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12
plugins: xdist, pep8, cov, cache, capturelog, instafail
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_smtpsimple.py F
@ -85,7 +84,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x22530e0>
smtp = <smtplib.SMTP instance at 0x2bb9d88>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
@ -95,7 +94,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
E assert 0
test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.17 seconds =========================
========================= 1 failed in 0.18 seconds =========================
In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
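
The file behind this run reads roughly as follows (a sketch; the test body is reconstructed from the traceback, the server name is an assumption)::

    # content of test_smtpsimple.py
    import smtplib
    import pytest

    @pytest.fixture
    def smtp():
        return smtplib.SMTP("merlinux.eu")

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert 0  # for demo purposes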
@ -195,8 +194,7 @@ inspect what is going on and can now run the tests::
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12
plugins: xdist, pep8, cov, cache, capturelog, instafail
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_module.py FF
@ -204,7 +202,7 @@ inspect what is going on and can now run the tests::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x165fa28>
smtp = <smtplib.SMTP instance at 0x18f2fc8>
def test_ehlo(smtp):
response = smtp.ehlo()
@ -216,7 +214,7 @@ inspect what is going on and can now run the tests::
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
smtp = <smtplib.SMTP instance at 0x165fa28>
smtp = <smtplib.SMTP instance at 0x18f2fc8>
def test_noop(smtp):
response = smtp.noop()
@ -225,7 +223,7 @@ inspect what is going on and can now run the tests::
E assert 0
test_module.py:11: AssertionError
========================= 2 failed in 0.18 seconds =========================
========================= 2 failed in 0.16 seconds =========================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp`` object was passed into the two
@ -271,8 +269,9 @@ the fixture in the module has finished execution.
Let's execute it::
$ py.test -s -q --tb=no
FF
2 failed in 0.20 seconds
FFteardown smtp
2 failed in 0.15 seconds
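
The ``teardown smtp`` print comes from a finalizer registered by the fixture, roughly like this (a sketch; the server name is an assumption)::

    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp(request):
        smtp = smtplib.SMTP("merlinux.eu")
        def fin():
            print("teardown smtp")
            smtp.close()
        request.addfinalizer(fin)
        return smtp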
We see that the ``smtp`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
@ -313,7 +312,7 @@ again, nothing much has changed::
$ py.test -s -q --tb=no
FF
2 failed in 0.18 seconds
2 failed in 0.16 seconds
Let's quickly create another test module that actually sets the
server URL in its module namespace::
@ -380,7 +379,7 @@ So let's just do another run::
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x28d13b0>
smtp = <smtplib.SMTP instance at 0x2662290>
def test_ehlo(smtp):
response = smtp.ehlo()
@ -392,7 +391,7 @@ So let's just do another run::
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x28d13b0>
smtp = <smtplib.SMTP instance at 0x2662290>
def test_noop(smtp):
response = smtp.noop()
@ -403,7 +402,7 @@ So let's just do another run::
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x28d8440>
smtp = <smtplib.SMTP instance at 0x26c2dd0>
def test_ehlo(smtp):
response = smtp.ehlo()
@ -414,7 +413,7 @@ So let's just do another run::
test_module.py:5: AssertionError
________________________ test_noop[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x28d8440>
smtp = <smtplib.SMTP instance at 0x26c2dd0>
def test_noop(smtp):
response = smtp.noop()
@ -423,7 +422,7 @@ So let's just do another run::
E assert 0
test_module.py:11: AssertionError
4 failed in 6.47 seconds
4 failed in 6.32 seconds
We see that our two test functions each ran twice, against the different
``smtp`` instances. Note also, that with the ``mail.python.org``
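
The fixture driving these four parametrized runs is roughly the following (a sketch; the two server names are taken from the test ids above)::

    import smtplib
    import pytest

    @pytest.fixture(scope="module",
                    params=["merlinux.eu", "mail.python.org"])
    def smtp(request):
        smtp = smtplib.SMTP(request.param)
        def fin():
            smtp.close()
        request.addfinalizer(fin)
        return smtp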
@ -463,15 +462,13 @@ Here we declare an ``app`` fixture which receives the previously defined
$ py.test -v test_appsetup.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12 -- /home/hpk/venv/0/bin/python
cachedir: /tmp/doc-exec-127/.cache
plugins: xdist, pep8, cov, cache, capturelog, instafail
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED
test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
========================= 2 passed in 6.07 seconds =========================
========================= 2 passed in 5.75 seconds =========================
Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@ -529,9 +526,7 @@ Let's run the tests in verbose mode while looking at the print output::
$ py.test -v -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12 -- /home/hpk/venv/0/bin/python
cachedir: /tmp/doc-exec-127/.cache
plugins: xdist, pep8, cov, cache, capturelog, instafail
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 8 items
test_module.py:15: test_0[1] test0 1
@ -553,7 +548,7 @@ Let's run the tests in verbose mode while looking at the print output::
test_module.py:19: test_2[2-mod2] test2 2 mod2
PASSED
========================= 8 passed in 0.02 seconds =========================
========================= 8 passed in 0.01 seconds =========================
You can see that the parametrized module-scoped ``modarg`` resource caused
an ordering of test execution that led to the fewest possible "active" resources. The finalizer for the ``mod1`` parametrized resource was executed
@ -609,7 +604,7 @@ to verify our fixture is activated and the tests pass::
$ py.test -q
..
2 passed in 0.02 seconds
2 passed in 0.01 seconds
You can specify multiple fixtures like this::
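
    # a sketch: stacking several fixture names in one marker
    # (the fixture names here are illustrative)
    @pytest.mark.usefixtures("cleandir", "anotherfixture")
    def test_something():
        pass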
@ -680,7 +675,7 @@ If we run it, we get two passing tests::
$ py.test -q
..
2 passed in 0.02 seconds
2 passed in 0.01 seconds
Here is how autouse fixtures work in other scopes:

View File

@ -23,7 +23,7 @@ Installation options::
To check your installation has installed the correct version::
$ py.test --version
This is py.test version 2.3.5, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.py
This is py.test version 2.4.2, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
If you get an error checkout :ref:`installation issues`.
@ -45,7 +45,7 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_sample.py F
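
For reference, the file being run is presumably the classic getting-started sample (a sketch)::

    # content of test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5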
@ -93,6 +93,7 @@ Running it, this time in "quiet" reporting mode::
$ py.test -q test_sysexit.py
.
1 passed in 0.01 seconds
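
The module exercised here presumably looks like this (a sketch)::

    # content of test_sysexit.py
    import pytest

    def f():
        raise SystemExit(1)

    def test_mytest():
        with pytest.raises(SystemExit):
            f()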
.. todo:: For further ways to assert exceptions see the `raises`
@ -122,7 +123,7 @@ run the module by passing its filename::
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x315b488>
self = <test_class.TestClass instance at 0x1e1f518>
def test_two(self):
x = "hello"
@ -130,6 +131,7 @@ run the module by passing its filename::
E assert hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.01 seconds
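
The class-based module behind this run reads roughly as follows (``test_two`` is reconstructed from the traceback; ``test_one`` is an assumption)::

    # content of test_class.py
    class TestClass:
        def test_one(self):
            x = "this"
            assert 'h' in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, 'check')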
The first test passed, the second failed. Again we can easily see
the intermediate values used in the assertion, helping us to
@ -157,7 +159,7 @@ before performing the test function call. Let's just run it::
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-322/test_needsfiles0')
tmpdir = local('/tmp/pytest-186/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
@ -166,7 +168,8 @@ before performing the test function call. Let's just run it::
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-322/test_needsfiles0
/tmp/pytest-186/test_needsfiles0
1 failed in 0.01 seconds
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.

View File

@ -52,15 +52,14 @@ tuples so that the ``test_eval`` function will run three times using
them in turn::
$ py.test
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev3
plugins: xdist, cache, cli, pep8, xprocess, cov, capturelog, bdd-splinter, rerunfailures, instafail, localserver
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 3 items
test_expectation.py ..F
=================================== FAILURES ===================================
______________________________ test_eval[6*9-42] _______________________________
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
input = '6*9', expected = 42
@ -75,7 +74,7 @@ them in turn::
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
====================== 1 failed, 2 passed in 0.02 seconds ======================
==================== 1 failed, 2 passed in 0.01 seconds ====================
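
For reference, the parametrized module behind this run is roughly (a sketch reconstructed from the failure output)::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize("input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected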
As designed in this example, only one pair of input/output values fails
the simple test function. And as usual with test function arguments,
@ -100,14 +99,13 @@ for example with the builtin ``mark.xfail``::
Let's run this::
$ py.test
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev3
plugins: xdist, cache, cli, pep8, xprocess, cov, capturelog, bdd-splinter, rerunfailures, instafail, localserver
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 3 items
test_expectation.py ..x
===================== 2 passed, 1 xfailed in 0.02 seconds ======================
=================== 2 passed, 1 xfailed in 0.01 seconds ====================
The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test.
@ -159,22 +157,24 @@ If we now pass two stringinput values, our test will run twice::
$ py.test -q --stringinput="hello" --stringinput="world" test_strings.py
..
2 passed in 0.01 seconds
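
The machinery behind ``--stringinput`` is roughly the following (a sketch; the conftest part is an assumption, while the test function is visible in the failure below)::

    # content of conftest.py
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.option.stringinput)

    # content of test_strings.py
    def test_valid_string(stringinput):
        assert stringinput.isalpha()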
Let's also run with a stringinput that will lead to a failing test::
$ py.test -q --stringinput="!" test_strings.py
F
=================================== FAILURES ===================================
_____________________________ test_valid_string[!] _____________________________
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________
stringinput = '!'
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E assert <built-in method isalpha of str object at 0x7fd657390fd0>()
E + where <built-in method isalpha of str object at 0x7fd657390fd0> = '!'.isalpha
E assert <built-in method isalpha of str object at 0x2ac85b043198>()
E + where <built-in method isalpha of str object at 0x2ac85b043198> = '!'.isalpha
test_strings.py:3: AssertionError
1 failed in 0.01 seconds
As expected our test function fails.
@ -184,8 +184,9 @@ listlist::
$ py.test -q -rs test_strings.py
s
=========================== short test summary info ============================
SKIP [1] /home/hpk/p/pytest/_pytest/python.py:999: got empty parameter set, function test_valid_string at /tmp/doc-exec-2/test_strings.py:1
========================= short test summary info ==========================
SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1024: got empty parameter set, function test_valid_string at /tmp/doc-exec-561/test_strings.py:1
1 skipped in 0.01 seconds
For further examples, you might want to look at :ref:`more
parametrization examples <paramexamples>`.

View File

@ -158,14 +158,14 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 6 items
xfail_demo.py xxxxxx
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
@ -174,7 +174,7 @@ Running it with the report-on-xfail option gives this output::
condition: pytest.__version__[0] != "17"
XFAIL xfail_demo.py::test_hello6
reason: reason
======================== 6 xfailed in 0.05 seconds =========================
.. _`skip/xfail with parametrize`:

View File

@ -29,7 +29,7 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 1 items
test_tmpdir.py F
@ -37,7 +37,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('/tmp/pytest-323/test_create_file0')
tmpdir = local('/tmp/pytest-187/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
@ -48,7 +48,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.01 seconds =========================
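
As a complete module, the test producing this failure reads roughly (the body up to the final ``assert 0`` is visible in the traceback above)::

    # content of test_tmpdir.py
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert 0  # for demo purposes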
.. _`base temporary directory`:

View File

@ -88,7 +88,7 @@ the ``self.db`` values in the traceback::
$ py.test test_unittest_db.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
collected 2 items
test_unittest_db.py FF
@ -101,7 +101,7 @@ the ``self.db`` values in the traceback::
def test_method1(self):
assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x19fdf38>
E AssertionError: <conftest.DummyDB instance at 0x27b2b00>
test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
@ -110,7 +110,7 @@ the ``self.db`` values in the traceback::
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x19fdf38>
E AssertionError: <conftest.DummyDB instance at 0x27b2b00>
test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.02 seconds =========================
@ -160,6 +160,7 @@ Running this test module ...::
$ py.test -q test_unittest_cleandir.py
.
1 passed in 0.02 seconds
... gives us one passed test because the ``initdir`` fixture function
was executed ahead of the ``test_method``.

View File

@ -188,7 +188,7 @@ Running it will show that ``MyPlugin`` was added and its
hook was invoked::
$ python myinvoke.py
*** test run reporting finishing
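
A minimal ``myinvoke.py`` matching this output could look like the following sketch (the hook name is inferred from the printed message)::

    # content of myinvoke.py
    import pytest

    class MyPlugin:
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    pytest.main("-qq", plugins=[MyPlugin()])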
.. include:: links.inc