parent 3d990a6237
commit 738c8762df
@@ -26,8 +26,8 @@ you will see the return value of the function call::

$ py.test test_assert1.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-98, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-87, inifile:
collected 1 items

test_assert1.py F
@@ -136,8 +136,8 @@ if you run this module::

$ py.test test_assert2.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-98, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-87, inifile:
collected 1 items

test_assert2.py F
@@ -213,7 +213,7 @@ the conftest file::
E vals: 1 != 2

test_foocompare.py:8: AssertionError
- 1 failed in 0.00 seconds
+ 1 failed in 0.01 seconds

.. _assert-details:
.. _`assert introspection`:
@@ -64,8 +64,8 @@ of the failing function and hide the other one::

$ py.test
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-101, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-90, inifile:
collected 2 items

test_module.py .F
@@ -79,7 +79,7 @@ of the failing function and hide the other one::

test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
- setting up <function test_func2 at 0x2b24d5259158>
+ setting up <function test_func2 at 0x7fa678d6eb70>
==================== 1 failed, 1 passed in 0.01 seconds ====================

Accessing captured output from a test function
@@ -44,13 +44,13 @@ then you can just invoke ``py.test`` without command line options::

$ py.test
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-107, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-96, inifile: pytest.ini
collected 1 items

mymodule.py .

- ========================= 1 passed in 0.05 seconds =========================
+ ========================= 1 passed in 0.06 seconds =========================

It is possible to use fixtures using the ``getfixture`` helper::
@@ -31,8 +31,8 @@ You can then restrict a test run to only run tests marked with ``webtest``::

$ py.test -v -m webtest
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 4 items

test_server.py::test_send_http PASSED
@@ -44,8 +44,8 @@ Or the inverse, running all tests except the webtest ones::

$ py.test -v -m "not webtest"
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 4 items

test_server.py::test_something_quick PASSED
@@ -64,8 +64,8 @@ tests based on their module, class, method, or function name::

$ py.test -v test_server.py::TestClass::test_method
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 5 items

test_server.py::TestClass::test_method PASSED
@@ -76,8 +76,8 @@ You can also select on the class::

$ py.test -v test_server.py::TestClass
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 4 items

test_server.py::TestClass::test_method PASSED
@@ -88,8 +88,8 @@ Or select multiple nodes::

$ py.test -v test_server.py::TestClass test_server.py::test_send_http
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 8 items

test_server.py::TestClass::test_method PASSED
@@ -125,8 +125,8 @@ select tests based on their names::

$ py.test -v -k http # running with the above defined example module
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 4 items

test_server.py::test_send_http PASSED
@@ -138,8 +138,8 @@ And you can also run all tests except the ones that match the keyword::

$ py.test -k "not send_http" -v
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 4 items

test_server.py::test_something_quick PASSED
@@ -153,8 +153,8 @@ Or to select "http" and "quick" tests::

$ py.test -k "http or quick" -v
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-157, inifile:
collecting ... collected 4 items

test_server.py::test_send_http PASSED
@@ -342,25 +342,25 @@ the test needs::

$ py.test -E stage2
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-157, inifile:
collected 1 items

test_someenv.py s

- ======================== 1 skipped in 0.00 seconds =========================
+ ======================== 1 skipped in 0.01 seconds =========================

and here is one that specifies exactly the environment needed::

$ py.test -E stage1
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-157, inifile:
collected 1 items

test_someenv.py .

- ========================= 1 passed in 0.00 seconds =========================
+ ========================= 1 passed in 0.01 seconds =========================

The ``--markers`` option always gives you a list of available markers::
@@ -473,13 +473,13 @@ then you will see two test skipped and two executed tests as expected::

$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-157, inifile:
collected 4 items

test_plat.py sss.
========================= short test summary info ==========================
- SKIP [3] /tmp/doc-exec-167/conftest.py:12: cannot run on platform linux
+ SKIP [3] /tmp/doc-exec-157/conftest.py:12: cannot run on platform linux

=================== 1 passed, 3 skipped in 0.01 seconds ====================
@@ -487,8 +487,8 @@ Note that if you specify a platform via the marker-command line option like this

$ py.test -m linux2
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-157, inifile:
collected 4 items

test_plat.py s
@@ -539,8 +539,8 @@ We can now use the ``-m option`` to select one set::

$ py.test -m interface --tb=short
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-157, inifile:
collected 4 items

test_module.py FF
@@ -555,14 +555,14 @@ We can now use the ``-m option`` to select one set::
assert 0
E assert 0
================== 2 tests deselected by "-m 'interface'" ==================
- ================== 2 failed, 2 deselected in 0.01 seconds ==================
+ ================== 2 failed, 2 deselected in 0.02 seconds ==================

or to select both "event" and "interface" tests::

$ py.test -m "interface or event" --tb=short
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-167, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-157, inifile:
collected 4 items

test_module.py FFF
@@ -581,4 +581,4 @@ or to select both "event" and "interface" tests::
assert 0
E assert 0
============= 1 tests deselected by "-m 'interface or event'" ==============
- ================== 3 failed, 1 deselected in 0.01 seconds ==================
+ ================== 3 failed, 1 deselected in 0.02 seconds ==================
@@ -27,18 +27,18 @@ now execute the test specification::

nonpython $ py.test test_simple.yml
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /home/hpk/p/pytest/doc/en, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
collected 2 items

- test_simple.yml F.
+ test_simple.yml .F

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
- ==================== 1 failed, 1 passed in 0.03 seconds ====================
+ ==================== 1 failed, 1 passed in 0.19 seconds ====================

You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@@ -57,30 +57,30 @@ consulted when reporting in ``verbose`` mode::

nonpython $ py.test -v
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /home/hpk/p/pytest/doc/en, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
collecting ... collected 2 items

- test_simple.yml::hello FAILED
test_simple.yml::ok PASSED
+ test_simple.yml::hello FAILED

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
- ==================== 1 failed, 1 passed in 0.03 seconds ====================
+ ==================== 1 failed, 1 passed in 0.05 seconds ====================

While developing your custom test collection and execution it's also
interesting to just look at the collection tree::

nonpython $ py.test --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /home/hpk/p/pytest/doc/en, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
collected 2 items
<YamlFile 'example/nonpython/test_simple.yml'>
- <YamlItem 'hello'>
<YamlItem 'ok'>
+ <YamlItem 'hello'>

- ============================= in 0.03 seconds =============================
+ ============================= in 0.04 seconds =============================
@@ -55,15 +55,15 @@ let's run the full monty::
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________

param1 = 4

def test_compute(param1):
> assert param1 < 4
E assert 4 < 4

test_compute.py:3: AssertionError
- 1 failed, 4 passed in 0.01 seconds
+ 1 failed, 4 passed in 0.02 seconds

As expected when running the full range of ``param1`` values
we'll get an error on the last one.
@@ -127,8 +127,8 @@ objects, they are still using the default pytest representation::

$ py.test test_time.py --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-169, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-159, inifile:

============================= in 0.00 seconds =============================
ERROR: file not found: test_time.py
@@ -171,21 +171,21 @@ this is a fully self-contained example which you can run with::

$ py.test test_scenarios.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-169, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-159, inifile:
collected 4 items

test_scenarios.py ....

- ========================= 4 passed in 0.01 seconds =========================
+ ========================= 4 passed in 0.02 seconds =========================

If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::

$ py.test --collect-only test_scenarios.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-169, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-159, inifile:
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
@@ -249,14 +249,14 @@ Let's first see how it looks like at collection time::

$ py.test test_backends.py --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-169, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-159, inifile:
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
<Function 'test_db_initialized[d2]'>

- ============================= in 0.00 seconds =============================
+ ============================= in 0.01 seconds =============================

And then when we run the test::
@@ -265,7 +265,7 @@ And then when we run the test::
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________

- db = <conftest.DB2 object at 0x2b160a531f98>
+ db = <conftest.DB2 object at 0x7f10a071cb38>

def test_db_initialized(db):
# a dummy test
@@ -319,16 +319,16 @@ argument sets to use for each test function. Let's run it::
$ py.test -q
F..
================================= FAILURES =================================
- ________________________ TestClass.test_equals[1-2] ________________________
+ ________________________ TestClass.test_equals[2-1] ________________________

- self = <test_parametrize.TestClass object at 0x2ab66352a978>, a = 1, b = 2
+ self = <test_parametrize.TestClass object at 0x7f878094f630>, a = 1, b = 2

def test_equals(self, a, b):
> assert a == b
E assert 1 == 2

test_parametrize.py:18: AssertionError
- 1 failed, 2 passed in 0.01 seconds
+ 1 failed, 2 passed in 0.02 seconds

Indirect parametrization with multiple fixtures
--------------------------------------------------------------
@@ -348,7 +348,7 @@ Running it results in some skips if we don't have all the python interpreters in

. $ py.test -rs -q multipython.py
...........................
- 27 passed in 1.70 seconds
+ 27 passed in 4.14 seconds

Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -395,13 +395,13 @@ If you run this with reporting for skips enabled::

$ py.test -rs test_module.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-169, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-159, inifile:
collected 2 items

test_module.py .s
========================= short test summary info ==========================
- SKIP [1] /tmp/doc-exec-169/conftest.py:10: could not import 'opt2'
+ SKIP [1] /tmp/doc-exec-159/conftest.py:10: could not import 'opt2'

=================== 1 passed, 1 skipped in 0.01 seconds ====================
@@ -43,8 +43,8 @@ then the test collection looks like this::

$ py.test --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-170, inifile: setup.cfg
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-160, inifile: setup.cfg
collected 2 items
<Module 'check_myapp.py'>
<Class 'CheckMyApp'>
@@ -89,8 +89,8 @@ You can always peek at the collection tree without running tests like this::

. $ py.test --collect-only pythoncollection.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /home/hpk/p/pytest/doc/en, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
collected 3 items
<Module 'example/pythoncollection.py'>
<Function 'test_function'>
@@ -143,11 +143,11 @@ interpreters and will leave out the setup.py file::

$ py.test --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-170, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-160, inifile: pytest.ini
collected 0 items

- ============================= in 0.00 seconds =============================
+ ============================= in 0.01 seconds =============================

If you run with a Python3 interpreter the moduled added through the conftest.py file will not be considered for test collection.
@@ -13,8 +13,8 @@ get on the terminal - we are working on that):

assertion $ py.test failure_demo.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /home/hpk/p/pytest/doc/en, inifile: pytest.ini
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
collected 42 items

failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@@ -31,7 +31,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________

- self = <failure_demo.TestFailing object at 0x2b186edcf6a0>
+ self = <failure_demo.TestFailing object at 0x7f65f1ca25c0>

def test_simple(self):
def f():
@@ -41,13 +41,13 @@ get on the terminal - we are working on that):

> assert f() == g()
E assert 42 == 43
- E + where 42 = <function TestFailing.test_simple.<locals>.f at 0x2b186edd09d8>()
- E + and 43 = <function TestFailing.test_simple.<locals>.g at 0x2b186edd9950>()
+ E + where 42 = <function TestFailing.test_simple.<locals>.f at 0x7f65f2315510>()
+ E + and 43 = <function TestFailing.test_simple.<locals>.g at 0x7f65f2323510>()

failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________

- self = <failure_demo.TestFailing object at 0x2b186e2942e8>
+ self = <failure_demo.TestFailing object at 0x7f65f1c812b0>

def test_simple_multiline(self):
otherfunc_multi(
@@ -67,19 +67,19 @@ get on the terminal - we are working on that):
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________

- self = <failure_demo.TestFailing object at 0x2b186e270630>
+ self = <failure_demo.TestFailing object at 0x7f65f1c9df98>

def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
- E + where 42 = <function TestFailing.test_not.<locals>.f at 0x2b186edd99d8>()
+ E + where 42 = <function TestFailing.test_not.<locals>.f at 0x7f65f2323598>()

failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186eea7048>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c67710>

def test_eq_text(self):
> assert 'spam' == 'eggs'
@@ -90,7 +90,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ed9aa58>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c97198>

def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@@ -103,7 +103,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ee904a8>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1cc4d30>

def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -116,7 +116,7 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ee8d828>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1cce588>

def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
@@ -133,7 +133,7 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186e28cb00>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c81cc0>

def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
@@ -157,7 +157,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ee879b0>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1ca2cc0>

def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@@ -168,7 +168,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186e28eb70>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c29358>

def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@@ -181,7 +181,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ee78860>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c9b588>

def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@@ -198,7 +198,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186eea6588>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c7fdd8>

def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -215,7 +215,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ecbdc50>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c347f0>

def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@@ -226,7 +226,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186eeb0518>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f2313668>

def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@@ -235,7 +235,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186eeb1860>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1cceb38>

def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@@ -253,7 +253,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ee9e4a8>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c27438>

def test_not_in_text_single(self):
text = 'single foo line'
@@ -266,7 +266,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186eea6908>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c9d4e0>

def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@@ -279,7 +279,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

- self = <failure_demo.TestSpecialisedExplanations object at 0x2b186ee908d0>
+ self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1ce16d8>

def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@@ -298,7 +298,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
- E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0x2b186ee8d4e0>.b
+ E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0x7f65f1c814e0>.b

failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
@@ -308,8 +308,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
- E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0x2b186eea6240>.b
- E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0x2b186eea6240> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
+ E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0x7f65f1c7f7f0>.b
+ E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0x7f65f1c7f7f0> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()

failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
@@ -325,7 +325,7 @@ get on the terminal - we are working on that):
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

- self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0x2b186ed9a4e0>
+ self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0x7f65f1c97dd8>

def _get_b(self):
> raise Exception('Failed to get attrib')
@@ -341,15 +341,15 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
- E + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0x2b186ee78630>.b
- E + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0x2b186ee78630> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
- E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0x2b186ee78358>.b
- E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0x2b186ee78358> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
+ E + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0x7f65f1c9b630>.b
+ E + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0x7f65f1c9b630> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
+ E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0x7f65f1c9b2b0>.b
+ E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0x7f65f1c9b2b0> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()

failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________

- self = <failure_demo.TestRaises object at 0x2b186e270b38>
+ self = <failure_demo.TestRaises object at 0x7f65f1c3eba8>

def test_raises(self):
s = 'qwe'
@@ -361,10 +361,10 @@ get on the terminal - we are working on that):
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'

- <0-codegen /home/hpk/p/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1075>:1: ValueError
+ <0-codegen /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1075>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________

- self = <failure_demo.TestRaises object at 0x2b186eeab2b0>
+ self = <failure_demo.TestRaises object at 0x7f65f1cc4eb8>

def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@@ -373,7 +373,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________

- self = <failure_demo.TestRaises object at 0x2b186ee75358>
+ self = <failure_demo.TestRaises object at 0x7f65f1cceeb8>

def test_raise(self):
> raise ValueError("demo error")
@@ -382,7 +382,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________

- self = <failure_demo.TestRaises object at 0x2b186e285978>
+ self = <failure_demo.TestRaises object at 0x7f65f23136d8>

def test_tupleerror(self):
> a,b = [1]
@@ -391,7 +391,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

- self = <failure_demo.TestRaises object at 0x2b186edcfe48>
+ self = <failure_demo.TestRaises object at 0x7f65f1ca2240>

def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@@ -404,7 +404,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________

- self = <failure_demo.TestRaises object at 0x2b186eeb1898>
+ self = <failure_demo.TestRaises object at 0x7f65f1cb36a0>

def test_some_error(self):
> if namenotexi:
@@ -429,10 +429,10 @@ get on the terminal - we are working on that):
> assert 1 == 0
E assert 1 == 0

- <2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
+ <2-codegen 'abc-123' /tmp/sandbox/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________

- self = <failure_demo.TestMoreErrors object at 0x2b186e270358>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1cb5470>

def test_complex_error(self):
def f():
@@ -456,7 +456,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________

- self = <failure_demo.TestMoreErrors object at 0x2b186ecbdba8>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1c9d940>

def test_z1_unpack_error(self):
l = []
@@ -466,7 +466,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________

- self = <failure_demo.TestMoreErrors object at 0x2b186ee78550>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1c7f208>

def test_z2_type_error(self):
l = 3
@@ -476,19 +476,19 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________

- self = <failure_demo.TestMoreErrors object at 0x2b186ee90978>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1cc40b8>

def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
- E assert <built-in method startswith of str object at 0x2b186eea6500>('456')
- E + where <built-in method startswith of str object at 0x2b186eea6500> = '123'.startswith
+ E assert <built-in method startswith of str object at 0x7f65f1ce14c8>('456')
+ E + where <built-in method startswith of str object at 0x7f65f1ce14c8> = '123'.startswith

failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________

- self = <failure_demo.TestMoreErrors object at 0x2b186eeab358>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1c81b00>

def test_startswith_nested(self):
def f():
@@ -496,15 +496,15 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
- E assert <built-in method startswith of str object at 0x2b186eea6500>('456')
- E + where <built-in method startswith of str object at 0x2b186eea6500> = '123'.startswith
- E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0x2b186eea1510>()
- E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0x2b186eea1268>()
+ E assert <built-in method startswith of str object at 0x7f65f1ce14c8>('456')
+ E + where <built-in method startswith of str object at 0x7f65f1ce14c8> = '123'.startswith
+ E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0x7f65f1c32950>()
+ E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0x7f65f1c32ea0>()

failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________

- self = <failure_demo.TestMoreErrors object at 0x2b186e28c7f0>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1c97240>

def test_global_func(self):
> assert isinstance(globf(42), float)
@@ -514,18 +514,18 @@ get on the terminal - we are working on that):
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________

- self = <failure_demo.TestMoreErrors object at 0x2b186ee759b0>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1ce1080>

def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
- E + where 42 = <failure_demo.TestMoreErrors object at 0x2b186ee759b0>.x
+ E + where 42 = <failure_demo.TestMoreErrors object at 0x7f65f1ce1080>.x

failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________

- self = <failure_demo.TestMoreErrors object at 0x2b186ecbdf60>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1c3e828>

def test_compare(self):
> assert globf(10) < 5
@@ -535,7 +535,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________

- self = <failure_demo.TestMoreErrors object at 0x2b186eeb1e48>
+ self = <failure_demo.TestMoreErrors object at 0x7f65f1c67828>

def test_try_finally(self):
x = 1
@@ -546,7 +546,7 @@ get on the terminal - we are working on that):
failure_demo.py:210: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________

- self = <failure_demo.TestCustomAssertMsg object at 0x2b186ed9a748>
+ self = <failure_demo.TestCustomAssertMsg object at 0x7f65f1c29860>

def test_single_line(self):
class A:
@@ -560,7 +560,7 @@ get on the terminal - we are working on that):
failure_demo.py:221: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________

- self = <failure_demo.TestCustomAssertMsg object at 0x2b186ee8d630>
+ self = <failure_demo.TestCustomAssertMsg object at 0x7f65f1c676a0>

def test_multiline(self):
class A:
@@ -577,7 +577,7 @@ get on the terminal - we are working on that):
failure_demo.py:227: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________

- self = <failure_demo.TestCustomAssertMsg object at 0x2b186e270e48>
+ self = <failure_demo.TestCustomAssertMsg object at 0x7f65f1ccebe0>

def test_custom_repr(self):
class JSON:
@@ -595,4 +595,4 @@ get on the terminal - we are working on that):
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a

failure_demo.py:237: AssertionError
- ======================== 42 failed in 0.22 seconds =========================
+ ======================== 42 failed in 0.35 seconds =========================
@@ -108,8 +108,8 @@ directory with the above conftest.py::

$ py.test
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 0 items

============================= in 0.00 seconds =============================
@@ -153,13 +153,13 @@ and when running it will see a skipped "slow" test::

$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 2 items

test_module.py .s
========================= short test summary info ==========================
- SKIP [1] /tmp/doc-exec-172/conftest.py:9: need --runslow option to run
+ SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run

=================== 1 passed, 1 skipped in 0.01 seconds ====================
@@ -167,8 +167,8 @@ Or run it including the ``slow`` marked test::

$ py.test --runslow
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 2 items

test_module.py ..
@@ -205,13 +205,13 @@ Let's run our little function::
F
================================= FAILURES =================================
______________________________ test_something ______________________________

def test_something():
> checkconfig(42)
E Failed: not configured: 42

test_checkconfig.py:8: Failed
- 1 failed in 0.01 seconds
+ 1 failed in 0.02 seconds

Detect if running from within a pytest run
--------------------------------------------------------------
@@ -259,8 +259,8 @@ which will add the string to the test header accordingly::

$ py.test
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
project deps: mylib-1.1
collected 0 items
@@ -283,8 +283,8 @@ which will add info only when run with "--v"::

$ py.test -v
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
+ rootdir: /tmp/doc-exec-162, inifile:
info1: did you know that ...
did you?
collecting ... collected 0 items
@@ -295,8 +295,8 @@ and nothing when run plainly::

$ py.test
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 0 items

============================= in 0.00 seconds =============================
@@ -328,8 +328,8 @@ Now we can profile which test functions execute the slowest::

$ py.test --durations=3
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 3 items

test_some_are_slow.py ...
@@ -390,8 +390,8 @@ If we run this::

$ py.test -rx
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 4 items

test_step.py .Fx.
@@ -399,7 +399,7 @@ If we run this::
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

- self = <test_step.TestUserHandling object at 0x2b9ab60ccfd0>
+ self = <test_step.TestUserHandling object at 0x7ff60bbb83c8>

def test_modification(self):
> assert 0
@@ -409,7 +409,7 @@ If we run this::
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
- ============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
+ ============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds ===============

We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
@@ -461,8 +461,8 @@ We can run this::

$ py.test
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 7 items

test_step.py .Fx.
@@ -472,17 +472,17 @@ We can run this::

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
- file /tmp/doc-exec-172/b/test_error.py, line 1
+ file /tmp/doc-exec-162/b/test_error.py, line 1
def test_root(db): # no db here, will error out
fixture 'db' not found
- available fixtures: pytestconfig, tmpdir, monkeypatch, capfd, recwarn, capsys
+ available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd
use 'py.test --fixtures [testpath]' for help on them.

- /tmp/doc-exec-172/b/test_error.py:1
+ /tmp/doc-exec-162/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

- self = <test_step.TestUserHandling object at 0x2aec569b87b8>
+ self = <test_step.TestUserHandling object at 0x7f8ecd5b87f0>

def test_modification(self):
> assert 0
@@ -491,25 +491,25 @@ We can run this::
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________

- db = <conftest.DB object at 0x2aec569d1588>
+ db = <conftest.DB object at 0x7f8ecdc11470>

def test_a1(db):
> assert 0, db # to show value
- E AssertionError: <conftest.DB object at 0x2aec569d1588>
+ E AssertionError: <conftest.DB object at 0x7f8ecdc11470>
E assert 0

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

- db = <conftest.DB object at 0x2aec569d1588>
+ db = <conftest.DB object at 0x7f8ecdc11470>

def test_a2(db):
> assert 0, db # to show value
- E AssertionError: <conftest.DB object at 0x2aec569d1588>
+ E AssertionError: <conftest.DB object at 0x7f8ecdc11470>
E assert 0

a/test_db2.py:2: AssertionError
- ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
+ ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ==========

The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -564,8 +564,8 @@ and run them::

$ py.test test_module.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 2 items

test_module.py FF
@@ -573,7 +573,7 @@ and run them::
================================= FAILURES =================================
________________________________ test_fail1 ________________________________

- tmpdir = local('/tmp/pytest-219/test_fail10')
+ tmpdir = local('/tmp/pytest-22/test_fail10')

def test_fail1(tmpdir):
> assert 0
@@ -587,12 +587,12 @@ and run them::
E assert 0

test_module.py:4: AssertionError
- ========================= 2 failed in 0.01 seconds =========================
+ ========================= 2 failed in 0.02 seconds =========================

you will have a "failures" file which contains the failing test ids::

$ cat failures
- test_module.py::test_fail1 (/tmp/pytest-219/test_fail10)
+ test_module.py::test_fail1 (/tmp/pytest-22/test_fail10)
test_module.py::test_fail2

Making test result information available in fixtures
@@ -655,8 +655,8 @@ and run it::

$ py.test -s test_module.py
=========================== test session starts ============================
- platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
- rootdir: /tmp/doc-exec-172, inifile:
+ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
+ rootdir: /tmp/doc-exec-162, inifile:
collected 3 items

test_module.py Esetting up a test failed! test_module.py::test_setup_fails
@@ -689,7 +689,7 @@ and run it::
E assert 0

test_module.py:15: AssertionError
- ==================== 2 failed, 1 error in 0.01 seconds =====================
+ ==================== 2 failed, 1 error in 0.02 seconds =====================

You'll see that the fixture finalizers could use the precise reporting
information.
@@ -69,4 +69,4 @@ If you run this without output capturing::
.test other
.test_unit1 method called
.
- 4 passed in 0.04 seconds
+ 4 passed in 0.03 seconds
@ -75,8 +75,8 @@ marked ``smtp`` fixture function. Running the test looks like this::
|
|||
|
||||
$ py.test test_smtpsimple.py
|
||||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
|
||||
rootdir: /tmp/doc-exec-109, inifile:
|
||||
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
|
||||
rootdir: /tmp/doc-exec-98, inifile:
|
||||
collected 1 items
|
||||
|
||||
test_smtpsimple.py F
|
||||
|
@ -84,7 +84,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
|
|||
================================= FAILURES =================================
|
||||
________________________________ test_ehlo _________________________________
|
||||
|
||||
smtp = <smtplib.SMTP object at 0x2b058f0f53c8>
|
||||
smtp = <smtplib.SMTP object at 0x7f9d45764c88>
|
||||
|
||||
def test_ehlo(smtp):
|
||||
response, msg = smtp.ehlo()
|
||||
|
@ -93,7 +93,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
|
|||
E assert 0
|
||||
|
||||
test_smtpsimple.py:11: AssertionError
|
||||
========================= 1 failed in 0.17 seconds =========================
|
||||
========================= 1 failed in 1.07 seconds =========================
|
||||
|
||||
In the failure traceback we see that the test function was called with a
|
||||
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
|
||||
|
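
For orientation, the fixture behind this traceback is the module-scoped
``smtp`` fixture of the fixture chapter, roughly (host name and scope as used
in the surrounding docs, but a sketch rather than the verbatim source)::

    # conftest.py
    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp():
        # one SMTP connection per test module, shared by its tests
        return smtplib.SMTP("merlinux.eu")
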
@@ -193,8 +193,8 @@ inspect what is going on and can now run the tests::

$ py.test test_module.py
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-109, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-98, inifile:
collected 2 items

test_module.py FF

@@ -202,7 +202,7 @@ inspect what is going on and can now run the tests::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP object at 0x2aec79533a58>
smtp = <smtplib.SMTP object at 0x7fb558b12240>

def test_ehlo(smtp):
response = smtp.ehlo()

@@ -213,7 +213,7 @@ inspect what is going on and can now run the tests::
test_module.py:5: TypeError
________________________________ test_noop _________________________________

smtp = <smtplib.SMTP object at 0x2aec79533a58>
smtp = <smtplib.SMTP object at 0x7fb558b12240>

def test_noop(smtp):
response = smtp.noop()

@@ -222,7 +222,7 @@ inspect what is going on and can now run the tests::
E       assert 0

test_module.py:11: AssertionError
========================= 2 failed in 0.20 seconds =========================
========================= 2 failed in 0.82 seconds =========================

You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp`` object was passed into the two

@@ -270,7 +270,7 @@ Let's execute it::
$ py.test -s -q --tb=no
FFteardown smtp

2 failed in 0.25 seconds
2 failed in 1.44 seconds

We see that the ``smtp`` instance is finalized after the two
tests finished execution.  Note that if we decorated our fixture
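
The ``teardown smtp`` fragment in that ``-s`` output comes from a finalizer
registered inside the fixture. A sketch of that variant, using
``request.addfinalizer`` as the docs of this era did (again illustrative)::

    # conftest.py
    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp(request):
        smtp = smtplib.SMTP("merlinux.eu")
        def fin():
            print("teardown smtp")   # the text interleaved in the output above
            smtp.close()
        request.addfinalizer(fin)    # runs once, after the last test in the module
        return smtp
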
@@ -311,7 +311,7 @@ again, nothing much has changed::

$ py.test -s -q --tb=no
FF
2 failed in 0.17 seconds
2 failed in 0.62 seconds

Let's quickly create another test module that actually sets the
server URL in its module namespace::

@@ -379,7 +379,7 @@ So let's just do another run::
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________

smtp = <smtplib.SMTP object at 0x2b4ce634f828>
smtp = <smtplib.SMTP object at 0x7f4eecf92080>

def test_ehlo(smtp):
response = smtp.ehlo()

@@ -390,7 +390,7 @@ So let's just do another run::
test_module.py:5: TypeError
__________________________ test_noop[merlinux.eu] __________________________

smtp = <smtplib.SMTP object at 0x2b4ce634f828>
smtp = <smtplib.SMTP object at 0x7f4eecf92080>

def test_noop(smtp):
response = smtp.noop()

@@ -401,7 +401,7 @@ So let's just do another run::
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________

smtp = <smtplib.SMTP object at 0x2b4ce634f7f0>
smtp = <smtplib.SMTP object at 0x7f4eecf92048>

def test_ehlo(smtp):
response = smtp.ehlo()

@@ -411,10 +411,10 @@ So let's just do another run::

test_module.py:5: TypeError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0x2b4ce634f828>
finalizing <smtplib.SMTP object at 0x7f4eecf92080>
________________________ test_noop[mail.python.org] ________________________

smtp = <smtplib.SMTP object at 0x2b4ce634f7f0>
smtp = <smtplib.SMTP object at 0x7f4eecf92048>

def test_noop(smtp):
response = smtp.noop()

@@ -423,7 +423,7 @@ So let's just do another run::
E       assert 0

test_module.py:11: AssertionError
4 failed in 6.70 seconds
4 failed in 1.75 seconds

We see that our two test functions each ran twice, against the different
``smtp`` instances.  Note also, that with the ``mail.python.org``
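
The ``[merlinux.eu]`` and ``[mail.python.org]`` IDs come from parametrizing the
fixture itself; each test then runs once per parameter. A sketch of that
version (parameter values as in the surrounding docs; the finalizer explains
the ``finalizing <smtplib.SMTP ...>`` lines captured above)::

    # conftest.py
    import smtplib
    import pytest

    @pytest.fixture(scope="module",
                    params=["merlinux.eu", "mail.python.org"])
    def smtp(request):
        smtp = smtplib.SMTP(request.param)   # one instance per parameter value
        def fin():
            print("finalizing %s" % smtp)
            smtp.close()
        request.addfinalizer(fin)
        return smtp
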
@@ -474,8 +474,8 @@ Running the above tests results in the following test IDs being used::

$ py.test --collect-only
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-109, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-98, inifile:
collected 6 items
<Module 'test_anothersmtp.py'>
<Function 'test_showhelo[merlinux.eu]'>

@@ -486,7 +486,7 @@ Running the above tests results in the following test IDs being used::
<Function 'test_ehlo[mail.python.org]'>
<Function 'test_noop[mail.python.org]'>

============================= in 0.01 seconds =============================
============================= in 0.02 seconds =============================

.. _`interdependent fixtures`:

@@ -520,14 +520,14 @@ Here we declare an ``app`` fixture which receives the previously defined

$ py.test -v test_appsetup.py
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
rootdir: /tmp/doc-exec-109, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
rootdir: /tmp/doc-exec-98, inifile:
collecting ... collected 2 items

test_appsetup.py::test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED

========================= 2 passed in 6.53 seconds =========================
========================= 2 passed in 1.09 seconds =========================

Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers.  There is no
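
The ``app``/``smtp`` wiring that produces those two PASSED lines is roughly the
interdependent-fixtures example (a sketch; class and fixture names follow the
chapter)::

    # test_appsetup.py
    import pytest

    class App:
        def __init__(self, smtp):
            self.smtp = smtp

    @pytest.fixture(scope="module")
    def app(smtp):
        # builds on the parametrized smtp fixture, so one App is
        # created per smtp parameter value
        return App(smtp)

    def test_smtp_exists(app):
        assert app.smtp
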
@@ -585,8 +585,8 @@ Let's run the tests in verbose mode and with looking at the print-output::

$ py.test -v -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0 -- /home/hpk/p/pytest/.tox/regen/bin/python3.4
rootdir: /tmp/doc-exec-109, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
rootdir: /tmp/doc-exec-98, inifile:
collecting ... collected 8 items

test_module.py::test_0[1] test0 1

@@ -608,7 +608,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
test_module.py::test_2[2-mod2] test2 2 mod2
PASSED

========================= 8 passed in 0.01 seconds =========================
========================= 8 passed in 0.02 seconds =========================

You can see that the parametrized module-scoped ``modarg`` resource caused
an ordering of test execution that led to the fewest possible "active" resources. The finalizer for the ``mod1`` parametrized resource was executed
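
The module behind that output combines a function-scoped parametrized fixture
with a module-scoped parametrized one, roughly as follows (a sketch; the names
``otherarg``/``modarg`` and the printed markers follow the chapter's example,
the exact print texts may differ)::

    # test_module.py
    import pytest

    @pytest.fixture(scope="function", params=[1, 2])
    def otherarg(request):
        return request.param

    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        def fin():
            print("fin %s" % param)   # module-scoped finalization
        request.addfinalizer(fin)
        return param

    def test_0(otherarg):
        print("test0", otherarg)

    def test_1(modarg):
        print("test1", modarg)

    def test_2(otherarg, modarg):
        print("test2", otherarg, modarg)

Because ``modarg`` is module-scoped, pytest groups the tests so that all
``mod1`` work finishes (and is finalized) before ``mod2`` is set up, which is
the ordering the paragraph above describes.
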
@@ -27,7 +27,7 @@ Installation options::
To check your installation has installed the correct version::

$ py.test --version
This is pytest version 2.7.0, imported from /home/hpk/p/pytest/.tox/regen/lib/python3.4/site-packages/pytest.py
This is pytest version 2.7.1, imported from /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/pytest.py

If you get an error check out :ref:`installation issues`.

@@ -49,8 +49,8 @@ That's it. You can execute the test function now::

$ py.test
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-112, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-101, inifile:
collected 1 items

test_sample.py F

@@ -98,7 +98,7 @@ Running it with, this time in "quiet" reporting mode::

$ py.test -q test_sysexit.py
.
1 passed in 0.00 seconds
1 passed in 0.01 seconds
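
For context, ``test_sysexit.py`` from the getting-started chapter is
essentially the following (a sketch; the function names follow the chapter's
example)::

    # test_sysexit.py
    import pytest

    def f():
        raise SystemExit(1)

    def test_mytest():
        # passes because the expected exception is raised inside the block
        with pytest.raises(SystemExit):
            f()
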
.. todo:: For further ways to assert exceptions see the `raises`

@@ -128,7 +128,7 @@ run the module by passing its filename::
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass object at 0x2ad6b3a6f278>
self = <test_class.TestClass object at 0x7fbf54cf5668>

def test_two(self):
x = "hello"

@@ -164,7 +164,7 @@ before performing the test function call.  Let's just run it::
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('/tmp/pytest-215/test_needsfiles0')
tmpdir = local('/tmp/pytest-18/test_needsfiles0')

def test_needsfiles(tmpdir):
print (tmpdir)

@@ -173,8 +173,8 @@ before performing the test function call.  Let's just run it::

test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
/tmp/pytest-215/test_needsfiles0
1 failed in 0.01 seconds
/tmp/pytest-18/test_needsfiles0
1 failed in 0.05 seconds

Before the test runs, a unique-per-test-invocation temporary directory
was created.  More info at :ref:`tmpdir handling`.
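
A minimal sketch of how a ``tmpdir``-based test typically looks (illustrative
only; ``tmpdir`` is the per-test ``py.path.local`` directory pytest injects)::

    def test_uses_tmpdir(tmpdir):
        p = tmpdir.join("hello.txt")   # a path object inside the fresh directory
        p.write("content")
        assert p.read() == "content"
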
@@ -53,8 +53,8 @@ them in turn::

$ py.test
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-120, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-109, inifile:
collected 3 items

test_expectation.py ..F

@@ -75,7 +75,7 @@ them in turn::
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.01 seconds ====================
==================== 1 failed, 2 passed in 0.02 seconds ====================

As designed in this example, only one pair of input/output values fails
the simple test function.  And as usual with test function arguments,
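
The parametrized test that these hunks re-run is, roughly, the chapter's
``test_expectation.py`` (a sketch)::

    # test_expectation.py
    import pytest

    @pytest.mark.parametrize("input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),      # fails: eval("6*9") is 54
    ])
    def test_eval(input, expected):
        assert eval(input) == expected
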
@@ -101,13 +101,13 @@ Let's run this::

$ py.test
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-120, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-109, inifile:
collected 3 items

test_expectation.py ..x

=================== 2 passed, 1 xfailed in 0.01 seconds ====================
=================== 2 passed, 1 xfailed in 0.02 seconds ====================

The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test.
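
Marking just that one parameter set as expected-to-fail looks roughly like
this; the wrapped-tuple spelling matches the pytest 2.7 era, while newer
releases use ``pytest.param("6*9", 42, marks=pytest.mark.xfail)`` instead::

    import pytest

    @pytest.mark.parametrize("input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        pytest.mark.xfail(("6*9", 42)),   # expected to fail, reported as 'x'
    ])
    def test_eval(input, expected):
        assert eval(input) == expected
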
@@ -172,8 +172,8 @@ Let's also run with a stringinput that will lead to a failing test::

def test_valid_string(stringinput):
>       assert stringinput.isalpha()
E       assert <built-in method isalpha of str object at 0x2ae1375d9810>()
E        +  where <built-in method isalpha of str object at 0x2ae1375d9810> = '!'.isalpha
E       assert <built-in method isalpha of str object at 0x7f6e2145e768>()
E        +  where <built-in method isalpha of str object at 0x7f6e2145e768> = '!'.isalpha

test_strings.py:3: AssertionError
1 failed in 0.01 seconds

@@ -187,8 +187,8 @@ listlist::
$ py.test -q -rs test_strings.py
s
========================= short test summary info ==========================
SKIP [1] /home/hpk/p/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1185: got empty parameter set, function test_valid_string at /tmp/doc-exec-120/test_strings.py:1
1 skipped in 0.00 seconds
SKIP [1] /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1185: got empty parameter set, function test_valid_string at /tmp/doc-exec-109/test_strings.py:1
1 skipped in 0.01 seconds

For further examples, you might want to look at :ref:`more
parametrization examples <paramexamples>`.

@@ -164,8 +164,8 @@ Running it with the report-on-xfail option gives this output::

example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /home/hpk/p/pytest/doc/en, inifile: pytest.ini
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
collected 7 items

xfail_demo.py xxxxxxx

@@ -183,7 +183,7 @@ Running it with the report-on-xfail option gives this output::
reason: reason
XFAIL xfail_demo.py::test_hello7

======================== 7 xfailed in 0.05 seconds =========================
======================== 7 xfailed in 0.06 seconds =========================

.. _`skip/xfail with parametrize`:

@@ -29,8 +29,8 @@ Running this would result in a passed test except for the last

$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-129, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-118, inifile:
collected 1 items

test_tmpdir.py F

@@ -38,7 +38,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________

tmpdir = local('/tmp/pytest-216/test_create_file0')
tmpdir = local('/tmp/pytest-19/test_create_file0')

def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")

@@ -49,7 +49,7 @@ Running this would result in a passed test except for the last
E       assert 0

test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.01 seconds =========================
========================= 1 failed in 0.04 seconds =========================

.. _`base temporary directory`:

@@ -88,8 +88,8 @@ the ``self.db`` values in the traceback::

$ py.test test_unittest_db.py
=========================== test session starts ============================
platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0
rootdir: /tmp/doc-exec-130, inifile:
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
rootdir: /tmp/doc-exec-119, inifile:
collected 2 items

test_unittest_db.py FF

@@ -102,7 +102,7 @@ the ``self.db`` values in the traceback::
def test_method1(self):
assert hasattr(self, "db")
>       assert 0, self.db # fail for demo purposes
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0x2ab102a4bac8>
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0x7f97382031d0>
E       assert 0

test_unittest_db.py:9: AssertionError

@@ -112,7 +112,7 @@ the ``self.db`` values in the traceback::

def test_method2(self):
>       assert 0, self.db # fail for demo purposes
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0x2ab102a4bac8>
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0x7f97382031d0>
E       assert 0

test_unittest_db.py:12: AssertionError

@@ -163,7 +163,7 @@ Running this test module ...::

$ py.test -q test_unittest_cleandir.py
.
1 passed in 0.04 seconds
1 passed in 0.25 seconds

... gives us one passed test because the ``initdir`` fixture function
was executed ahead of the ``test_method``.
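
The ``initdir`` fixture mentioned there is an autouse fixture applied inside a
``unittest.TestCase`` subclass, roughly (a sketch of the chapter's
``test_unittest_cleandir.py``)::

    # test_unittest_cleandir.py
    import unittest
    import pytest

    class MyTest(unittest.TestCase):

        @pytest.fixture(autouse=True)
        def initdir(self, tmpdir):
            tmpdir.chdir()                                   # switch to the per-test tmpdir
            tmpdir.join("samplefile.ini").write("# testdata")

        def test_method(self):
            with open("samplefile.ini") as f:
                s = f.read()
            assert "testdata" in s
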
@@ -51,7 +51,7 @@ Let's run it with output capturing disabled::
test called
.teardown after yield

1 passed in 0.00 seconds
1 passed in 0.01 seconds

We can also seamlessly use the new syntax with ``with`` statements.
Let's simplify the above ``passwd`` fixture::
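
The simplified fixture announced by that sentence is not part of the hunk; it
reads roughly as follows (a sketch; pytest 2.7 spells the decorator
``pytest.yield_fixture``, later releases accept plain ``pytest.fixture`` with a
``yield``)::

    import pytest

    @pytest.yield_fixture
    def passwd():
        # the file is opened for the test and closed automatically afterwards
        with open("/etc/passwd") as f:
            yield f.readlines()
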