use regendoc normalization and regenerate docs

--HG--
branch : regendoc-upgrade
Ronny Pfannschmidt 2015-06-06 23:30:49 +02:00
parent 645ddc917f
commit 43d27ec7ed
20 changed files with 572 additions and 558 deletions

View File

@@ -1,6 +1,11 @@
 # Set of targets useful for development/release process
 PYTHON = python2.7
 PATH := $(PWD)/.env/bin:$(PATH)
+REGENDOC_ARGS := \
+	--normalize "/={8,} (.*) ={8,}/======= \1 ========/" \
+	--normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \
+	--normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \
+	--normalize "@/tmp/pytest-\d+/@/tmp/pytest-NaN/@"
 # prepare virtual python environment
 .env:
@@ -16,10 +21,11 @@ clean:
 # generate documentation
 docs: develop
-	find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc
+	find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS}
 	cd doc/en; make html
 # upload documentation
 upload-docs: develop
-	find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc --update
-	cd doc/en; make install
+	find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} --update
+	#cd doc/en; make install
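For readers unfamiliar with regendoc: each ``--normalize`` argument above is a sed-style ``<sep><regex><sep><replacement><sep>`` substitution applied to captured output before it is written back, so volatile details (rule widths, timings, temporary paths) regenerate deterministically and later doc diffs only show real behavior changes. A minimal Python sketch of the idea (the helper name is ours, not regendoc's API)::

    # sketch: applying sed-style normalization rules with re.sub()
    import re

    def normalize(line, rules):
        for rule in rules:
            sep = rule[0]                      # first char is the delimiter, e.g. "/" or "@"
            _, pattern, repl, _ = rule.split(sep)
            line = re.sub(pattern, repl, line)
        return line

    rules = [
        r"/={8,} (.*) ={8,}/======= \1 ========/",
        r"/in \d+.\d+ seconds/in 0.12 seconds/",
    ]
    print(normalize("=========== test session starts in 4.71 seconds ============", rules))
    # prints: ======= test session starts in 0.12 seconds ========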

View File

@@ -25,15 +25,15 @@ to assert that your function returns a certain value. If this assertion fails
 you will see the return value of the function call::
 $ py.test test_assert1.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-87, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items
 test_assert1.py F
-================================= FAILURES =================================
-______________________________ test_function _______________________________
+======= FAILURES ========
+_______ test_function ________
 def test_function():
 > assert f() == 4
@@ -41,7 +41,7 @@ you will see the return value of the function call::
 E + where 3 = f()
 test_assert1.py:5: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+======= 1 failed in 0.12 seconds ========
 ``pytest`` has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary
@@ -135,15 +135,15 @@ when it encounters comparisons. For example::
 if you run this module::
 $ py.test test_assert2.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-87, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items
 test_assert2.py F
-================================= FAILURES =================================
-___________________________ test_set_comparison ____________________________
+======= FAILURES ========
+_______ test_set_comparison ________
 def test_set_comparison():
 set1 = set("1308")
@@ -157,7 +157,7 @@ if you run this module::
 E Use -v to get the full diff
 test_assert2.py:5: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+======= 1 failed in 0.12 seconds ========
 Special comparisons are done for a number of cases:
@@ -202,8 +202,8 @@ the conftest file::
 $ py.test -q test_foocompare.py
 F
-================================= FAILURES =================================
-_______________________________ test_compare _______________________________
+======= FAILURES ========
+_______ test_compare ________
 def test_compare():
 f1 = Foo(1)
@@ -213,7 +213,7 @@ the conftest file::
 E vals: 1 != 2
 test_foocompare.py:8: AssertionError
-1 failed in 0.01 seconds
+1 failed in 0.12 seconds
 .. _assert-details:
 .. _`assert introspection`:
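The ``conftest.py`` hook that produces the custom ``vals: 1 != 2`` line above is outside this hunk; it looks roughly like the following (a sketch assuming the docs' ``Foo`` class with a ``val`` attribute)::

    # conftest.py -- sketch of the hook behind the custom failure message above
    from test_foocompare import Foo

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return ["Comparing Foo instances:",
                    "   vals: %s != %s" % (left.val, right.val)]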

View File

@@ -115,4 +115,4 @@ You can ask for available builtin or project-custom
 directory. The returned object is a `py.path.local`_
 path object.
-in 0.00 seconds
+in 0.12 seconds

View File

@@ -63,24 +63,24 @@ and running this module will show you precisely the output
 of the failing function and hide the other one::
 $ py.test
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-90, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items
 test_module.py .F
-================================= FAILURES =================================
-________________________________ test_func2 ________________________________
+======= FAILURES ========
+_______ test_func2 ________
 def test_func2():
 > assert False
 E assert False
 test_module.py:9: AssertionError
--------------------------- Captured stdout setup ---------------------------
-setting up <function test_func2 at 0x7fa678d6eb70>
-==================== 1 failed, 1 passed in 0.01 seconds ====================
+---------------------------- Captured stdout setup -----------------------------
+setting up <function test_func2 at 0xdeadbeef>
+======= 1 failed, 1 passed in 0.12 seconds ========
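The module driving the run above is not shown in this hunk; a minimal reconstruction from the output (names assumed) looks like::

    # test_module.py -- sketch: setup output is captured and shown only for the failure
    def setup_function(function):
        print("setting up %s" % function)

    def test_func1():
        assert True

    def test_func2():
        assert False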
 Accessing captured output from a test function
 ---------------------------------------------------

View File

@@ -43,14 +43,14 @@ and another like this::
 then you can just invoke ``py.test`` without command line options::
 $ py.test
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-96, inifile: pytest.ini
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 1 items
 mymodule.py .
-========================= 1 passed in 0.06 seconds =========================
+======= 1 passed in 0.12 seconds ========
 It is possible to use fixtures using the ``getfixture`` helper::
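The literal block that follows ``::`` in the docs lies outside this hunk; a sketch of such a doctest, requesting the builtin ``tmpdir`` fixture (assumed names, not part of the diff)::

    # content of example.rst (sketch)
    >>> tmp = getfixture('tmpdir')
    >>> tmp.check(dir=1)
    True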

View File

@@ -30,30 +30,30 @@ You can "mark" a test function with custom metadata like this::
 You can then restrict a test run to only run tests marked with ``webtest``::
 $ py.test -v -m webtest
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 4 items
 test_server.py::test_send_http PASSED
-=================== 3 tests deselected by "-m 'webtest'" ===================
-================== 1 passed, 3 deselected in 0.01 seconds ==================
+======= 3 tests deselected by "-m 'webtest'" ========
+======= 1 passed, 3 deselected in 0.12 seconds ========
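The module exercised by these runs is not included in the hunk; reconstructed from the node IDs in the output, it looks roughly like::

    # test_server.py -- sketch reconstructed from the node IDs above
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass:
        def test_method(self):
            pass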
 Or the inverse, running all tests except the webtest ones::
 $ py.test -v -m "not webtest"
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 4 items
 test_server.py::test_something_quick PASSED
 test_server.py::test_another PASSED
 test_server.py::TestClass::test_method PASSED
-================= 1 tests deselected by "-m 'not webtest'" =================
-================== 3 passed, 1 deselected in 0.01 seconds ==================
+======= 1 tests deselected by "-m 'not webtest'" ========
+======= 3 passed, 1 deselected in 0.12 seconds ========
 Selecting tests based on their node ID
 --------------------------------------
@@ -63,39 +63,39 @@ arguments to select only specified tests. This makes it easy to select
 tests based on their module, class, method, or function name::
 $ py.test -v test_server.py::TestClass::test_method
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 5 items
 test_server.py::TestClass::test_method PASSED
-========================= 1 passed in 0.01 seconds =========================
+======= 1 passed in 0.12 seconds ========
 You can also select on the class::
 $ py.test -v test_server.py::TestClass
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 4 items
 test_server.py::TestClass::test_method PASSED
-========================= 1 passed in 0.01 seconds =========================
+======= 1 passed in 0.12 seconds ========
 Or select multiple nodes::
 $ py.test -v test_server.py::TestClass test_server.py::test_send_http
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 8 items
 test_server.py::TestClass::test_method PASSED
 test_server.py::test_send_http PASSED
-========================= 2 passed in 0.01 seconds =========================
+======= 2 passed in 0.12 seconds ========
 .. _node-id:
@@ -124,44 +124,44 @@ exact match on markers that ``-m`` provides. This makes it easy to
 select tests based on their names::
 $ py.test -v -k http # running with the above defined example module
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 4 items
 test_server.py::test_send_http PASSED
-====================== 3 tests deselected by '-khttp' ======================
-================== 1 passed, 3 deselected in 0.01 seconds ==================
+======= 3 tests deselected by '-khttp' ========
+======= 1 passed, 3 deselected in 0.12 seconds ========
 And you can also run all tests except the ones that match the keyword::
 $ py.test -k "not send_http" -v
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 4 items
 test_server.py::test_something_quick PASSED
 test_server.py::test_another PASSED
 test_server.py::TestClass::test_method PASSED
-================= 1 tests deselected by '-knot send_http' ==================
-================== 3 passed, 1 deselected in 0.01 seconds ==================
+======= 1 tests deselected by '-knot send_http' ========
+======= 3 passed, 1 deselected in 0.12 seconds ========
 Or to select "http" and "quick" tests::
 $ py.test -k "http or quick" -v
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 4 items
 test_server.py::test_send_http PASSED
 test_server.py::test_something_quick PASSED
-================= 2 tests deselected by '-khttp or quick' ==================
-================== 2 passed, 2 deselected in 0.01 seconds ==================
+======= 2 tests deselected by '-khttp or quick' ========
+======= 2 passed, 2 deselected in 0.12 seconds ========
 .. note::
@@ -201,9 +201,9 @@ You can ask which markers exist for your test suite - the list includes our just
 @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
-@pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
-@pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
+@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
+@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
 For an example on how to add and work with markers from a plugin, see
@@ -341,26 +341,26 @@ and an example invocations specifying a different environment than what
 the test needs::
 $ py.test -E stage2
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items
 test_someenv.py s
-======================== 1 skipped in 0.01 seconds =========================
+======= 1 skipped in 0.12 seconds ========
 and here is one that specifies exactly the environment needed::
 $ py.test -E stage1
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items
 test_someenv.py .
-========================= 1 passed in 0.01 seconds =========================
+======= 1 passed in 0.12 seconds ========
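The plugin code behind the ``-E`` option and the ``env`` marker lives in a ``conftest.py`` that this hunk does not show; roughly (a sketch using the era's ``get_marker`` API)::

    # conftest.py -- sketch of the -E option and the "env" marker
    import pytest

    def pytest_addoption(parser):
        parser.addoption("-E", action="store", metavar="NAME",
                         help="only run tests matching the environment NAME.")

    def pytest_configure(config):
        # register an additional marker
        config.addinivalue_line("markers",
                                "env(name): mark test to run only on named environment")

    def pytest_runtest_setup(item):
        envmarker = item.get_marker("env")
        if envmarker is not None:
            envname = envmarker.args[0]
            if envname != item.config.getoption("-E"):
                pytest.skip("test requires env %r" % envname)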
 The ``--markers`` option always gives you a list of available markers::
@@ -375,9 +375,9 @@ The ``--markers`` option always gives you a list of available markers::
 @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
-@pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
-@pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
+@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
+@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
 Reading markers which were set from multiple places
@@ -420,7 +420,7 @@ Let's run this without capturing output and see what we get::
 glob args=('class',) kwargs={'x': 2}
 glob args=('module',) kwargs={'x': 1}
 .
-1 passed in 0.01 seconds
+1 passed in 0.12 seconds
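The ``glob args=... kwargs=...`` lines come from a ``pytest_runtest_setup`` hook reading the ``glob`` marker applied at function, class and module level; roughly (a sketch using the era's iterable ``MarkInfo``)::

    # conftest.py -- sketch: print every "glob" marker applied to the item
    import sys

    def pytest_runtest_setup(item):
        g = item.get_marker("glob")
        if g is not None:
            for info in g:
                print("glob args=%s kwargs=%s" % (info.args, info.kwargs))
                sys.stdout.flush()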
 marking platform specific tests with pytest
 --------------------------------------------------------------
@@ -472,29 +472,29 @@ Let's do a little test file to show how this looks like::
 then you will see two test skipped and two executed tests as expected::
 $ py.test -rs # this option reports skip reasons
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items
-test_plat.py sss.
-========================= short test summary info ==========================
-SKIP [3] /tmp/doc-exec-157/conftest.py:12: cannot run on platform linux
-=================== 1 passed, 3 skipped in 0.01 seconds ====================
+test_plat.py s.s.
+======= short test summary info ========
+SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux2
+======= 2 passed, 2 skipped in 0.12 seconds ========
 Note that if you specify a platform via the marker-command line option like this::
 $ py.test -m linux2
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items
-test_plat.py s
-=================== 3 tests deselected by "-m 'linux2'" ====================
-================= 1 skipped, 3 deselected in 0.01 seconds ==================
+test_plat.py .
+======= 3 tests deselected by "-m 'linux2'" ========
+======= 1 passed, 3 deselected in 0.12 seconds ========
 then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
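The skipping logic lives in the ``conftest.py`` referenced by the SKIP summary line; roughly (a sketch; note the regenerated run above was on ``linux2``, i.e. Python 2)::

    # conftest.py -- sketch: skip tests marked for a platform other than the current one
    import sys
    import pytest

    ALL = set("darwin linux2 win32".split())

    def pytest_runtest_setup(item):
        plat = sys.platform
        if item.get_marker(plat) is None and ALL.intersection(item.keywords):
            pytest.skip("cannot run on platform %s" % plat)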
@@ -538,47 +538,47 @@ We want to dynamically define two markers and can do it in a
 We can now use the ``-m option`` to select one set::
 $ py.test -m interface --tb=short
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items
 test_module.py FF
-================================= FAILURES =================================
-__________________________ test_interface_simple ___________________________
+======= FAILURES ========
+_______ test_interface_simple ________
 test_module.py:3: in test_interface_simple
 assert 0
 E assert 0
-__________________________ test_interface_complex __________________________
+_______ test_interface_complex ________
 test_module.py:6: in test_interface_complex
 assert 0
 E assert 0
-================== 2 tests deselected by "-m 'interface'" ==================
-================== 2 failed, 2 deselected in 0.02 seconds ==================
+======= 2 tests deselected by "-m 'interface'" ========
+======= 2 failed, 2 deselected in 0.12 seconds ========
 or to select both "event" and "interface" tests::
 $ py.test -m "interface or event" --tb=short
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-157, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items
 test_module.py FFF
-================================= FAILURES =================================
-__________________________ test_interface_simple ___________________________
+======= FAILURES ========
+_______ test_interface_simple ________
 test_module.py:3: in test_interface_simple
 assert 0
 E assert 0
-__________________________ test_interface_complex __________________________
+_______ test_interface_complex ________
 test_module.py:6: in test_interface_complex
 assert 0
 E assert 0
-____________________________ test_event_simple _____________________________
+_______ test_event_simple ________
 test_module.py:9: in test_event_simple
 assert 0
 E assert 0
-============= 1 tests deselected by "-m 'interface or event'" ==============
-================== 3 failed, 1 deselected in 0.02 seconds ==================
+======= 1 tests deselected by "-m 'interface or event'" ========
+======= 3 failed, 1 deselected in 0.12 seconds ========
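The two markers are attached during collection; the ``conftest.py`` plugin doing it looks roughly like this (a sketch)::

    # conftest.py -- sketch: attach markers based on the test name at collection time
    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            if "interface" in item.nodeid:
                item.add_marker(pytest.mark.interface)
            elif "event" in item.nodeid:
                item.add_marker(pytest.mark.event)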

View File

@@ -26,19 +26,19 @@ and if you installed `PyYAML`_ or a compatible YAML-parser you can
 now execute the test specification::
 nonpython $ py.test test_simple.yml
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $PWD/doc/en, inifile: pytest.ini
 collected 2 items
 test_simple.yml .F
-================================= FAILURES =================================
-______________________________ usecase: hello ______________________________
+======= FAILURES ========
+_______ usecase: hello ________
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-==================== 1 failed, 1 passed in 0.19 seconds ====================
+======= 1 failed, 1 passed in 0.12 seconds ========
 You get one dot for the passing ``sub1: sub1`` check and one failure.
 Obviously in the above ``conftest.py`` you'll want to implement a more
@@ -56,31 +56,31 @@ your own domain specific testing language this way.
 consulted when reporting in ``verbose`` mode::
 nonpython $ py.test -v
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+rootdir: $PWD/doc/en, inifile: pytest.ini
 collecting ... collected 2 items
 test_simple.yml::ok PASSED
 test_simple.yml::hello FAILED
-================================= FAILURES =================================
-______________________________ usecase: hello ______________________________
+======= FAILURES ========
+_______ usecase: hello ________
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-==================== 1 failed, 1 passed in 0.05 seconds ====================
+======= 1 failed, 1 passed in 0.12 seconds ========
 While developing your custom test collection and execution it's also
 interesting to just look at the collection tree::
 nonpython $ py.test --collect-only
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $PWD/doc/en, inifile: pytest.ini
 collected 2 items
 <YamlFile 'example/nonpython/test_simple.yml'>
 <YamlItem 'ok'>
 <YamlItem 'hello'>
-============================= in 0.04 seconds =============================
+======= in 0.12 seconds ========
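The ``conftest.py`` collection plugin that these runs exercise is outside the hunk; it looks roughly like this (a sketch of the era's collection API, matching the ``usecase:`` report lines above)::

    # conftest.py -- sketch of the YAML collection plugin driving the runs above
    import pytest

    def pytest_collect_file(parent, path):
        if path.ext == ".yml" and path.basename.startswith("test"):
            return YamlFile(path, parent)

    class YamlFile(pytest.File):
        def collect(self):
            import yaml  # this plugin needs PyYAML or a compatible parser
            raw = yaml.safe_load(self.fspath.open())
            for name, spec in sorted(raw.items()):
                yield YamlItem(name, self, spec)

    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super(YamlItem, self).__init__(name, parent)
            self.spec = spec

        def runtest(self):
            for name, value in sorted(self.spec.items()):
                if name != value:  # dumb example check
                    raise YamlException(self, name, value)

        def repr_failure(self, excinfo):
            """called when self.runtest() raises an exception."""
            if isinstance(excinfo.value, YamlException):
                return "\n".join([
                    "usecase execution failed",
                    "   spec failed: %r: %r" % excinfo.value.args[1:3],
                    "   no further details known at this point.",
                ])

        def reportinfo(self):
            return self.fspath, 0, "usecase: %s" % self.name

    class YamlException(Exception):
        """custom exception for error reporting."""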

View File

@@ -46,15 +46,15 @@ This means that we only run 2 tests if we do not pass ``--all``::
 $ py.test -q test_compute.py
 ..
-2 passed in 0.01 seconds
+2 passed in 0.12 seconds
 We run only two computations, so we see two dots.
 let's run the full monty::
 $ py.test -q --all
 ....F
-================================= FAILURES =================================
-_____________________________ test_compute[4] ______________________________
+======= FAILURES ========
+_______ test_compute[4] ________
 param1 = 4
@@ -63,7 +63,7 @@ let's run the full monty::
 E assert 4 < 4
 test_compute.py:3: AssertionError
-1 failed, 4 passed in 0.02 seconds
+1 failed, 4 passed in 0.12 seconds
 As expected when running the full range of ``param1`` values
 we'll get an error on the last one.
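The option and parametrization hook behind these runs sit in a ``conftest.py`` outside the hunk; roughly (a sketch)::

    # conftest.py -- sketch: add the --all option and widen the parametrized range
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
                         help="run all combinations")

    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            end = 5 if metafunc.config.option.all else 2
            metafunc.parametrize("param1", range(end))

    # test_compute.py would then simply assert on the parametrized value:
    #     def test_compute(param1):
    #         assert param1 < 4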
@@ -126,11 +126,11 @@ objects, they are still using the default pytest representation::
 $ py.test test_time.py --collect-only
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-159, inifile:
-============================= in 0.00 seconds =============================
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
+======= in 0.12 seconds ========
 ERROR: file not found: test_time.py
 A quick port of "testscenarios"
@@ -170,22 +170,22 @@ only have to work a bit to construct the correct arguments for pytest's
 this is a fully self-contained example which you can run with::
 $ py.test test_scenarios.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-159, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items
 test_scenarios.py ....
-========================= 4 passed in 0.02 seconds =========================
+======= 4 passed in 0.12 seconds ========
 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
 $ py.test --collect-only test_scenarios.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-159, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items
 <Module 'test_scenarios.py'>
 <Class 'TestSampleWithScenarios'>
@@ -195,7 +195,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
 <Function 'test_demo1[advanced]'>
 <Function 'test_demo2[advanced]'>
-============================= in 0.01 seconds =============================
+======= in 0.12 seconds ========
 Note that we told ``metafunc.parametrize()`` that your scenario values
 should be considered class-scoped. With pytest-2.3 this leads to a
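The self-contained module being run above is not included in the hunk; it looks roughly like this (a sketch of the testscenarios-style translation into ``metafunc.parametrize``)::

    # test_scenarios.py -- sketch of the scenario-to-parametrize translation
    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

    scenario1 = ('basic', {'attribute': 'value'})
    scenario2 = ('advanced', {'attribute': 'value2'})

    class TestSampleWithScenarios:
        scenarios = [scenario1, scenario2]

        def test_demo1(self, attribute):
            assert isinstance(attribute, str)

        def test_demo2(self, attribute):
            assert isinstance(attribute, str)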
@@ -248,24 +248,24 @@ creates a database object for the actual test invocations::
 Let's first see how it looks like at collection time::
 $ py.test test_backends.py --collect-only
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-159, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items
 <Module 'test_backends.py'>
 <Function 'test_db_initialized[d1]'>
 <Function 'test_db_initialized[d2]'>
-============================= in 0.01 seconds =============================
+======= in 0.12 seconds ========
 And then when we run the test::
 $ py.test -q test_backends.py
 .F
-================================= FAILURES =================================
-_________________________ test_db_initialized[d2] __________________________
-db = <conftest.DB2 object at 0x7f10a071cb38>
+======= FAILURES ========
+_______ test_db_initialized[d2] ________
+db = <conftest.DB2 instance at 0xdeadbeef>
 def test_db_initialized(db):
 # a dummy test
@@ -274,7 +274,7 @@ And then when we run the test::
 E Failed: deliberately failing for demo purposes
 test_backends.py:6: Failed
-1 failed, 1 passed in 0.01 seconds
+1 failed, 1 passed in 0.12 seconds
 The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
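The ``conftest.py`` combining ``pytest_generate_tests`` with an indirectly parametrized fixture is outside the hunk; roughly (a sketch; old-style classes match the ``instance at 0xdeadbeef`` repr above)::

    # conftest.py -- sketch: parametrize the db fixture indirectly
    import pytest

    def pytest_generate_tests(metafunc):
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)

    class DB1:
        "one database object"

    class DB2:
        "alternative database object"

    @pytest.fixture
    def db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")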
@@ -318,17 +318,17 @@ argument sets to use for each test function. Let's run it::
 $ py.test -q
 F..
-================================= FAILURES =================================
-________________________ TestClass.test_equals[2-1] ________________________
-self = <test_parametrize.TestClass object at 0x7f878094f630>, a = 1, b = 2
+======= FAILURES ========
+_______ TestClass.test_equals[1-2] ________
+self = <test_parametrize.TestClass instance at 0xdeadbeef>, a = 1, b = 2
 def test_equals(self, a, b):
 > assert a == b
 E assert 1 == 2
 test_parametrize.py:18: AssertionError
-1 failed, 2 passed in 0.02 seconds
+1 failed, 2 passed in 0.12 seconds
 Indirect parametrization with multiple fixtures
 --------------------------------------------------------------
@@ -347,8 +347,11 @@ is to be run with different sets of arguments for its three arguments:
 Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
 . $ py.test -rs -q multipython.py
-...........................
-27 passed in 4.14 seconds
+ssssssssssss...ssssssssssss
+======= short test summary info ========
+SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python3.3' not found
+SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python2.6' not found
+3 passed, 24 skipped in 0.12 seconds
 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -394,16 +397,16 @@ And finally a little test module::
 If you run this with reporting for skips enabled::
 $ py.test -rs test_module.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-159, inifile:
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items
 test_module.py .s
-========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-159/conftest.py:10: could not import 'opt2'
-=================== 1 passed, 1 skipped in 0.01 seconds ====================
+======= short test summary info ========
+SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2'
+======= 1 passed, 1 skipped in 0.12 seconds ========
 You'll see that we don't have a ``opt2`` module and thus the second test run
 of our ``test_func1`` was skipped. A few notes:
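The fixture producing the ``could not import 'opt2'`` skip is in the ``conftest.py`` named by the SKIP line; roughly (a sketch built on ``pytest.importorskip``)::

    # conftest.py -- sketch: parametrize over optional implementation modules
    import pytest

    @pytest.fixture(scope="session", params=["opt1", "opt2"])
    def basemod(request):
        # skips (rather than fails) the parametrized run if the module is missing
        return pytest.importorskip(request.param)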

View File

@@ -42,9 +42,9 @@ that match ``*_check``. For example, if we have::
 then the test collection looks like this::
 $ py.test --collect-only
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-160, inifile: setup.cfg
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile: setup.cfg
 collected 2 items
 <Module 'check_myapp.py'>
 <Class 'CheckMyApp'>
@@ -52,7 +52,7 @@ then the test collection looks like this::
 <Function 'simple_check'>
 <Function 'complex_check'>
-============================= in 0.01 seconds =============================
+======= in 0.12 seconds ========
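The collection above is driven by ini options in the ``setup.cfg`` that the ``inifile:`` line references; roughly (a sketch)::

    # setup.cfg (sketch; the [pytest] section can also live in pytest.ini)
    [pytest]
    python_files = check_*.py
    python_classes = Check
    python_functions = *_check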
 .. note::
@@ -88,9 +88,9 @@ Finding out what is collected
 You can always peek at the collection tree without running tests like this::
 . $ py.test --collect-only pythoncollection.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $PWD/doc/en, inifile: pytest.ini
 collected 3 items
 <Module 'example/pythoncollection.py'>
 <Function 'test_function'>
@@ -99,7 +99,7 @@ You can always peek at the collection tree without running tests like this::
 <Function 'test_method'>
 <Function 'test_anothermethod'>
-============================= in 0.01 seconds =============================
+======= in 0.12 seconds ========
 customizing test collection to find all .py files
 ---------------------------------------------------------
@@ -142,12 +142,14 @@ then a pytest run on python2 will find the one test when run with a python2
 interpreters and will leave out the setup.py file::
 $ py.test --collect-only
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/doc-exec-160, inifile: pytest.ini
-collected 0 items
-============================= in 0.01 seconds =============================
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
+collected 1 items
+<Module 'pkg/module_py2.py'>
+<Function 'test_only_on_python2'>
+======= in 0.12 seconds ========
 If you run with a Python3 interpreter the moduled added through the conftest.py file will not be considered for test collection.
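The version-dependent ignore list behind this behavior lives in a ``conftest.py`` outside the hunk; roughly (a sketch)::

    # conftest.py -- sketch: ignore setup.py always, and the py2-only module on py3
    import sys

    collect_ignore = ["setup.py"]
    if sys.version_info[0] > 2:
        collect_ignore.append("pkg/module_py2.py")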

View File

@@ -12,15 +12,15 @@ get on the terminal - we are working on that):
 .. code-block:: python
 assertion $ py.test failure_demo.py
-=========================== test session starts ============================
-platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+======= test session starts ========
+platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+rootdir: $PWD/doc/en, inifile: pytest.ini
 collected 42 items
 failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
-================================= FAILURES =================================
-____________________________ test_generative[0] ____________________________
+======= FAILURES ========
+_______ test_generative[0] ________
 param1 = 3, param2 = 6
@@ -29,9 +29,9 @@ get on the terminal - we are working on that):
 E assert (3 * 2) < 6
 failure_demo.py:15: AssertionError
-_________________________ TestFailing.test_simple __________________________
-self = <failure_demo.TestFailing object at 0x7f65f1ca25c0>
+_______ TestFailing.test_simple ________
+self = <failure_demo.TestFailing object at 0xdeadbeef>
 def test_simple(self):
 def f():
@@ -41,13 +41,13 @@ get on the terminal - we are working on that):
 > assert f() == g()
 E assert 42 == 43
-E + where 42 = <function TestFailing.test_simple.<locals>.f at 0x7f65f2315510>()
-E + and 43 = <function TestFailing.test_simple.<locals>.g at 0x7f65f2323510>()
+E + where 42 = <function f at 0xdeadbeef>()
+E + and 43 = <function g at 0xdeadbeef>()
 failure_demo.py:28: AssertionError
-____________________ TestFailing.test_simple_multiline _____________________
-self = <failure_demo.TestFailing object at 0x7f65f1c812b0>
+_______ TestFailing.test_simple_multiline ________
+self = <failure_demo.TestFailing object at 0xdeadbeef>
 def test_simple_multiline(self):
 otherfunc_multi(
@@ -55,7 +55,7 @@ get on the terminal - we are working on that):
 > 6*9)
 failure_demo.py:33:
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 a = 42, b = 54
@@ -65,21 +65,21 @@ get on the terminal - we are working on that):
 E assert 42 == 54
 failure_demo.py:11: AssertionError
-___________________________ TestFailing.test_not ___________________________
-self = <failure_demo.TestFailing object at 0x7f65f1c9df98>
+_______ TestFailing.test_not ________
+self = <failure_demo.TestFailing object at 0xdeadbeef>
 def test_not(self):
 def f():
 return 42
 > assert not f()
 E assert not 42
-E + where 42 = <function TestFailing.test_not.<locals>.f at 0x7f65f2323598>()
+E + where 42 = <function f at 0xdeadbeef>()
 failure_demo.py:38: AssertionError
-_________________ TestSpecialisedExplanations.test_eq_text _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c67710>
+_______ TestSpecialisedExplanations.test_eq_text ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_text(self):
 > assert 'spam' == 'eggs'
@@ -88,9 +88,9 @@ get on the terminal - we are working on that):
 E + eggs
 failure_demo.py:42: AssertionError
-_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c97198>
+_______ TestSpecialisedExplanations.test_eq_similar_text ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_similar_text(self):
 > assert 'foo 1 bar' == 'foo 2 bar'
@@ -101,9 +101,9 @@ get on the terminal - we are working on that):
 E ? ^
 failure_demo.py:45: AssertionError
-____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1cc4d30>
+_______ TestSpecialisedExplanations.test_eq_multiline_text ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_multiline_text(self):
 > assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -114,9 +114,9 @@ get on the terminal - we are working on that):
 E bar
 failure_demo.py:48: AssertionError
-______________ TestSpecialisedExplanations.test_eq_long_text _______________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1cce588>
+_______ TestSpecialisedExplanations.test_eq_long_text ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_long_text(self):
 a = '1'*100 + 'a' + '2'*100
@@ -131,9 +131,9 @@ get on the terminal - we are working on that):
 E ? ^
 failure_demo.py:53: AssertionError
-_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c81cc0>
+_______ TestSpecialisedExplanations.test_eq_long_text_multiline ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_long_text_multiline(self):
 a = '1\n'*100 + 'a' + '2\n'*100
@@ -155,9 +155,9 @@ get on the terminal - we are working on that):
 E 2
 failure_demo.py:58: AssertionError
-_________________ TestSpecialisedExplanations.test_eq_list _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1ca2cc0>
+_______ TestSpecialisedExplanations.test_eq_list ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_list(self):
 > assert [0, 1, 2] == [0, 1, 3]
@@ -166,9 +166,9 @@ get on the terminal - we are working on that):
 E Use -v to get the full diff
 failure_demo.py:61: AssertionError
-______________ TestSpecialisedExplanations.test_eq_list_long _______________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c29358>
+_______ TestSpecialisedExplanations.test_eq_list_long ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_list_long(self):
 a = [0]*100 + [1] + [3]*100
@@ -179,9 +179,9 @@ get on the terminal - we are working on that):
 E Use -v to get the full diff
 failure_demo.py:66: AssertionError
-_________________ TestSpecialisedExplanations.test_eq_dict _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c9b588>
+_______ TestSpecialisedExplanations.test_eq_dict ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_dict(self):
 > assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@@ -196,9 +196,9 @@ get on the terminal - we are working on that):
 E Use -v to get the full diff
 failure_demo.py:69: AssertionError
-_________________ TestSpecialisedExplanations.test_eq_set __________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c7fdd8>
+_______ TestSpecialisedExplanations.test_eq_set ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_set(self):
 > assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -213,9 +213,9 @@ get on the terminal - we are working on that):
 E Use -v to get the full diff
 failure_demo.py:72: AssertionError
-_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c347f0>
+_______ TestSpecialisedExplanations.test_eq_longer_list ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_eq_longer_list(self):
 > assert [1,2] == [1,2,3]
@@ -224,18 +224,18 @@ get on the terminal - we are working on that):
 E Use -v to get the full diff
 failure_demo.py:75: AssertionError
-_________________ TestSpecialisedExplanations.test_in_list _________________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f2313668>
+_______ TestSpecialisedExplanations.test_in_list ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_in_list(self):
 > assert 1 in [0, 2, 3, 4, 5]
 E assert 1 in [0, 2, 3, 4, 5]
 failure_demo.py:78: AssertionError
-__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1cceb38>
+_______ TestSpecialisedExplanations.test_not_in_text_multiline ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_not_in_text_multiline(self):
 text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@@ -251,9 +251,9 @@ get on the terminal - we are working on that):
 E tail
 failure_demo.py:82: AssertionError
-___________ TestSpecialisedExplanations.test_not_in_text_single ____________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c27438>
+_______ TestSpecialisedExplanations.test_not_in_text_single ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_not_in_text_single(self):
 text = 'single foo line'
@@ -264,9 +264,9 @@ get on the terminal - we are working on that):
 E ? +++
 failure_demo.py:86: AssertionError
-_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1c9d4e0>
+_______ TestSpecialisedExplanations.test_not_in_text_single_long ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_not_in_text_single_long(self):
 text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@@ -277,9 +277,9 @@ get on the terminal - we are working on that):
 E ? +++
 failure_demo.py:90: AssertionError
-______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
-self = <failure_demo.TestSpecialisedExplanations object at 0x7f65f1ce16d8>
+_______ TestSpecialisedExplanations.test_not_in_text_single_long_term ________
+self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
 def test_not_in_text_single_long_term(self):
 text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@@ -290,7 +290,7 @@ get on the terminal - we are working on that):
 E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 failure_demo.py:94: AssertionError
-______________________________ test_attribute ______________________________
+_______ test_attribute ________
 def test_attribute():
 class Foo(object):
@@ -298,21 +298,21 @@ get on the terminal - we are working on that):
 i = Foo()
 > assert i.b == 2
 E assert 1 == 2
-E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0x7f65f1c814e0>.b
+E + where 1 = <failure_demo.Foo object at 0xdeadbeef>.b
 failure_demo.py:101: AssertionError
-_________________________ test_attribute_instance __________________________
+_______ test_attribute_instance ________
 def test_attribute_instance():
 class Foo(object):
 b = 1
 > assert Foo().b == 2
 E assert 1 == 2
-E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0x7f65f1c7f7f0>.b
-E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0x7f65f1c7f7f0> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
+E + where 1 = <failure_demo.Foo object at 0xdeadbeef>.b
+E + where <failure_demo.Foo object at 0xdeadbeef> = <class 'failure_demo.Foo'>()
 failure_demo.py:107: AssertionError
-__________________________ test_attribute_failure __________________________
+_______ test_attribute_failure ________
 def test_attribute_failure():
 class Foo(object):
@@ -323,16 +323,16 @@ get on the terminal - we are working on that):
 > assert i.b == 2
 failure_demo.py:116:
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
-self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0x7f65f1c97dd8>
+self = <failure_demo.Foo object at 0xdeadbeef>
 def _get_b(self):
 > raise Exception('Failed to get attrib')
 E Exception: Failed to get attrib
 failure_demo.py:113: Exception
-_________________________ test_attribute_multiple __________________________
+_______ test_attribute_multiple ________
 def test_attribute_multiple():
 class Foo(object):
@@ -341,57 +341,57 @@ get on the terminal - we are working on that):
 b = 2
 > assert Foo().b == Bar().b
 E assert 1 == 2
-E + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0x7f65f1c9b630>.b
-E + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0x7f65f1c9b630> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
-E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0x7f65f1c9b2b0>.b
-E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0x7f65f1c9b2b0> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
+E + where 1 = <failure_demo.Foo object at 0xdeadbeef>.b
+E + where <failure_demo.Foo object at 0xdeadbeef> = <class 'failure_demo.Foo'>()
+E + and 2 = <failure_demo.Bar object at 0xdeadbeef>.b
+E + where <failure_demo.Bar object at 0xdeadbeef> = <class 'failure_demo.Bar'>()
 failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________ _______ TestRaises.test_raises ________
self = <failure_demo.TestRaises object at 0x7f65f1c3eba8> self = <failure_demo.TestRaises instance at 0xdeadbeef>
def test_raises(self): def test_raises(self):
s = 'qwe' s = 'qwe'
> raises(TypeError, "int(s)") > raises(TypeError, "int(s)")
failure_demo.py:133: failure_demo.py:133:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s) > int(s)
E ValueError: invalid literal for int() with base 10: 'qwe' E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1075>:1: ValueError <0-codegen $PWD/_pytest/python.py:1091>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________ _______ TestRaises.test_raises_doesnt ________
self = <failure_demo.TestRaises object at 0x7f65f1cc4eb8> self = <failure_demo.TestRaises instance at 0xdeadbeef>
def test_raises_doesnt(self): def test_raises_doesnt(self):
> raises(IOError, "int('3')") > raises(IOError, "int('3')")
E Failed: DID NOT RAISE E Failed: DID NOT RAISE
failure_demo.py:136: Failed failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________ _______ TestRaises.test_raise ________
self = <failure_demo.TestRaises object at 0x7f65f1cceeb8> self = <failure_demo.TestRaises instance at 0xdeadbeef>
def test_raise(self): def test_raise(self):
> raise ValueError("demo error") > raise ValueError("demo error")
E ValueError: demo error E ValueError: demo error
failure_demo.py:139: ValueError failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________ _______ TestRaises.test_tupleerror ________
self = <failure_demo.TestRaises object at 0x7f65f23136d8> self = <failure_demo.TestRaises instance at 0xdeadbeef>
def test_tupleerror(self): def test_tupleerror(self):
> a,b = [1] > a,b = [1]
E ValueError: need more than 1 value to unpack E ValueError: need more than 1 value to unpack
failure_demo.py:142: ValueError failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ _______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ________
self = <failure_demo.TestRaises object at 0x7f65f1ca2240> self = <failure_demo.TestRaises instance at 0xdeadbeef>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self): def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3] l = [1,2,3]
@ -400,18 +400,18 @@ get on the terminal - we are working on that):
E TypeError: 'int' object is not iterable E TypeError: 'int' object is not iterable
failure_demo.py:147: TypeError failure_demo.py:147: TypeError
--------------------------- Captured stdout call --------------------------- ----------------------------- Captured stdout call -----------------------------
l is [1, 2, 3] l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________ _______ TestRaises.test_some_error ________
self = <failure_demo.TestRaises object at 0x7f65f1cb36a0> self = <failure_demo.TestRaises instance at 0xdeadbeef>
def test_some_error(self): def test_some_error(self):
> if namenotexi: > if namenotexi:
E NameError: name 'namenotexi' is not defined E NameError: global name 'namenotexi' is not defined
failure_demo.py:150: NameError failure_demo.py:150: NameError
____________________ test_dynamic_compile_shows_nicely _____________________ _______ test_dynamic_compile_shows_nicely ________
def test_dynamic_compile_shows_nicely(): def test_dynamic_compile_shows_nicely():
src = 'def foo():\n assert 1 == 0\n' src = 'def foo():\n assert 1 == 0\n'
@ -423,16 +423,16 @@ get on the terminal - we are working on that):
> module.foo() > module.foo()
failure_demo.py:165: failure_demo.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo(): def foo():
> assert 1 == 0 > assert 1 == 0
E assert 1 == 0 E assert 1 == 0
<2-codegen 'abc-123' /tmp/sandbox/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError <2-codegen 'abc-123' $PWD/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________ _______ TestMoreErrors.test_complex_error ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1cb5470> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_complex_error(self): def test_complex_error(self):
def f(): def f():
@ -442,10 +442,10 @@ get on the terminal - we are working on that):
> somefunc(f(), g()) > somefunc(f(), g())
failure_demo.py:175: failure_demo.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:8: in somefunc failure_demo.py:8: in somefunc
otherfunc(x,y) otherfunc(x,y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43 a = 44, b = 43
@ -454,9 +454,9 @@ get on the terminal - we are working on that):
E assert 44 == 43 E assert 44 == 43
failure_demo.py:5: AssertionError failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________ _______ TestMoreErrors.test_z1_unpack_error ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1c9d940> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_z1_unpack_error(self): def test_z1_unpack_error(self):
l = [] l = []
@ -464,9 +464,9 @@ get on the terminal - we are working on that):
E ValueError: need more than 0 values to unpack E ValueError: need more than 0 values to unpack
failure_demo.py:179: ValueError failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________ _______ TestMoreErrors.test_z2_type_error ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1c7f208> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_z2_type_error(self): def test_z2_type_error(self):
l = 3 l = 3
@ -474,21 +474,21 @@ get on the terminal - we are working on that):
E TypeError: 'int' object is not iterable E TypeError: 'int' object is not iterable
failure_demo.py:183: TypeError failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________ _______ TestMoreErrors.test_startswith ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1cc40b8> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_startswith(self): def test_startswith(self):
s = "123" s = "123"
g = "456" g = "456"
> assert s.startswith(g) > assert s.startswith(g)
E assert <built-in method startswith of str object at 0x7f65f1ce14c8>('456') E assert <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0x7f65f1ce14c8> = '123'.startswith E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
failure_demo.py:188: AssertionError failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________ _______ TestMoreErrors.test_startswith_nested ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1c81b00> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_startswith_nested(self): def test_startswith_nested(self):
def f(): def f():
@ -496,15 +496,15 @@ get on the terminal - we are working on that):
def g(): def g():
return "456" return "456"
> assert f().startswith(g()) > assert f().startswith(g())
E assert <built-in method startswith of str object at 0x7f65f1ce14c8>('456') E assert <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0x7f65f1ce14c8> = '123'.startswith E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0x7f65f1c32950>() E + where '123' = <function f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0x7f65f1c32ea0>() E + and '456' = <function g at 0xdeadbeef>()
failure_demo.py:195: AssertionError failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________ _______ TestMoreErrors.test_global_func ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1c97240> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_global_func(self): def test_global_func(self):
> assert isinstance(globf(42), float) > assert isinstance(globf(42), float)
@ -512,20 +512,20 @@ get on the terminal - we are working on that):
E + where 43 = globf(42) E + where 43 = globf(42)
failure_demo.py:198: AssertionError failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________ _______ TestMoreErrors.test_instance ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1ce1080> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_instance(self): def test_instance(self):
self.x = 6*7 self.x = 6*7
> assert self.x != 42 > assert self.x != 42
E assert 42 != 42 E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0x7f65f1ce1080>.x E + where 42 = <failure_demo.TestMoreErrors instance at 0xdeadbeef>.x
failure_demo.py:202: AssertionError failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________ _______ TestMoreErrors.test_compare ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1c3e828> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_compare(self): def test_compare(self):
> assert globf(10) < 5 > assert globf(10) < 5
@ -533,9 +533,9 @@ get on the terminal - we are working on that):
E + where 11 = globf(10) E + where 11 = globf(10)
failure_demo.py:205: AssertionError failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________ _______ TestMoreErrors.test_try_finally ________
self = <failure_demo.TestMoreErrors object at 0x7f65f1c67828> self = <failure_demo.TestMoreErrors instance at 0xdeadbeef>
def test_try_finally(self): def test_try_finally(self):
x = 1 x = 1
@ -544,9 +544,9 @@ get on the terminal - we are working on that):
E assert 1 == 0 E assert 1 == 0
failure_demo.py:210: AssertionError failure_demo.py:210: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________ _______ TestCustomAssertMsg.test_single_line ________
self = <failure_demo.TestCustomAssertMsg object at 0x7f65f1c29860> self = <failure_demo.TestCustomAssertMsg instance at 0xdeadbeef>
def test_single_line(self): def test_single_line(self):
class A: class A:
@ -555,12 +555,12 @@ get on the terminal - we are working on that):
> assert A.a == b, "A.a appears not to be b" > assert A.a == b, "A.a appears not to be b"
E AssertionError: A.a appears not to be b E AssertionError: A.a appears not to be b
E assert 1 == 2 E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a E + where 1 = <class failure_demo.A at 0xdeadbeef>.a
failure_demo.py:221: AssertionError failure_demo.py:221: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________ _______ TestCustomAssertMsg.test_multiline ________
self = <failure_demo.TestCustomAssertMsg object at 0x7f65f1c676a0> self = <failure_demo.TestCustomAssertMsg instance at 0xdeadbeef>
def test_multiline(self): def test_multiline(self):
class A: class A:
@ -572,12 +572,12 @@ get on the terminal - we are working on that):
E or does not appear to be b E or does not appear to be b
E one of those E one of those
E assert 1 == 2 E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a E + where 1 = <class failure_demo.A at 0xdeadbeef>.a
failure_demo.py:227: AssertionError failure_demo.py:227: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________ _______ TestCustomAssertMsg.test_custom_repr ________
self = <failure_demo.TestCustomAssertMsg object at 0x7f65f1ccebe0> self = <failure_demo.TestCustomAssertMsg instance at 0xdeadbeef>
def test_custom_repr(self): def test_custom_repr(self):
class JSON: class JSON:
@ -595,4 +595,4 @@ get on the terminal - we are working on that):
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:237: AssertionError failure_demo.py:237: AssertionError
======================== 42 failed in 0.35 seconds ========================= ======= 42 failed in 0.12 seconds ========
View File
@ -39,8 +39,8 @@ Let's run this without supplying our new option::
$ py.test -q test_sample.py $ py.test -q test_sample.py
F F
================================= FAILURES ================================= ======= FAILURES ========
_______________________________ test_answer ________________________________ _______ test_answer ________
cmdopt = 'type1' cmdopt = 'type1'
@ -53,16 +53,16 @@ Let's run this without supplying our new option::
E assert 0 E assert 0
test_sample.py:6: AssertionError test_sample.py:6: AssertionError
--------------------------- Captured stdout call --------------------------- ----------------------------- Captured stdout call -----------------------------
first first
1 failed in 0.01 seconds 1 failed in 0.12 seconds
And now with supplying a command line option:: And now with supplying a command line option::
$ py.test -q --cmdopt=type2 $ py.test -q --cmdopt=type2
F F
================================= FAILURES ================================= ======= FAILURES ========
_______________________________ test_answer ________________________________ _______ test_answer ________
cmdopt = 'type2' cmdopt = 'type2'
@ -75,9 +75,9 @@ And now with supplying a command line option::
E assert 0 E assert 0
test_sample.py:6: AssertionError test_sample.py:6: AssertionError
--------------------------- Captured stdout call --------------------------- ----------------------------- Captured stdout call -----------------------------
second second
1 failed in 0.01 seconds 1 failed in 0.12 seconds
You can see that the command line option arrived in our test. This You can see that the command line option arrived in our test. This
completes the basic pattern. However, one often wants to process completes the basic pattern. However, one often wants to process
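For reference, the ``cmdopt`` fixture exercised above is typically wired up in a ``conftest.py`` along these lines (a sketch reconstructed from the option name and default visible in the output, not the literal file from this repo)::

    # conftest.py -- sketch: declare the option and expose it as a fixture
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
                         help="my option: type1 or type2")

    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")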
@ -107,12 +107,12 @@ of subprocesses close to your CPU. Running in an empty
directory with the above conftest.py:: directory with the above conftest.py::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items collected 0 items
============================= in 0.00 seconds ============================= ======= in 0.12 seconds ========
.. _`excontrolskip`: .. _`excontrolskip`:
@ -152,28 +152,28 @@ We can now write a test module like this::
and when running it we will see a skipped "slow" test:: and when running it we will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's' $ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
test_module.py .s test_module.py .s
========================= short test summary info ========================== ======= short test summary info ========
SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run SKIP [1] $REGENDOC_TMPDIR/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ==================== ======= 1 passed, 1 skipped in 0.12 seconds ========
Or run it including the ``slow`` marked test:: Or run it including the ``slow`` marked test::
$ py.test --runslow $ py.test --runslow
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
test_module.py .. test_module.py ..
========================= 2 passed in 0.01 seconds ========================= ======= 2 passed in 0.12 seconds ========
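Both runs are driven by a ``conftest.py`` along these lines (a sketch inferred from the skip reason ``need --runslow option to run`` shown above)::

    # conftest.py -- sketch: skip tests marked "slow" unless --runslow is given
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run slow tests")

    def pytest_runtest_setup(item):
        if "slow" in item.keywords and not item.config.getoption("--runslow"):
            pytest.skip("need --runslow option to run")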
Writing well integrated assertion helpers Writing well integrated assertion helpers
-------------------------------------------------- --------------------------------------------------
@ -203,15 +203,15 @@ Let's run our little function::
$ py.test -q test_checkconfig.py $ py.test -q test_checkconfig.py
F F
================================= FAILURES ================================= ======= FAILURES ========
______________________________ test_something ______________________________ _______ test_something ________
def test_something(): def test_something():
> checkconfig(42) > checkconfig(42)
E Failed: not configured: 42 E Failed: not configured: 42
test_checkconfig.py:8: Failed test_checkconfig.py:8: Failed
1 failed in 0.02 seconds 1 failed in 0.12 seconds
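The helper producing that ``Failed: not configured: 42`` message can be written as follows; ``__tracebackhide__`` keeps the helper's own frame out of the failure traceback (a sketch)::

    # test_checkconfig.py -- sketch of a well integrated assertion helper
    import pytest

    def checkconfig(x):
        __tracebackhide__ = True  # hide this frame from failure tracebacks
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % x)

    def test_something():
        checkconfig(42)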
Detect if running from within a pytest run Detect if running from within a pytest run
-------------------------------------------------------------- --------------------------------------------------------------
@ -258,13 +258,13 @@ It's easy to present extra information in a ``pytest`` run::
which will add the string to the test header accordingly:: which will add the string to the test header accordingly::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile:
project deps: mylib-1.1 project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items collected 0 items
============================= in 0.00 seconds ============================= ======= in 0.12 seconds ========
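The extra ``project deps`` line comes from the ``pytest_report_header`` hook; a minimal sketch::

    # conftest.py
    def pytest_report_header(config):
        return "project deps: mylib-1.1"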
.. regendoc:wipe .. regendoc:wipe
@ -282,24 +282,24 @@ you present more information appropriately::
which will add info only when run with "-v":: which will add info only when run with "-v"::
$ py.test -v $ py.test -v
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
rootdir: /tmp/doc-exec-162, inifile:
info1: did you know that ... info1: did you know that ...
did you? did you?
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items collecting ... collected 0 items
============================= in 0.00 seconds ============================= ======= in 0.12 seconds ========
and nothing when run plainly:: and nothing when run plainly::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items collected 0 items
============================= in 0.00 seconds ============================= ======= in 0.12 seconds ========
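The two runs above can be driven by returning the extra lines only at higher verbosity (a sketch; the hook may return a string or a list of lines)::

    # conftest.py
    def pytest_report_header(config):
        if config.getoption("verbose") > 0:
            return ["info1: did you know that ...", "did you?"]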
profiling test duration profiling test duration
-------------------------- --------------------------
@ -327,18 +327,18 @@ out which tests are the slowest. Let's make an artificial test suite::
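A sketch of such a module, consistent with the durations reported below::

    # test_some_are_slow.py -- sketch
    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)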
Now we can profile which test functions execute the slowest:: Now we can profile which test functions execute the slowest::
$ py.test --durations=3 $ py.test --durations=3
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items
test_some_are_slow.py ... test_some_are_slow.py ...
========================= slowest 3 test durations ========================= ======= slowest 3 test durations ========
0.20s call test_some_are_slow.py::test_funcslow2 0.20s call test_some_are_slow.py::test_funcslow2
0.10s call test_some_are_slow.py::test_funcslow1 0.10s call test_some_are_slow.py::test_funcslow1
0.00s setup test_some_are_slow.py::test_funcslow2 0.00s setup test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.31 seconds ========================= ======= 3 passed in 0.12 seconds ========
incremental testing - test steps incremental testing - test steps
--------------------------------------------------- ---------------------------------------------------
@ -389,27 +389,27 @@ tests in a class. Here is a test module example::
If we run this:: If we run this::
$ py.test -rx $ py.test -rx
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
test_step.py .Fx. test_step.py .Fx.
================================= FAILURES ================================= ======= FAILURES ========
____________________ TestUserHandling.test_modification ____________________ _______ TestUserHandling.test_modification ________
self = <test_step.TestUserHandling object at 0x7ff60bbb83c8> self = <test_step.TestUserHandling instance at 0xdeadbeef>
def test_modification(self): def test_modification(self):
> assert 0 > assert 0
E assert 0 E assert 0
test_step.py:9: AssertionError test_step.py:9: AssertionError
========================= short test summary info ========================== ======= short test summary info ========
XFAIL test_step.py::TestUserHandling::()::test_deletion XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification) reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds =============== ======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ========
We'll see that ``test_deletion`` was not executed because ``test_modification`` We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure". failed. It is reported as an "expected failure".
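The ``incremental`` behaviour is implemented with two hooks in ``conftest.py``; a sketch that matches the xfail reason shown above::

    # conftest.py -- sketch: xfail the remaining tests of a class after a failure
    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords and call.excinfo is not None:
            item.parent._previousfailed = item

    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)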
@ -460,9 +460,9 @@ the ``db`` fixture::
We can run this:: We can run this::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items collected 7 items
test_step.py .Fx. test_step.py .Fx.
@ -470,46 +470,46 @@ We can run this::
a/test_db2.py F a/test_db2.py F
b/test_error.py E b/test_error.py E
================================== ERRORS ================================== ======= ERRORS ========
_______________________ ERROR at setup of test_root ________________________ _______ ERROR at setup of test_root ________
file /tmp/doc-exec-162/b/test_error.py, line 1 file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out def test_root(db): # no db here, will error out
fixture 'db' not found fixture 'db' not found
available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them. use 'py.test --fixtures [testpath]' for help on them.
/tmp/doc-exec-162/b/test_error.py:1 $REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES ================================= ======= FAILURES ========
____________________ TestUserHandling.test_modification ____________________ _______ TestUserHandling.test_modification ________
self = <test_step.TestUserHandling object at 0x7f8ecd5b87f0> self = <test_step.TestUserHandling instance at 0xdeadbeef>
def test_modification(self): def test_modification(self):
> assert 0 > assert 0
E assert 0 E assert 0
test_step.py:9: AssertionError test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________ _______ test_a1 ________
db = <conftest.DB object at 0x7f8ecdc11470> db = <conftest.DB instance at 0xdeadbeef>
def test_a1(db): def test_a1(db):
> assert 0, db # to show value > assert 0, db # to show value
E AssertionError: <conftest.DB object at 0x7f8ecdc11470> E AssertionError: <conftest.DB instance at 0xdeadbeef>
E assert 0 E assert 0
a/test_db.py:2: AssertionError a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________ _______ test_a2 ________
db = <conftest.DB object at 0x7f8ecdc11470> db = <conftest.DB instance at 0xdeadbeef>
def test_a2(db): def test_a2(db):
> assert 0, db # to show value > assert 0, db # to show value
E AssertionError: <conftest.DB object at 0x7f8ecdc11470> E AssertionError: <conftest.DB instance at 0xdeadbeef>
E assert 0 E assert 0
a/test_db2.py:2: AssertionError a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ========== ======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ========
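The shared instance stems from a session-scoped fixture defined in ``a/conftest.py``, roughly (a sketch matching the ``<conftest.DB instance>`` reprs above)::

    # a/conftest.py -- sketch
    import pytest

    class DB:
        pass

    @pytest.fixture(scope="session")
    def db():
        return DB()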
The two test modules in the ``a`` directory see the same ``db`` fixture instance The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course while the one test in the sister-directory ``b`` doesn't see it. We could of course
@ -563,37 +563,36 @@ if you then have failing tests::
and run them:: and run them::
$ py.test test_module.py $ py.test test_module.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
test_module.py FF test_module.py FF
================================= FAILURES ================================= ======= FAILURES ========
________________________________ test_fail1 ________________________________ _______ test_fail1 ________
tmpdir = local('/tmp/pytest-22/test_fail10') tmpdir = local('/tmp/pytest-NaN/test_fail10')
def test_fail1(tmpdir): def test_fail1(tmpdir):
> assert 0 > assert 0
E assert 0 E assert 0
test_module.py:2: AssertionError test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________ _______ test_fail2 ________
def test_fail2(): def test_fail2():
> assert 0 > assert 0
E assert 0 E assert 0
test_module.py:4: AssertionError test_module.py:4: AssertionError
========================= 2 failed in 0.02 seconds ========================= ======= 2 failed in 0.12 seconds ========
you will have a "failures" file which contains the failing test ids:: you will have a "failures" file which contains the failing test ids::
$ cat failures $ cat failures
test_module.py::test_fail1 (/tmp/pytest-22/test_fail10) cat: failures: No such file or directory
test_module.py::test_fail2
Making test result information available in fixtures Making test result information available in fixtures
----------------------------------------------------------- -----------------------------------------------------------
@ -654,17 +653,17 @@ if you then have failing tests::
and run it:: and run it::
$ py.test -s test_module.py $ py.test -s test_module.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-162, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails test_module.py E('setting up a test failed!', 'test_module.py::test_setup_fails')
Fexecuting test failed test_module.py::test_call_fails F('executing test failed', 'test_module.py::test_call_fails')
F F
================================== ERRORS ================================== ======= ERRORS ========
____________________ ERROR at setup of test_setup_fails ____________________ _______ ERROR at setup of test_setup_fails ________
@pytest.fixture @pytest.fixture
def other(): def other():
@ -672,8 +671,8 @@ and run it::
E assert 0 E assert 0
test_module.py:6: AssertionError test_module.py:6: AssertionError
================================= FAILURES ================================= ======= FAILURES ========
_____________________________ test_call_fails ______________________________ _______ test_call_fails ________
something = None something = None
@ -682,14 +681,14 @@ and run it::
E assert 0 E assert 0
test_module.py:12: AssertionError test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________ _______ test_fail2 ________
def test_fail2(): def test_fail2():
> assert 0 > assert 0
E assert 0 E assert 0
test_module.py:15: AssertionError test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.02 seconds ===================== ======= 2 failed, 1 warnings, 1 error in 0.12 seconds ========
You'll see that the fixture finalizers could use the precise reporting You'll see that the fixture finalizers could use the precise reporting
information. information.
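A sketch of the hook and fixture behind this run: a hookwrapper attaches the report of each phase to the test item, and the fixture finalizer inspects it (this relies on the pluggy-based ``pytest.hookimpl`` available in the pytest version shown in the session header)::

    # conftest.py -- sketch
    import pytest

    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield            # let the other hooks build the report
        rep = outcome.get_result()
        setattr(item, "rep_" + rep.when, rep)  # rep_setup / rep_call / rep_teardown

    @pytest.fixture
    def something(request):
        def fin():
            if request.node.rep_setup.failed:
                print("setting up a test failed!", request.node.nodeid)
            elif request.node.rep_call.failed:
                print("executing test failed", request.node.nodeid)
        request.addfinalizer(fin)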
@ -744,4 +743,4 @@ This makes it convenient to execute your tests from within your frozen
application, using standard ``py.test`` command-line options:: application, using standard ``py.test`` command-line options::
$ ./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/ $ ./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/
/bin/sh: 1: ./app_main: not found /bin/sh: ./app_main: No such file or directory
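(the frozen binary simply does not exist in the docs-regeneration environment, hence the shell error). The dispatch inside such a frozen application needs only a few lines; a sketch, where ``run_application()`` stands in for the normal startup::

    # app_main.py -- sketch
    import sys

    if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
        import pytest
        sys.exit(pytest.main(sys.argv[2:]))
    else:
        run_application()  # hypothetical: the app's regular entry point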
View File
@ -69,4 +69,4 @@ If you run this without output capturing::
.test other .test other
.test_unit1 method called .test_unit1 method called
. .
4 passed in 0.03 seconds 4 passed in 0.12 seconds
View File
@ -74,17 +74,17 @@ will discover and call the :py:func:`@pytest.fixture <_pytest.python.fixture>`
marked ``smtp`` fixture function. Running the test looks like this:: marked ``smtp`` fixture function. Running the test looks like this::
$ py.test test_smtpsimple.py $ py.test test_smtpsimple.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-98, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
test_smtpsimple.py F test_smtpsimple.py F
================================= FAILURES ================================= ======= FAILURES ========
________________________________ test_ehlo _________________________________ _______ test_ehlo ________
smtp = <smtplib.SMTP object at 0x7f9d45764c88> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_ehlo(smtp): def test_ehlo(smtp):
response, msg = smtp.ehlo() response, msg = smtp.ehlo()
@ -93,7 +93,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
E assert 0 E assert 0
test_smtpsimple.py:11: AssertionError test_smtpsimple.py:11: AssertionError
========================= 1 failed in 1.07 seconds ========================= ======= 1 failed in 0.12 seconds ========
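For reference, the module driving this output (a sketch; the server name is taken from the ``merlinux`` assertions used elsewhere in this document)::

    # test_smtpsimple.py -- sketch
    import pytest
    import smtplib

    @pytest.fixture
    def smtp():
        return smtplib.SMTP("merlinux.eu")

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert 0  # for demo purposes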
In the failure traceback we see that the test function was called with a In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture ``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
@ -192,28 +192,29 @@ We deliberately insert failing ``assert 0`` statements in order to
inspect what is going on and can now run the tests:: inspect what is going on and can now run the tests::
$ py.test test_module.py $ py.test test_module.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-98, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
test_module.py FF test_module.py FF
================================= FAILURES ================================= ======= FAILURES ========
________________________________ test_ehlo _________________________________ _______ test_ehlo ________
smtp = <smtplib.SMTP object at 0x7fb558b12240> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_ehlo(smtp): def test_ehlo(smtp):
response = smtp.ehlo() response = smtp.ehlo()
assert response[0] == 250 assert response[0] == 250
> assert "merlinux" in response[1] assert "merlinux" in response[1]
E TypeError: Type str doesn't support the buffer API > assert 0 # for demo purposes
E assert 0
test_module.py:5: TypeError test_module.py:6: AssertionError
________________________________ test_noop _________________________________ _______ test_noop ________
smtp = <smtplib.SMTP object at 0x7fb558b12240> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_noop(smtp): def test_noop(smtp):
response = smtp.noop() response = smtp.noop()
@ -222,7 +223,7 @@ inspect what is going on and can now run the tests::
E assert 0 E assert 0
test_module.py:11: AssertionError test_module.py:11: AssertionError
========================= 2 failed in 0.82 seconds ========================= ======= 2 failed in 0.12 seconds ========
You see the two ``assert 0`` failing and, more importantly, you can also see You see the two ``assert 0`` failing and, more importantly, you can also see
that the same (module-scoped) ``smtp`` object was passed into the two that the same (module-scoped) ``smtp`` object was passed into the two
@ -270,7 +271,7 @@ Let's execute it::
$ py.test -s -q --tb=no $ py.test -s -q --tb=no
FFteardown smtp FFteardown smtp
2 failed in 1.44 seconds 2 failed in 0.12 seconds
We see that the ``smtp`` instance is finalized after the two We see that the ``smtp`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture tests finished execution. Note that if we decorated our fixture
@ -310,8 +311,9 @@ We use the ``request.module`` attribute to optionally obtain an
again, nothing much has changed:: again, nothing much has changed::
$ py.test -s -q --tb=no $ py.test -s -q --tb=no
FF FFteardown smtp
2 failed in 0.62 seconds
2 failed in 0.12 seconds
Let's quickly create another test module that actually sets the Let's quickly create another test module that actually sets the
server URL in its module namespace:: server URL in its module namespace::
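    # test_anothersmtp.py -- a sketch; the smtp fixture reads this attribute
    smtpserver = "mail.python.org"  # will be picked up via request.module

    def test_showhelo(smtp):
        assert 0, smtp.helo()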
@ -327,11 +329,11 @@ Running it::
$ py.test -qq --tb=short test_anothersmtp.py $ py.test -qq --tb=short test_anothersmtp.py
F F
================================= FAILURES ================================= ======= FAILURES ========
______________________________ test_showhelo _______________________________ _______ test_showhelo ________
test_anothersmtp.py:5: in test_showhelo test_anothersmtp.py:5: in test_showhelo
assert 0, smtp.helo() assert 0, smtp.helo()
E AssertionError: (250, b'mail.python.org') E AssertionError: (250, 'hq.merlinux.eu')
E assert 0 E assert 0
voila! The ``smtp`` fixture function picked up our mail server name voila! The ``smtp`` fixture function picked up our mail server name
@ -376,21 +378,22 @@ So let's just do another run::
$ py.test -q test_module.py $ py.test -q test_module.py
FFFF FFFF
================================= FAILURES ================================= ======= FAILURES ========
__________________________ test_ehlo[merlinux.eu] __________________________ _______ test_ehlo[merlinux.eu] ________
smtp = <smtplib.SMTP object at 0x7f4eecf92080> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_ehlo(smtp): def test_ehlo(smtp):
response = smtp.ehlo() response = smtp.ehlo()
assert response[0] == 250 assert response[0] == 250
> assert "merlinux" in response[1] assert "merlinux" in response[1]
E TypeError: Type str doesn't support the buffer API > assert 0 # for demo purposes
E assert 0
test_module.py:5: TypeError test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________ _______ test_noop[merlinux.eu] ________
smtp = <smtplib.SMTP object at 0x7f4eecf92080> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_noop(smtp): def test_noop(smtp):
response = smtp.noop() response = smtp.noop()
@ -399,22 +402,22 @@ So let's just do another run::
E assert 0 E assert 0
test_module.py:11: AssertionError test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________ _______ test_ehlo[mail.python.org] ________
smtp = <smtplib.SMTP object at 0x7f4eecf92048> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_ehlo(smtp): def test_ehlo(smtp):
response = smtp.ehlo() response = smtp.ehlo()
assert response[0] == 250 assert response[0] == 250
> assert "merlinux" in response[1] > assert "merlinux" in response[1]
E TypeError: Type str doesn't support the buffer API E assert 'merlinux' in 'mail.python.org\nSIZE 51200000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'
test_module.py:5: TypeError test_module.py:5: AssertionError
-------------------------- Captured stdout setup --------------------------- ---------------------------- Captured stdout setup -----------------------------
finalizing <smtplib.SMTP object at 0x7f4eecf92080> finalizing <smtplib.SMTP instance at 0xdeadbeef>
________________________ test_noop[mail.python.org] ________________________ _______ test_noop[mail.python.org] ________
smtp = <smtplib.SMTP object at 0x7f4eecf92048> smtp = <smtplib.SMTP instance at 0xdeadbeef>
def test_noop(smtp): def test_noop(smtp):
response = smtp.noop() response = smtp.noop()
@ -423,7 +426,7 @@ So let's just do another run::
E assert 0 E assert 0
test_module.py:11: AssertionError test_module.py:11: AssertionError
4 failed in 1.75 seconds 4 failed in 0.12 seconds
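The doubled runs and the ``finalizing`` print come from a parametrized, module-scoped fixture, roughly (a sketch)::

    # conftest.py -- sketch
    import pytest
    import smtplib

    @pytest.fixture(scope="module",
                    params=["merlinux.eu", "mail.python.org"])
    def smtp(request):
        smtp = smtplib.SMTP(request.param)
        def fin():
            print("finalizing %s" % smtp)
            smtp.close()
        request.addfinalizer(fin)
        return smtp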
We see that our two test functions each ran twice, against the different We see that our two test functions each ran twice, against the different
``smtp`` instances. Note also, that with the ``mail.python.org`` ``smtp`` instances. Note also, that with the ``mail.python.org``
@ -473,9 +476,9 @@ return ``None`` then pytest's auto-generated ID will be used.
Running the above tests results in the following test IDs being used:: Running the above tests results in the following test IDs being used::
$ py.test --collect-only $ py.test --collect-only
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-98, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items collected 6 items
<Module 'test_anothersmtp.py'> <Module 'test_anothersmtp.py'>
<Function 'test_showhelo[merlinux.eu]'> <Function 'test_showhelo[merlinux.eu]'>
@ -486,7 +489,7 @@ Running the above tests results in the following test IDs being used::
<Function 'test_ehlo[mail.python.org]'> <Function 'test_ehlo[mail.python.org]'>
<Function 'test_noop[mail.python.org]'> <Function 'test_noop[mail.python.org]'>
============================= in 0.02 seconds ============================= ======= in 0.12 seconds ========
.. _`interdependent fixtures`: .. _`interdependent fixtures`:
@ -519,15 +522,15 @@ Here we declare an ``app`` fixture which receives the previously defined
``smtp`` fixture and instantiates an ``App`` object with it. Let's run it:: ``smtp`` fixture and instantiates an ``App`` object with it. Let's run it::
$ py.test -v test_appsetup.py $ py.test -v test_appsetup.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
rootdir: /tmp/doc-exec-98, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items collecting ... collected 2 items
test_appsetup.py::test_smtp_exists[merlinux.eu] PASSED test_appsetup.py::test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED test_appsetup.py::test_smtp_exists[mail.python.org] PASSED
========================= 2 passed in 1.09 seconds ========================= ======= 2 passed in 0.12 seconds ========
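For reference, a sketch of the fixture and test exercised here::

    # test_appsetup.py -- sketch
    import pytest

    class App:
        def __init__(self, smtp):
            self.smtp = smtp

    @pytest.fixture(scope="module")
    def app(smtp):
        return App(smtp)

    def test_smtp_exists(app):
        assert app.smtp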
Due to the parametrization of ``smtp`` the test will run twice with two Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no different ``App`` instances and respective smtp servers. There is no
@ -584,31 +587,31 @@ to show the setup/teardown flow::
Let's run the tests in verbose mode and with looking at the print-output:: Let's run the tests in verbose mode and with looking at the print-output::
$ py.test -v -s test_module.py $ py.test -v -s test_module.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
rootdir: /tmp/doc-exec-98, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items collecting ... collected 8 items
test_module.py::test_0[1] test0 1 test_module.py::test_0[1] (' test0', 1)
PASSED PASSED
test_module.py::test_0[2] test0 2 test_module.py::test_0[2] (' test0', 2)
PASSED PASSED
test_module.py::test_1[mod1] create mod1 test_module.py::test_1[mod1] ('create', 'mod1')
test1 mod1 (' test1', 'mod1')
PASSED PASSED
test_module.py::test_2[1-mod1] test2 1 mod1 test_module.py::test_2[1-mod1] (' test2', 1, 'mod1')
PASSED PASSED
test_module.py::test_2[2-mod1] test2 2 mod1 test_module.py::test_2[2-mod1] (' test2', 2, 'mod1')
PASSED PASSED
test_module.py::test_1[mod2] create mod2 test_module.py::test_1[mod2] ('create', 'mod2')
test1 mod2 (' test1', 'mod2')
PASSED PASSED
test_module.py::test_2[1-mod2] test2 1 mod2 test_module.py::test_2[1-mod2] (' test2', 1, 'mod2')
PASSED PASSED
test_module.py::test_2[2-mod2] test2 2 mod2 test_module.py::test_2[2-mod2] (' test2', 2, 'mod2')
PASSED PASSED
========================= 8 passed in 0.02 seconds ========================= ======= 8 passed in 0.12 seconds ========
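A sketch of the fixtures and tests behind this run; the tuple-style output stems from multi-argument ``print`` under Python 2::

    # test_module.py -- sketch
    import pytest

    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        print("create", param)
        def fin():
            print("fin", param)
        request.addfinalizer(fin)
        return param

    @pytest.fixture(scope="function", params=[1, 2])
    def otherarg(request):
        return request.param

    def test_0(otherarg):
        print(" test0", otherarg)

    def test_1(modarg):
        print(" test1", modarg)

    def test_2(otherarg, modarg):
        print(" test2", otherarg, modarg)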
You can see that the parametrized module-scoped ``modarg`` resource caused You can see that the parametrized module-scoped ``modarg`` resource caused
an ordering of test execution that led to the fewest possible "active" resources. The finalizer for the ``mod1`` parametrized resource was executed an ordering of test execution that led to the fewest possible "active" resources. The finalizer for the ``mod1`` parametrized resource was executed
@ -664,7 +667,7 @@ to verify our fixture is activated and the tests pass::
$ py.test -q $ py.test -q
.. ..
2 passed in 0.01 seconds 2 passed in 0.12 seconds
You can specify multiple fixtures like this:: You can specify multiple fixtures like this::
@ -736,7 +739,7 @@ If we run it, we get two passing tests::
$ py.test -q $ py.test -q
.. ..
2 passed in 0.01 seconds 2 passed in 0.12 seconds
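The two passing tests come from an autouse fixture that wraps every test method of a class in a transaction; a sketch::

    # test_db_transact.py -- sketch (module name hypothetical)
    import pytest

    class DB:
        def __init__(self):
            self.intransaction = []
        def begin(self, name):
            self.intransaction.append(name)
        def rollback(self):
            self.intransaction.pop()

    @pytest.fixture(scope="module")
    def db():
        return DB()

    class TestClass:
        @pytest.fixture(autouse=True)
        def transact(self, request, db):
            db.begin(request.function.__name__)
            request.addfinalizer(db.rollback)

        def test_method1(self, db):
            assert db.intransaction == ["test_method1"]

        def test_method2(self, db):
            assert db.intransaction == ["test_method2"]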
Here is how autouse fixtures work in other scopes: Here is how autouse fixtures work in other scopes:
View File
@ -27,7 +27,7 @@ Installation options::
To check your installation has installed the correct version:: To check your installation has installed the correct version::
$ py.test --version $ py.test --version
This is pytest version 2.7.1, imported from /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/pytest.py This is pytest version 2.8.0.dev4, imported from $PWD/pytest.pyc
If you get an error check out :ref:`installation issues`. If you get an error check out :ref:`installation issues`.
@ -48,15 +48,15 @@ Let's create a first test file with a simple test function::
That's it. You can execute the test function now:: That's it. You can execute the test function now::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-101, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
test_sample.py F test_sample.py F
================================= FAILURES ================================= ======= FAILURES ========
_______________________________ test_answer ________________________________ _______ test_answer ________
def test_answer(): def test_answer():
> assert func(3) == 5 > assert func(3) == 5
@ -64,7 +64,7 @@ That's it. You can execute the test function now::
E + where 4 = func(3) E + where 4 = func(3)
test_sample.py:5: AssertionError test_sample.py:5: AssertionError
========================= 1 failed in 0.01 seconds ========================= ======= 1 failed in 0.12 seconds ========
``pytest`` found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``. ``pytest`` found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``.
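The file under test, reconstructed from the traceback above::

    # test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5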
@ -98,7 +98,7 @@ Running it, this time in "quiet" reporting mode::
$ py.test -q test_sysexit.py $ py.test -q test_sysexit.py
. .
1 passed in 0.01 seconds 1 passed in 0.12 seconds
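The module exercised here checks that a ``SystemExit`` is actually raised; a sketch (function names assumed)::

    # test_sysexit.py -- sketch
    import pytest

    def f():
        raise SystemExit(1)

    def test_mytest():
        with pytest.raises(SystemExit):
            f()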
.. todo:: For further ways to assert exceptions see the `raises` .. todo:: For further ways to assert exceptions see the `raises`
@ -125,10 +125,10 @@ run the module by passing its filename::
$ py.test -q test_class.py $ py.test -q test_class.py
.F .F
================================= FAILURES ================================= ======= FAILURES ========
____________________________ TestClass.test_two ____________________________ _______ TestClass.test_two ________
self = <test_class.TestClass object at 0x7fbf54cf5668> self = <test_class.TestClass instance at 0xdeadbeef>
def test_two(self): def test_two(self):
x = "hello" x = "hello"
@ -136,7 +136,7 @@ run the module by passing its filename::
E assert hasattr('hello', 'check') E assert hasattr('hello', 'check')
test_class.py:8: AssertionError test_class.py:8: AssertionError
1 failed, 1 passed in 0.01 seconds 1 failed, 1 passed in 0.12 seconds
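The class under test, reconstructed from the traceback (the body of the passing first test is assumed)::

    # test_class.py -- sketch
    class TestClass:
        def test_one(self):
            x = "this"
            assert "h" in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, "check")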
The first test passed, the second failed. Again we can easily see The first test passed, the second failed. Again we can easily see
the intermediate values used in the assertion, helping us to the intermediate values used in the assertion, helping us to
@ -161,10 +161,10 @@ before performing the test function call. Let's just run it::
$ py.test -q test_tmpdir.py $ py.test -q test_tmpdir.py
F F
================================= FAILURES ================================= ======= FAILURES ========
_____________________________ test_needsfiles ______________________________ _______ test_needsfiles ________
tmpdir = local('/tmp/pytest-18/test_needsfiles0') tmpdir = local('/tmp/pytest-NaN/test_needsfiles0')
def test_needsfiles(tmpdir): def test_needsfiles(tmpdir):
print (tmpdir) print (tmpdir)
@ -172,9 +172,9 @@ before performing the test function call. Let's just run it::
E assert 0 E assert 0
test_tmpdir.py:3: AssertionError test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call --------------------------- ----------------------------- Captured stdout call -----------------------------
/tmp/pytest-18/test_needsfiles0 /tmp/pytest-NaN/test_needsfiles0
1 failed in 0.05 seconds 1 failed in 0.12 seconds
Before the test runs, a unique-per-test-invocation temporary directory Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`. was created. More info at :ref:`tmpdir handling`.
View File
@ -52,15 +52,15 @@ tuples so that the ``test_eval`` function will run three times using
them in turn:: them in turn::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-109, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items
test_expectation.py ..F test_expectation.py ..F
================================= FAILURES ================================= ======= FAILURES ========
____________________________ test_eval[6*9-42] _____________________________ _______ test_eval[6*9-42] ________
input = '6*9', expected = 42 input = '6*9', expected = 42
@ -75,7 +75,7 @@ them in turn::
E + where 54 = eval('6*9') E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.02 seconds ==================== ======= 1 failed, 2 passed in 0.12 seconds ========
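For reference, the parametrized test producing this output (a sketch matching the tracebacks)::

    # test_expectation.py -- sketch
    import pytest

    @pytest.mark.parametrize("input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected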
As designed in this example, only one pair of input/output values fails As designed in this example, only one pair of input/output values fails
the simple test function. And as usual with test function arguments, the simple test function. And as usual with test function arguments,
@ -100,14 +100,14 @@ for example with the builtin ``mark.xfail``::
Let's run this:: Let's run this::
$ py.test $ py.test
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-109, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items
test_expectation.py ..x test_expectation.py ..x
=================== 2 passed, 1 xfailed in 0.02 seconds ==================== ======= 2 passed, 1 xfailed in 0.12 seconds ========
The one parameter set which caused a failure previously now The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test. shows up as an "xfailed (expected to fail)" test.
@ -159,24 +159,24 @@ If we now pass two stringinput values, our test will run twice::
$ py.test -q --stringinput="hello" --stringinput="world" test_strings.py $ py.test -q --stringinput="hello" --stringinput="world" test_strings.py
.. ..
2 passed in 0.01 seconds 2 passed in 0.12 seconds
Let's also run with a stringinput that will lead to a failing test:: Let's also run with a stringinput that will lead to a failing test::
$ py.test -q --stringinput="!" test_strings.py $ py.test -q --stringinput="!" test_strings.py
F F
================================= FAILURES ================================= ======= FAILURES ========
___________________________ test_valid_string[!] ___________________________ _______ test_valid_string[!] ________
stringinput = '!' stringinput = '!'
def test_valid_string(stringinput): def test_valid_string(stringinput):
> assert stringinput.isalpha() > assert stringinput.isalpha()
E assert <built-in method isalpha of str object at 0x7f6e2145e768>() E assert <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0x7f6e2145e768> = '!'.isalpha E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
test_strings.py:3: AssertionError test_strings.py:3: AssertionError
1 failed in 0.01 seconds 1 failed in 0.12 seconds
As expected our test function fails. As expected our test function fails.
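The machinery behind ``--stringinput`` is a command line option fed into a ``pytest_generate_tests`` hook; a sketch::

    # conftest.py -- sketch
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.option.stringinput)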
@ -186,9 +186,9 @@ listlist::
$ py.test -q -rs test_strings.py $ py.test -q -rs test_strings.py
s s
========================= short test summary info ========================== ======= short test summary info ========
SKIP [1] /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1185: got empty parameter set, function test_valid_string at /tmp/doc-exec-109/test_strings.py:1 SKIP [1] $PWD/_pytest/python.py:1201: got empty parameter set, function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
1 skipped in 0.01 seconds 1 skipped in 0.12 seconds
For further examples, you might want to look at :ref:`more For further examples, you might want to look at :ref:`more
parametrization examples <paramexamples>`. parametrization examples <paramexamples>`.
View File
@ -163,13 +163,13 @@ a simple test file with several usages:
Running it with the report-on-xfail option gives this output:: Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py example $ py.test -rx xfail_demo.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini rootdir: $PWD/doc/en, inifile: pytest.ini
collected 7 items collected 7 items
xfail_demo.py xxxxxxx xfail_demo.py xxxxxxx
========================= short test summary info ========================== ======= short test summary info ========
XFAIL xfail_demo.py::test_hello XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2 XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN] reason: [NOTRUN]
@ -183,7 +183,7 @@ Running it with the report-on-xfail option gives this output::
reason: reason reason: reason
XFAIL xfail_demo.py::test_hello7 XFAIL xfail_demo.py::test_hello7
======================== 7 xfailed in 0.06 seconds ========================= ======= 7 xfailed in 0.12 seconds ========
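``xfail_demo.py`` itself is not part of this hunk; its seven tests are presumably variations on usages of the following shape (a sketch under that assumption, not the verbatim file)::

    import pytest
    xfail = pytest.mark.xfail

    @xfail
    def test_hello():
        assert 0

    @xfail(run=False)        # collected but never run, reported as [NOTRUN]
    def test_hello2():
        assert 0

    @xfail(reason="bug 110")
    def test_hello4():
        assert 0

    def test_hello6():
        pytest.xfail("reason")   # imperative xfail from inside the test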
.. _`skip/xfail with parametrize`: .. _`skip/xfail with parametrize`:
View File
@ -28,17 +28,17 @@ Running this would result in a passed test except for the last
``assert 0`` line which we use to look at values:: ``assert 0`` line which we use to look at values::
$ py.test test_tmpdir.py $ py.test test_tmpdir.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-118, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
test_tmpdir.py F test_tmpdir.py F
================================= FAILURES ================================= ======= FAILURES ========
_____________________________ test_create_file _____________________________ _______ test_create_file ________
tmpdir = local('/tmp/pytest-19/test_create_file0') tmpdir = local('/tmp/pytest-NaN/test_create_file0')
def test_create_file(tmpdir): def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt") p = tmpdir.mkdir("sub").join("hello.txt")
@ -49,7 +49,7 @@ Running this would result in a passed test except for the last
E assert 0 E assert 0
test_tmpdir.py:7: AssertionError test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.04 seconds ========================= ======= 1 failed in 0.12 seconds ========
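The test module behind this traceback is only partially visible in the hunk; filled out, it likely reads along these lines (the calls are standard ``tmpdir``/``py.path.local`` API, but the exact line layout is an assumption)::

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0   # deliberate failure so the tmpdir value shows up above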
.. _`base temporary directory`: .. _`base temporary directory`:
View File
@ -87,36 +87,36 @@ Due to the deliberately failing assert statements, we can take a look at
the ``self.db`` values in the traceback:: the ``self.db`` values in the traceback::
$ py.test test_unittest_db.py $ py.test test_unittest_db.py
=========================== test session starts ============================ ======= test session starts ========
platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
rootdir: /tmp/doc-exec-119, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
test_unittest_db.py FF test_unittest_db.py FF
================================= FAILURES ================================= ======= FAILURES ========
___________________________ MyTest.test_method1 ____________________________ _______ MyTest.test_method1 ________
self = <test_unittest_db.MyTest testMethod=test_method1> self = <test_unittest_db.MyTest testMethod=test_method1>
def test_method1(self): def test_method1(self):
assert hasattr(self, "db") assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes > assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.db_class.<locals>.DummyDB object at 0x7f97382031d0> E AssertionError: <conftest.DummyDB instance at 0xdeadbeef>
E assert 0 E assert 0
test_unittest_db.py:9: AssertionError test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________ _______ MyTest.test_method2 ________
self = <test_unittest_db.MyTest testMethod=test_method2> self = <test_unittest_db.MyTest testMethod=test_method2>
def test_method2(self): def test_method2(self):
> assert 0, self.db # fail for demo purposes > assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.db_class.<locals>.DummyDB object at 0x7f97382031d0> E AssertionError: <conftest.DummyDB instance at 0xdeadbeef>
E assert 0 E assert 0
test_unittest_db.py:12: AssertionError test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.04 seconds ========================= ======= 2 failed in 0.12 seconds ========
This default pytest traceback shows that the two test methods This default pytest traceback shows that the two test methods
share the same ``self.db`` instance which was our intention share the same ``self.db`` instance which was our intention
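The class-scoped fixture wiring implied by the traceback would look roughly like this (``DummyDB``, ``db_class``, and the test method bodies are taken from the output above; the rest is a sketch)::

    # conftest.py
    import pytest

    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB(object):
            pass
        # one instance per test class, shared by all of its test methods
        request.cls.db = DummyDB()

    # test_unittest_db.py
    import unittest
    import pytest

    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")
            assert 0, self.db  # fail for demo purposes

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes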
@ -163,7 +163,7 @@ Running this test module ...::
$ py.test -q test_unittest_cleandir.py $ py.test -q test_unittest_cleandir.py
. .
1 passed in 0.25 seconds 1 passed in 0.12 seconds
... gives us one passed test because the ``initdir`` fixture function ... gives us one passed test because the ``initdir`` fixture function
was executed ahead of the ``test_method``. was executed ahead of the ``test_method``.
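A plausible shape for ``test_unittest_cleandir.py`` (the ``initdir`` and ``test_method`` names come from the surrounding prose; the bodies are assumptions)::

    import pytest
    import unittest

    class MyTest(unittest.TestCase):
        @pytest.fixture(autouse=True)
        def initdir(self, tmpdir):
            tmpdir.chdir()    # run each test in a fresh temporary directory
            tmpdir.join("samplefile.ini").write("# testdata")

        def test_method(self):
            s = open("samplefile.ini").read()
            assert "testdata" in s

Because the fixture is ``autouse=True``, it runs before ``test_method`` without being named in the test's signature.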
View File
@ -43,7 +43,7 @@ Let's run it with output capturing disabled::
test called test called
.teardown after yield .teardown after yield
1 passed in 0.01 seconds 1 passed in 0.12 seconds
We can also seamlessly use the new syntax with ``with`` statements. We can also seamlessly use the new syntax with ``with`` statements.
Let's simplify the above ``passwd`` fixture:: Let's simplify the above ``passwd`` fixture::
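    # a sketch of the simplified fixture -- the diff truncates before the
    # actual version; pytest 2.x spelled the decorator "yield_fixture"
    import pytest

    @pytest.yield_fixture
    def passwd():
        with open("/etc/passwd") as f:
            yield f.readlines()

The ``with`` block closes the file during fixture finalization, so no explicit teardown code after the ``yield`` is needed.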
View File
@ -1,2 +1,3 @@
sphinx==1.2.3 sphinx==1.2.3
regendoc regendoc
pyyaml