Regendoc again

commit d7465895d0
parent 01151ff566
@@ -27,17 +27,17 @@ you will see the return value of the function call:
 .. code-block:: pytest

     $ pytest test_assert1.py
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 1 item

-    test_assert1.py F                                                               [100%]
+    test_assert1.py F                                                     [100%]

-    ====================================== FAILURES ======================================
-    ___________________________________ test_function ____________________________________
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________

        def test_function():
    >       assert f() == 4
@@ -45,7 +45,7 @@ you will see the return value of the function call:
     E        +  where 3 = f()

     test_assert1.py:5: AssertionError
-    ============================== 1 failed in 0.12 seconds ==============================
+    ========================= 1 failed in 0.12 seconds =========================

 ``pytest`` has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary
@@ -173,17 +173,17 @@ if you run this module:
 .. code-block:: pytest

     $ pytest test_assert2.py
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 1 item

-    test_assert2.py F                                                               [100%]
+    test_assert2.py F                                                     [100%]

-    ====================================== FAILURES ======================================
-    ________________________________ test_set_comparison _________________________________
+    ================================= FAILURES =================================
+    ___________________________ test_set_comparison ____________________________

        def test_set_comparison():
            set1 = set("1308")
@@ -197,7 +197,7 @@ if you run this module:
     E         Use -v to get the full diff

     test_assert2.py:5: AssertionError
-    ============================== 1 failed in 0.12 seconds ==============================
+    ========================= 1 failed in 0.12 seconds =========================

 Special comparisons are done for a number of cases:

@@ -247,9 +247,9 @@ the conftest file:
 .. code-block:: pytest

     $ pytest -q test_foocompare.py
-    F                                                                               [100%]
-    ====================================== FAILURES ======================================
-    ____________________________________ test_compare ____________________________________
+    F                                                                     [100%]
+    ================================= FAILURES =================================
+    _______________________________ test_compare _______________________________

        def test_compare():
            f1 = Foo(1)

@@ -48,9 +48,9 @@ If you run this for the first time you will see two failures:
 .. code-block:: pytest

     $ pytest -q
-    .................F.......F........................                              [100%]
-    ====================================== FAILURES ======================================
-    ____________________________________ test_num[17] ____________________________________
+    .................F.......F........................                    [100%]
+    ================================= FAILURES =================================
+    _______________________________ test_num[17] _______________________________

     i = 17

@@ -61,7 +61,7 @@ If you run this for the first time you will see two failures:
     E   Failed: bad luck

     test_50.py:6: Failed
-    ____________________________________ test_num[25] ____________________________________
+    _______________________________ test_num[25] _______________________________

     i = 25

@@ -79,7 +79,7 @@ If you then run it with ``--lf``:
 .. code-block:: pytest

     $ pytest --lf
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -87,10 +87,10 @@ If you then run it with ``--lf``:
     collected 50 items / 48 deselected
     run-last-failure: rerun previous 2 failures

-    test_50.py FF                                                                   [100%]
+    test_50.py FF                                                         [100%]

-    ====================================== FAILURES ======================================
-    ____________________________________ test_num[17] ____________________________________
+    ================================= FAILURES =================================
+    _______________________________ test_num[17] _______________________________

     i = 17

@@ -101,7 +101,7 @@ If you then run it with ``--lf``:
     E   Failed: bad luck

     test_50.py:6: Failed
-    ____________________________________ test_num[25] ____________________________________
+    _______________________________ test_num[25] _______________________________

     i = 25

@@ -112,7 +112,7 @@ If you then run it with ``--lf``:
     E   Failed: bad luck

     test_50.py:6: Failed
-    ====================== 2 failed, 48 deselected in 0.12 seconds =======================
+    ================= 2 failed, 48 deselected in 0.12 seconds ==================

 You have run only the two failing test from the last run, while 48 tests have
 not been run ("deselected").
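`--lf` works off the `cache/lastfailed` entry that pytest's cache plugin writes under the rootdir (`pytest --cache-show`, further down in this commit, prints the same key). A sketch of that bookkeeping, using a temporary directory as a stand-in rootdir; the on-disk layout shown is the default cache layout, and the recorded node ids are taken from the failures above:

```python
# How --lf knows what to rerun: failed node ids are stored as JSON under
# <rootdir>/.pytest_cache/v/cache/lastfailed (default cache layout).
import json
import tempfile
from pathlib import Path

rootdir = Path(tempfile.mkdtemp())  # stand-in for the real rootdir
lastfailed_file = rootdir / ".pytest_cache" / "v" / "cache" / "lastfailed"

# What a run with the two failures above would record.
lastfailed = {
    "test_50.py::test_num[17]": True,
    "test_50.py::test_num[25]": True,
}
lastfailed_file.parent.mkdir(parents=True, exist_ok=True)
lastfailed_file.write_text(json.dumps(lastfailed))

# A later `pytest --lf` run reads this back and deselects everything else.
recorded = json.loads(lastfailed_file.read_text())
```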
@@ -124,7 +124,7 @@ of ``FF`` and dots):
 .. code-block:: pytest

     $ pytest --ff
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -132,10 +132,10 @@ of ``FF`` and dots):
     collected 50 items
     run-last-failure: rerun previous 2 failures first

-    test_50.py FF................................................                   [100%]
+    test_50.py FF................................................         [100%]

-    ====================================== FAILURES ======================================
-    ____________________________________ test_num[17] ____________________________________
+    ================================= FAILURES =================================
+    _______________________________ test_num[17] _______________________________

     i = 17

@@ -146,7 +146,7 @@ of ``FF`` and dots):
     E   Failed: bad luck

     test_50.py:6: Failed
-    ____________________________________ test_num[25] ____________________________________
+    _______________________________ test_num[25] _______________________________

     i = 25

@@ -157,7 +157,7 @@ of ``FF`` and dots):
     E   Failed: bad luck

     test_50.py:6: Failed
-    ======================== 2 failed, 48 passed in 0.12 seconds =========================
+    =================== 2 failed, 48 passed in 0.12 seconds ====================

 .. _`config.cache`:

@@ -209,9 +209,9 @@ If you run this command for the first time, you can see the print statement:
 .. code-block:: pytest

     $ pytest -q
-    F                                                                               [100%]
-    ====================================== FAILURES ======================================
-    ___________________________________ test_function ____________________________________
+    F                                                                     [100%]
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________

     mydata = 42

@@ -220,7 +220,7 @@ If you run this command for the first time, you can see the print statement:
     E   assert 42 == 23

     test_caching.py:17: AssertionError
-    ------------------------------- Captured stdout setup --------------------------------
+    -------------------------- Captured stdout setup ---------------------------
     running expensive computation...
     1 failed in 0.12 seconds

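The fixture producing this output caches a computed value, so the print fires only on the first run. The real example uses pytest's `config.cache` API (`cache.get(key, default)` / `cache.set(key, value)`); the stand-in class below mirrors those signatures so the sketch runs without pytest:

```python
class FakeCache:
    """Stand-in for request.config.cache, with the same get/set signatures."""

    def __init__(self):
        self._store = {}

    def get(self, key, default):
        return self._store.get(key, default)

    def set(self, key, value):
        self._store[key] = value


def mydata(cache):
    # In the real example this is a pytest fixture receiving `request`
    # and using request.config.cache instead of the stand-in.
    val = cache.get("example/value", None)
    if val is None:
        print("running expensive computation...")
        val = 42  # placeholder for the expensive computation
        cache.set("example/value", val)
    return val


cache = FakeCache()
first = mydata(cache)   # computes, prints, and stores the value
second = mydata(cache)  # served from the cache, nothing printed
```

The failing `assert 42 == 23` in the hunk is the test body comparing this cached value against 23.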
@@ -230,9 +230,9 @@ the cache and nothing will be printed:
 .. code-block:: pytest

     $ pytest -q
-    F                                                                               [100%]
-    ====================================== FAILURES ======================================
-    ___________________________________ test_function ____________________________________
+    F                                                                     [100%]
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________

     mydata = 42

@@ -255,13 +255,13 @@ You can always peek at the content of the cache using the
 .. code-block:: pytest

     $ pytest --cache-show
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     cachedir: $REGENDOC_TMPDIR/.pytest_cache
-    ------------------------------------ cache values ------------------------------------
+    ------------------------------- cache values -------------------------------
     cache/lastfailed contains:
       {'test_caching.py::test_function': True}
     cache/nodeids contains:
@@ -271,7 +271,7 @@ You can always peek at the content of the cache using the
     example/value contains:
       42

-    ============================ no tests ran in 0.12 seconds ============================
+    ======================= no tests ran in 0.12 seconds =======================

 Clearing Cache content
 -------------------------------

@@ -66,26 +66,26 @@ of the failing function and hide the other one:
 .. code-block:: pytest

     $ pytest
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 2 items

-    test_module.py .F                                                               [100%]
+    test_module.py .F                                                     [100%]

-    ====================================== FAILURES ======================================
-    _____________________________________ test_func2 _____________________________________
+    ================================= FAILURES =================================
+    ________________________________ test_func2 ________________________________

        def test_func2():
    >       assert False
    E       assert False

     test_module.py:9: AssertionError
-    ------------------------------- Captured stdout setup --------------------------------
+    -------------------------- Captured stdout setup ---------------------------
     setting up <function test_func2 at 0xdeadbeef>
-    ========================= 1 failed, 1 passed in 0.12 seconds =========================
+    ==================== 1 failed, 1 passed in 0.12 seconds ====================

 Accessing captured output from a test function
 ---------------------------------------------------

@@ -63,16 +63,16 @@ then you can just invoke ``pytest`` without command line options:
 .. code-block:: pytest

     $ pytest
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     plugins: hypothesis-3.x.y
     collected 1 item

-    mymodule.py .                                                                   [100%]
+    mymodule.py .                                                         [100%]

-    ============================== 1 passed in 0.12 seconds ==============================
+    ========================= 1 passed in 0.12 seconds =========================

 It is possible to use fixtures using the ``getfixture`` helper::

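The `mymodule.py` collected here is a module whose docstrings carry doctests; with doctest collection enabled through the ini file (note `inifile: pytest.ini` in the output), pytest runs them as test items. A sketch of such a module — the function name and return value are assumptions, not taken from the diff:

```python
# mymodule.py -- hypothetical module with a doctest in a docstring.
def something():
    """A simple doctest in a docstring.

    >>> something()
    42
    """
    return 42
```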
@@ -32,7 +32,7 @@ You can then restrict a test run to only run tests marked with ``webtest``:
 .. code-block:: pytest

     $ pytest -v -m webtest
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -40,16 +40,16 @@ You can then restrict a test run to only run tests marked with ``webtest``:
     plugins: hypothesis-3.x.y
     collecting ... collected 4 items / 3 deselected

-    test_server.py::test_send_http PASSED                                           [100%]
+    test_server.py::test_send_http PASSED                                 [100%]

-    ======================= 1 passed, 3 deselected in 0.12 seconds =======================
+    ================== 1 passed, 3 deselected in 0.12 seconds ==================

 Or the inverse, running all tests except the webtest ones:

 .. code-block:: pytest

     $ pytest -v -m "not webtest"
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -57,11 +57,11 @@ Or the inverse, running all tests except the webtest ones:
     plugins: hypothesis-3.x.y
     collecting ... collected 4 items / 1 deselected

-    test_server.py::test_something_quick PASSED                               [ 33%]
-    test_server.py::test_another PASSED                                       [ 66%]
-    test_server.py::TestClass::test_method PASSED                             [100%]
+    test_server.py::test_something_quick PASSED                     [ 33%]
+    test_server.py::test_another PASSED                             [ 66%]
+    test_server.py::TestClass::test_method PASSED                   [100%]

-    ======================= 3 passed, 1 deselected in 0.12 seconds =======================
+    ================== 3 passed, 1 deselected in 0.12 seconds ==================

 Selecting tests based on their node ID
 --------------------------------------
@@ -73,7 +73,7 @@ tests based on their module, class, method, or function name:
 .. code-block:: pytest

     $ pytest -v test_server.py::TestClass::test_method
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -81,16 +81,16 @@ tests based on their module, class, method, or function name:
     plugins: hypothesis-3.x.y
     collecting ... collected 1 item

-    test_server.py::TestClass::test_method PASSED                                   [100%]
+    test_server.py::TestClass::test_method PASSED                         [100%]

-    ============================== 1 passed in 0.12 seconds ==============================
+    ========================= 1 passed in 0.12 seconds =========================

 You can also select on the class:

 .. code-block:: pytest

     $ pytest -v test_server.py::TestClass
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -98,16 +98,16 @@ You can also select on the class:
     plugins: hypothesis-3.x.y
     collecting ... collected 1 item

-    test_server.py::TestClass::test_method PASSED                                   [100%]
+    test_server.py::TestClass::test_method PASSED                         [100%]

-    ============================== 1 passed in 0.12 seconds ==============================
+    ========================= 1 passed in 0.12 seconds =========================

 Or select multiple nodes:

 .. code-block:: pytest

     $ pytest -v test_server.py::TestClass test_server.py::test_send_http
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -115,10 +115,10 @@ Or select multiple nodes:
     plugins: hypothesis-3.x.y
     collecting ... collected 2 items

-    test_server.py::TestClass::test_method PASSED                             [ 50%]
-    test_server.py::test_send_http PASSED                                     [100%]
+    test_server.py::TestClass::test_method PASSED                   [ 50%]
+    test_server.py::test_send_http PASSED                           [100%]

-    ============================== 2 passed in 0.12 seconds ==============================
+    ========================= 2 passed in 0.12 seconds =========================

 .. _node-id:

@@ -149,7 +149,7 @@ select tests based on their names:
 .. code-block:: pytest

     $ pytest -v -k http # running with the above defined example module
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -157,16 +157,16 @@ select tests based on their names:
     plugins: hypothesis-3.x.y
     collecting ... collected 4 items / 3 deselected

-    test_server.py::test_send_http PASSED                                           [100%]
+    test_server.py::test_send_http PASSED                                 [100%]

-    ======================= 1 passed, 3 deselected in 0.12 seconds =======================
+    ================== 1 passed, 3 deselected in 0.12 seconds ==================

 And you can also run all tests except the ones that match the keyword:

 .. code-block:: pytest

     $ pytest -k "not send_http" -v
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -174,18 +174,18 @@ And you can also run all tests except the ones that match the keyword:
     plugins: hypothesis-3.x.y
     collecting ... collected 4 items / 1 deselected

-    test_server.py::test_something_quick PASSED                               [ 33%]
-    test_server.py::test_another PASSED                                       [ 66%]
-    test_server.py::TestClass::test_method PASSED                             [100%]
+    test_server.py::test_something_quick PASSED                     [ 33%]
+    test_server.py::test_another PASSED                             [ 66%]
+    test_server.py::TestClass::test_method PASSED                   [100%]

-    ======================= 3 passed, 1 deselected in 0.12 seconds =======================
+    ================== 3 passed, 1 deselected in 0.12 seconds ==================

 Or to select "http" and "quick" tests:

 .. code-block:: pytest

     $ pytest -k "http or quick" -v
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
     cachedir: .pytest_cache
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -193,10 +193,10 @@ Or to select "http" and "quick" tests:
     plugins: hypothesis-3.x.y
     collecting ... collected 4 items / 2 deselected

-    test_server.py::test_send_http PASSED                                     [ 50%]
-    test_server.py::test_something_quick PASSED                               [100%]
+    test_server.py::test_send_http PASSED                           [ 50%]
+    test_server.py::test_something_quick PASSED                     [100%]

-    ======================= 2 passed, 2 deselected in 0.12 seconds =======================
+    ================== 2 passed, 2 deselected in 0.12 seconds ==================

 .. note::

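The `-k` expressions used above (`http`, `not send_http`, `http or quick`) are evaluated per test as boolean expressions over name matches. The toy evaluator below illustrates those selection semantics on the node ids from this run — it is an illustration only, not pytest's actual implementation, and it models a bare word simply as a substring match:

```python
import re


def k_selects(expr, name):
    """Toy model of -k: each bare word matches iff it is a substring of the
    test's node id; and/or/not combine as in Python expressions."""
    py = re.sub(
        r"\b(?!and\b|or\b|not\b)\w+\b",
        lambda m: str(m.group(0) in name),
        expr,
    )
    return eval(py)  # fine for a toy model with known inputs


names = [
    "test_server.py::test_send_http",
    "test_server.py::test_something_quick",
    "test_server.py::test_another",
    "test_server.py::TestClass::test_method",
]
matching = [n for n in names if k_selects("http or quick", n)]
```

On these four node ids, `"http or quick"` selects two tests and deselects two, mirroring the `2 passed, 2 deselected` summary above.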
@@ -381,32 +381,32 @@ the test needs:
 .. code-block:: pytest

     $ pytest -E stage2
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 1 item

-    test_someenv.py s                                                               [100%]
+    test_someenv.py s                                                     [100%]

-    ============================= 1 skipped in 0.12 seconds ==============================
+    ======================== 1 skipped in 0.12 seconds =========================

 and here is one that specifies exactly the environment needed:

 .. code-block:: pytest

     $ pytest -E stage1
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 1 item

-    test_someenv.py .                                                               [100%]
+    test_someenv.py .                                                     [100%]

-    ============================== 1 passed in 0.12 seconds ==============================
+    ========================= 1 passed in 0.12 seconds =========================

 The ``--markers`` option always gives you a list of available markers::

@@ -568,34 +568,34 @@ then you will see two tests skipped and two executed tests as expected:
 .. code-block:: pytest

     $ pytest -rs # this option reports skip reasons
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 4 items

-    test_plat.py s.s.                                                               [100%]
-    ============================== short test summary info ===============================
+    test_plat.py s.s.                                                     [100%]
+    ========================= short test summary info ==========================
     SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux

-    ======================== 2 passed, 2 skipped in 0.12 seconds =========================
+    =================== 2 passed, 2 skipped in 0.12 seconds ====================

 Note that if you specify a platform via the marker-command line option like this:

 .. code-block:: pytest

     $ pytest -m linux
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 4 items / 3 deselected

-    test_plat.py .                                                                  [100%]
+    test_plat.py .                                                        [100%]

-    ======================= 1 passed, 3 deselected in 0.12 seconds =======================
+    ================== 1 passed, 3 deselected in 0.12 seconds ==================

 then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.

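The skipping shown above comes from a `conftest.py` hook that compares a test's platform marks against `sys.platform` (that is what the `SKIP [2] ... conftest.py:12: cannot run on platform linux` line reports). Below, that decision logic is extracted as a pure function; the real example implements it inside a `pytest_runtest_setup` hook reading marks via `item.iter_markers()`, and the set of platform names is an assumption matching the docs example:

```python
import sys

# Platform names a test may be marked with (assumed set, per the example).
ALL_PLATFORMS = {"darwin", "linux", "win32"}


def should_skip(mark_names, plat=None):
    """Skip when the test carries at least one platform mark and none of
    them matches the platform we are running on; unmarked tests never skip."""
    plat = plat or sys.platform
    supported = ALL_PLATFORMS & set(mark_names)
    return bool(supported) and plat not in supported
```

On linux this skips the two `win32`/`darwin`-marked tests and runs the rest, producing the `s.s.` progress line above.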
@@ -641,51 +641,51 @@ We can now use the ``-m option`` to select one set:
 .. code-block:: pytest

     $ pytest -m interface --tb=short
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 4 items / 2 deselected

-    test_module.py FF                                                               [100%]
+    test_module.py FF                                                     [100%]

-    ====================================== FAILURES ======================================
-    _______________________________ test_interface_simple ________________________________
+    ================================= FAILURES =================================
+    __________________________ test_interface_simple ___________________________
     test_module.py:3: in test_interface_simple
         assert 0
     E   assert 0
-    _______________________________ test_interface_complex _______________________________
+    __________________________ test_interface_complex __________________________
     test_module.py:6: in test_interface_complex
         assert 0
     E   assert 0
-    ======================= 2 failed, 2 deselected in 0.12 seconds =======================
+    ================== 2 failed, 2 deselected in 0.12 seconds ==================

 or to select both "event" and "interface" tests:

 .. code-block:: pytest

     $ pytest -m "interface or event" --tb=short
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR, inifile:
     plugins: hypothesis-3.x.y
     collected 4 items / 1 deselected

-    test_module.py FFF                                                              [100%]
+    test_module.py FFF                                                    [100%]

-    ====================================== FAILURES ======================================
-    _______________________________ test_interface_simple ________________________________
+    ================================= FAILURES =================================
+    __________________________ test_interface_simple ___________________________
     test_module.py:3: in test_interface_simple
         assert 0
     E   assert 0
-    _______________________________ test_interface_complex _______________________________
+    __________________________ test_interface_complex __________________________
     test_module.py:6: in test_interface_complex
         assert 0
     E   assert 0
-    _________________________________ test_event_simple __________________________________
+    ____________________________ test_event_simple _____________________________
     test_module.py:9: in test_event_simple
         assert 0
     E   assert 0
-    ======================= 3 failed, 1 deselected in 0.12 seconds =======================
+    ================== 3 failed, 1 deselected in 0.12 seconds ==================

@@ -28,21 +28,21 @@ now execute the test specification:
 .. code-block:: pytest

     nonpython $ pytest test_simple.yml
-    ================================ test session starts =================================
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/nonpython/.hypothesis/examples')
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     plugins: hypothesis-3.x.y
     collected 2 items

-    test_simple.yml F.                                                              [100%]
+    test_simple.yml F.                                                    [100%]

-    ====================================== FAILURES ======================================
-    ___________________________________ usecase: hello ___________________________________
+    ================================= FAILURES =================================
+    ______________________________ usecase: hello ______________________________
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ========================= 1 failed, 1 passed in 0.12 seconds =========================
+    ==================== 1 failed, 1 passed in 0.12 seconds ====================

 .. regendoc:wipe

@@ -64,7 +64,7 @@ consulted when reporting in ``verbose`` mode:
.. code-block:: pytest

nonpython $ pytest -v
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/nonpython/.hypothesis/examples')
@@ -72,15 +72,15 @@ consulted when reporting in ``verbose`` mode:
plugins: hypothesis-3.x.y
collecting ... collected 2 items

test_simple.yml::hello FAILED [ 50%]
test_simple.yml::ok PASSED [100%]
test_simple.yml::hello FAILED [ 50%]
test_simple.yml::ok PASSED [100%]

====================================== FAILURES ======================================
___________________________________ usecase: hello ___________________________________
================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= 1 failed, 1 passed in 0.12 seconds =========================
==================== 1 failed, 1 passed in 0.12 seconds ====================

.. regendoc:wipe

@@ -90,7 +90,7 @@ interesting to just look at the collection tree:
.. code-block:: pytest

nonpython $ pytest --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/nonpython/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
@@ -101,4 +101,4 @@ interesting to just look at the collection tree:
<YamlItem hello>
<YamlItem ok>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

@@ -47,7 +47,7 @@ This means that we only run 2 tests if we do not pass ``--all``:
.. code-block:: pytest

$ pytest -q test_compute.py
.. [100%]
.. [100%]
2 passed in 0.12 seconds

We run only two computations, so we see two dots.
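The two dots versus the full run come from a ``conftest.py`` hook that widens the parametrize range only when ``--all`` is given. The example code itself is not part of this diff, so here is a sketch reconstructed from the transcript (``test_compute``, ``param1`` and ``--all`` appear in the output above; the helper name ``param1_values`` is our own):

```python
# conftest.py (sketch): an --all option that widens the parametrize range
def pytest_addoption(parser):
    parser.addoption("--all", action="store_true", help="run all combinations")


def param1_values(run_all):
    # two quick values by default, the full range of five with --all
    return list(range(5 if run_all else 2))


def pytest_generate_tests(metafunc):
    if "param1" in metafunc.fixturenames:
        metafunc.parametrize("param1", param1_values(metafunc.config.getoption("all")))
```

With ``--all`` the extra ``test_compute[4]`` item is generated, which is the parametrized case seen failing in the ``$ pytest -q --all`` transcript.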
@@ -56,9 +56,9 @@ let's run the full monty:
.. code-block:: pytest

$ pytest -q --all
....F [100%]
====================================== FAILURES ======================================
__________________________________ test_compute[4] ___________________________________
....F [100%]
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________

param1 = 4

@@ -143,7 +143,7 @@ objects, they are still using the default pytest representation:
.. code-block:: pytest

$ pytest test_time.py --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -159,7 +159,7 @@ objects, they are still using the default pytest representation:
<Function test_timedistance_v3[forward]>
<Function test_timedistance_v3[backward]>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
together with the actual data, instead of listing them separately.
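The ``pytest.param`` usage referred to above can be sketched as follows. This is a reconstruction guided only by the IDs ``forward``/``backward`` in the collection output; the concrete datetime values are assumptions for illustration:

```python
from datetime import datetime, timedelta

import pytest


@pytest.mark.parametrize(
    "a, b, expected",
    [
        # the id= keyword attaches the test ID directly to its data row
        pytest.param(
            datetime(2001, 12, 12), datetime(2001, 12, 11),
            timedelta(1), id="forward"
        ),
        pytest.param(
            datetime(2001, 12, 11), datetime(2001, 12, 12),
            timedelta(-1), id="backward"
        ),
    ],
)
def test_timedistance_v3(a, b, expected):
    diff = a - b
    assert diff == expected
```

Collected as ``test_timedistance_v3[forward]`` and ``test_timedistance_v3[backward]``, matching the tree shown above.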
@@ -203,23 +203,23 @@ this is a fully self-contained example which you can run with:
.. code-block:: pytest

$ pytest test_scenarios.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 4 items

test_scenarios.py .... [100%]
test_scenarios.py .... [100%]

============================== 4 passed in 0.12 seconds ==============================
========================= 4 passed in 0.12 seconds =========================

If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:

.. code-block:: pytest

$ pytest --collect-only test_scenarios.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -232,7 +232,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

Note that we told ``metafunc.parametrize()`` that your scenario values
should be considered class-scoped. With pytest-2.3 this leads to a
@@ -287,7 +287,7 @@ Let's first see how it looks like at collection time:
.. code-block:: pytest

$ pytest test_backends.py --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -297,16 +297,16 @@ Let's first see how it looks like at collection time:
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

And then when we run the test:

.. code-block:: pytest

$ pytest -q test_backends.py
.F [100%]
====================================== FAILURES ======================================
______________________________ test_db_initialized[d2] _______________________________
.F [100%]
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________

db = <conftest.DB2 object at 0xdeadbeef>

@@ -354,7 +354,7 @@ The result of this test will be successful:
.. code-block:: pytest

$ pytest test_indirect_list.py --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -363,7 +363,7 @@ The result of this test will be successful:
<Module test_indirect_list.py>
<Function test_indirect[a-b]>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe

@@ -407,9 +407,9 @@ argument sets to use for each test function. Let's run it:
.. code-block:: pytest

$ pytest -q
F.. [100%]
====================================== FAILURES ======================================
_____________________________ TestClass.test_equals[1-2] _____________________________
F.. [100%]
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________

self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2

@@ -439,8 +439,8 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest

. $ pytest -rs -q multipython.py
...sss...sssssssss...sss... [100%]
============================== short test summary info ===============================
...sss...sssssssss...sss... [100%]
========================= short test summary info ==========================
SKIP [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.4' not found
12 passed, 15 skipped in 0.12 seconds

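The ``SKIP`` lines in the summary above typically come from a fixture that probes for each interpreter and calls ``pytest.skip`` when it is absent. A sketch under stated assumptions (the interpreter list is inferred from the ``'python3.4' not found`` message; ``find_python`` is a helper name of our own):

```python
import shutil

import pytest


def find_python(name):
    # full path to the named interpreter, or None when it is not installed
    return shutil.which(name)


@pytest.fixture(params=["python3.4", "python3.5", "python3.6"])
def python1(request):
    if find_python(request.param) is None:
        pytest.skip("%r not found" % request.param)  # shows up under -rs
    return request.param
```

Every test using ``python1`` then runs once per interpreter, and missing interpreters turn into the ``s`` results counted in the summary.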
@@ -490,18 +490,18 @@ If you run this with reporting for skips enabled:
.. code-block:: pytest

$ pytest -rs test_module.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 2 items

test_module.py .s [100%]
============================== short test summary info ===============================
test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'

======================== 1 passed, 1 skipped in 0.12 seconds =========================
=================== 1 passed, 1 skipped in 0.12 seconds ====================

You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
@@ -549,7 +549,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
.. code-block:: pytest

$ pytest -v -m basic
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -557,11 +557,11 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
plugins: hypothesis-3.x.y
collecting ... collected 17 items / 14 deselected

test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] xfail [100%]
test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] xfail [100%]

================= 2 passed, 14 deselected, 1 xfailed in 0.12 seconds =================
============ 2 passed, 14 deselected, 1 xfailed in 0.12 seconds ============

As the result:

@@ -130,7 +130,7 @@ The test collection would look like this:
.. code-block:: pytest

$ pytest --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
@@ -141,7 +141,7 @@ The test collection would look like this:
<Function simple_check>
<Function complex_check>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

You can check for multiple glob patterns by adding a space between the patterns::

@@ -187,7 +187,7 @@ You can always peek at the collection tree without running tests like this:
.. code-block:: pytest

. $ pytest --collect-only pythoncollection.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/CWD/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
@@ -199,7 +199,7 @@ You can always peek at the collection tree without running tests like this:
<Function test_method>
<Function test_anothermethod>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

.. _customizing-test-collection:

@@ -261,11 +261,11 @@ file will be left out:
.. code-block:: pytest

$ pytest --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
plugins: hypothesis-3.x.y
collected 0 items

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

@@ -12,17 +12,17 @@ get on the terminal - we are working on that):
.. code-block:: pytest

assertion $ pytest failure_demo.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/assertion/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
plugins: hypothesis-3.x.y
collected 44 items

failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%]
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%]

====================================== FAILURES ======================================
________________________________ test_generative[3-6] ________________________________
================================= FAILURES =================================
___________________________ test_generative[3-6] ___________________________

param1 = 3, param2 = 6

@@ -32,7 +32,7 @@ get on the terminal - we are working on that):
E assert (3 * 2) < 6

failure_demo.py:22: AssertionError
______________________________ TestFailing.test_simple _______________________________
_________________________ TestFailing.test_simple __________________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

@@ -49,7 +49,7 @@ get on the terminal - we are working on that):
E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()

failure_demo.py:33: AssertionError
_________________________ TestFailing.test_simple_multiline __________________________
____________________ TestFailing.test_simple_multiline _____________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

@@ -57,7 +57,7 @@ get on the terminal - we are working on that):
> otherfunc_multi(42, 6 * 9)

failure_demo.py:36:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 42, b = 54

@@ -66,7 +66,7 @@ get on the terminal - we are working on that):
E assert 42 == 54

failure_demo.py:17: AssertionError
________________________________ TestFailing.test_not ________________________________
___________________________ TestFailing.test_not ___________________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

@@ -79,7 +79,7 @@ get on the terminal - we are working on that):
E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()

failure_demo.py:42: AssertionError
______________________ TestSpecialisedExplanations.test_eq_text ______________________
_________________ TestSpecialisedExplanations.test_eq_text _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -90,7 +90,7 @@ get on the terminal - we are working on that):
E + eggs

failure_demo.py:47: AssertionError
__________________ TestSpecialisedExplanations.test_eq_similar_text __________________
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -103,7 +103,7 @@ get on the terminal - we are working on that):
E ? ^

failure_demo.py:50: AssertionError
_________________ TestSpecialisedExplanations.test_eq_multiline_text _________________
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -116,7 +116,7 @@ get on the terminal - we are working on that):
E bar

failure_demo.py:53: AssertionError
___________________ TestSpecialisedExplanations.test_eq_long_text ____________________
______________ TestSpecialisedExplanations.test_eq_long_text _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -133,7 +133,7 @@ get on the terminal - we are working on that):
E ? ^

failure_demo.py:58: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text_multiline _______________
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -153,7 +153,7 @@ get on the terminal - we are working on that):
E ...Full output truncated (7 lines hidden), use '-vv' to show

failure_demo.py:63: AssertionError
______________________ TestSpecialisedExplanations.test_eq_list ______________________
_________________ TestSpecialisedExplanations.test_eq_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -164,7 +164,7 @@ get on the terminal - we are working on that):
E Use -v to get the full diff

failure_demo.py:66: AssertionError
___________________ TestSpecialisedExplanations.test_eq_list_long ____________________
______________ TestSpecialisedExplanations.test_eq_list_long _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -177,7 +177,7 @@ get on the terminal - we are working on that):
E Use -v to get the full diff

failure_demo.py:71: AssertionError
______________________ TestSpecialisedExplanations.test_eq_dict ______________________
_________________ TestSpecialisedExplanations.test_eq_dict _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -195,7 +195,7 @@ get on the terminal - we are working on that):
E ...Full output truncated (2 lines hidden), use '-vv' to show

failure_demo.py:74: AssertionError
______________________ TestSpecialisedExplanations.test_eq_set _______________________
_________________ TestSpecialisedExplanations.test_eq_set __________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -213,7 +213,7 @@ get on the terminal - we are working on that):
E ...Full output truncated (2 lines hidden), use '-vv' to show

failure_demo.py:77: AssertionError
__________________ TestSpecialisedExplanations.test_eq_longer_list ___________________
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -224,7 +224,7 @@ get on the terminal - we are working on that):
E Use -v to get the full diff

failure_demo.py:80: AssertionError
______________________ TestSpecialisedExplanations.test_in_list ______________________
_________________ TestSpecialisedExplanations.test_in_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -233,7 +233,7 @@ get on the terminal - we are working on that):
E assert 1 in [0, 2, 3, 4, 5]

failure_demo.py:83: AssertionError
_______________ TestSpecialisedExplanations.test_not_in_text_multiline _______________
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -252,7 +252,7 @@ get on the terminal - we are working on that):
E ...Full output truncated (2 lines hidden), use '-vv' to show

failure_demo.py:87: AssertionError
________________ TestSpecialisedExplanations.test_not_in_text_single _________________
___________ TestSpecialisedExplanations.test_not_in_text_single ____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -265,7 +265,7 @@ get on the terminal - we are working on that):
E ? +++

failure_demo.py:91: AssertionError
______________ TestSpecialisedExplanations.test_not_in_text_single_long ______________
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -278,7 +278,7 @@ get on the terminal - we are working on that):
E ? +++

failure_demo.py:95: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single_long_term ____________
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -291,7 +291,7 @@ get on the terminal - we are working on that):
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

failure_demo.py:99: AssertionError
___________________ TestSpecialisedExplanations.test_eq_dataclass ____________________
______________ TestSpecialisedExplanations.test_eq_dataclass _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -312,7 +312,7 @@ get on the terminal - we are working on that):
E b: 'b' != 'c'

failure_demo.py:111: AssertionError
_____________________ TestSpecialisedExplanations.test_eq_attrs ______________________
________________ TestSpecialisedExplanations.test_eq_attrs _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -333,7 +333,7 @@ get on the terminal - we are working on that):
E b: 'b' != 'c'

failure_demo.py:123: AssertionError
___________________________________ test_attribute ___________________________________
______________________________ test_attribute ______________________________

def test_attribute():
class Foo(object):
@@ -345,7 +345,7 @@ get on the terminal - we are working on that):
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b

failure_demo.py:131: AssertionError
______________________________ test_attribute_instance _______________________________
_________________________ test_attribute_instance __________________________

def test_attribute_instance():
class Foo(object):
@@ -357,7 +357,7 @@ get on the terminal - we are working on that):
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()

failure_demo.py:138: AssertionError
_______________________________ test_attribute_failure _______________________________
__________________________ test_attribute_failure __________________________

def test_attribute_failure():
class Foo(object):
@@ -370,7 +370,7 @@ get on the terminal - we are working on that):
> assert i.b == 2

failure_demo.py:149:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>

@@ -379,7 +379,7 @@ get on the terminal - we are working on that):
E Exception: Failed to get attrib

failure_demo.py:144: Exception
______________________________ test_attribute_multiple _______________________________
_________________________ test_attribute_multiple __________________________

def test_attribute_multiple():
class Foo(object):
@@ -396,7 +396,7 @@ get on the terminal - we are working on that):
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()

failure_demo.py:159: AssertionError
_______________________________ TestRaises.test_raises _______________________________
__________________________ TestRaises.test_raises __________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -406,7 +406,7 @@ get on the terminal - we are working on that):
E ValueError: invalid literal for int() with base 10: 'qwe'

failure_demo.py:169: ValueError
___________________________ TestRaises.test_raises_doesnt ____________________________
______________________ TestRaises.test_raises_doesnt _______________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -415,7 +415,7 @@ get on the terminal - we are working on that):
E Failed: DID NOT RAISE <class 'OSError'>

failure_demo.py:172: Failed
_______________________________ TestRaises.test_raise ________________________________
__________________________ TestRaises.test_raise ___________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -424,7 +424,7 @@ get on the terminal - we are working on that):
E ValueError: demo error

failure_demo.py:175: ValueError
_____________________________ TestRaises.test_tupleerror _____________________________
________________________ TestRaises.test_tupleerror ________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -433,7 +433,7 @@ get on the terminal - we are working on that):
E ValueError: not enough values to unpack (expected 2, got 1)

failure_demo.py:178: ValueError
___________ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ___________
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -444,9 +444,9 @@ get on the terminal - we are working on that):
E TypeError: 'int' object is not iterable

failure_demo.py:183: TypeError
-------------------------------- Captured stdout call --------------------------------
--------------------------- Captured stdout call ---------------------------
items is [1, 2, 3]
_____________________________ TestRaises.test_some_error _____________________________
________________________ TestRaises.test_some_error ________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -455,7 +455,7 @@ get on the terminal - we are working on that):
E NameError: name 'namenotexi' is not defined

failure_demo.py:186: NameError
_________________________ test_dynamic_compile_shows_nicely __________________________
____________________ test_dynamic_compile_shows_nicely _____________________

def test_dynamic_compile_shows_nicely():
import imp
@@ -470,14 +470,14 @@ get on the terminal - we are working on that):
> module.foo()

failure_demo.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

def foo():
> assert 1 == 0
E AssertionError

<0-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:201>:2: AssertionError
_________________________ TestMoreErrors.test_complex_error __________________________
____________________ TestMoreErrors.test_complex_error _____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

@@ -491,10 +491,10 @@ get on the terminal - we are working on that):
> somefunc(f(), g())

failure_demo.py:215:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:13: in somefunc
otherfunc(x, y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 44, b = 43

@@ -503,7 +503,7 @@ get on the terminal - we are working on that):
E assert 44 == 43

failure_demo.py:9: AssertionError
________________________ TestMoreErrors.test_z1_unpack_error _________________________
___________________ TestMoreErrors.test_z1_unpack_error ____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

@@ -513,7 +513,7 @@ get on the terminal - we are working on that):
E ValueError: not enough values to unpack (expected 2, got 0)

failure_demo.py:219: ValueError
_________________________ TestMoreErrors.test_z2_type_error __________________________
____________________ TestMoreErrors.test_z2_type_error _____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

@@ -523,7 +523,7 @@ get on the terminal - we are working on that):
E TypeError: 'int' object is not iterable

failure_demo.py:223: TypeError
___________________________ TestMoreErrors.test_startswith ___________________________
|
||||
______________________ TestMoreErrors.test_startswith ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
||||
|
@ -536,7 +536,7 @@ get on the terminal - we are working on that):
|
|||
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
|
||||
|
||||
failure_demo.py:228: AssertionError
|
||||
_______________________ TestMoreErrors.test_startswith_nested ________________________
|
||||
__________________ TestMoreErrors.test_startswith_nested ___________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
||||
|
@ -555,7 +555,7 @@ get on the terminal - we are working on that):
|
|||
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
|
||||
|
||||
failure_demo.py:237: AssertionError
|
||||
__________________________ TestMoreErrors.test_global_func ___________________________
|
||||
_____________________ TestMoreErrors.test_global_func ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
||||
|
@ -566,7 +566,7 @@ get on the terminal - we are working on that):
|
|||
E + where 43 = globf(42)
|
||||
|
||||
failure_demo.py:240: AssertionError
|
||||
____________________________ TestMoreErrors.test_instance ____________________________
|
||||
_______________________ TestMoreErrors.test_instance _______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
||||
|
@ -577,7 +577,7 @@ get on the terminal - we are working on that):
|
|||
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
|
||||
|
||||
failure_demo.py:244: AssertionError
|
||||
____________________________ TestMoreErrors.test_compare _____________________________
|
||||
_______________________ TestMoreErrors.test_compare ________________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
||||
|
@ -587,7 +587,7 @@ get on the terminal - we are working on that):
|
|||
E + where 11 = globf(10)
|
||||
|
||||
failure_demo.py:247: AssertionError
|
||||
__________________________ TestMoreErrors.test_try_finally ___________________________
|
||||
_____________________ TestMoreErrors.test_try_finally ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
||||
|
@ -598,7 +598,7 @@ get on the terminal - we are working on that):
|
|||
E assert 1 == 0
|
||||
|
||||
failure_demo.py:252: AssertionError
|
||||
________________________ TestCustomAssertMsg.test_single_line ________________________
|
||||
___________________ TestCustomAssertMsg.test_single_line ___________________
|
||||
|
||||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||
|
||||
|
@ -613,7 +613,7 @@ get on the terminal - we are working on that):
|
|||
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
|
||||
|
||||
failure_demo.py:263: AssertionError
|
||||
_________________________ TestCustomAssertMsg.test_multiline _________________________
|
||||
____________________ TestCustomAssertMsg.test_multiline ____________________
|
||||
|
||||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||
|
||||
|
@ -632,7 +632,7 @@ get on the terminal - we are working on that):
|
|||
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
|
||||
|
||||
failure_demo.py:270: AssertionError
|
||||
________________________ TestCustomAssertMsg.test_custom_repr ________________________
|
||||
___________________ TestCustomAssertMsg.test_custom_repr ___________________
|
||||
|
||||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||
|
||||
|
@ -654,4 +654,4 @@ get on the terminal - we are working on that):
|
|||
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
|
||||
|
||||
failure_demo.py:283: AssertionError
|
||||
============================= 44 failed in 0.12 seconds ==============================
|
||||
======================== 44 failed in 0.12 seconds =========================
@@ -48,9 +48,9 @@ Let's run this without supplying our new option:
.. code-block:: pytest

$ pytest -q test_sample.py
F [100%]
====================================== FAILURES ======================================
____________________________________ test_answer _____________________________________
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

@@ -63,7 +63,7 @@ Let's run this without supplying our new option:
E assert 0

test_sample.py:6: AssertionError
-------------------------------- Captured stdout call --------------------------------
--------------------------- Captured stdout call ---------------------------
first
1 failed in 0.12 seconds

@@ -72,9 +72,9 @@ And now with supplying a command line option:
.. code-block:: pytest

$ pytest -q --cmdopt=type2
F [100%]
====================================== FAILURES ======================================
____________________________________ test_answer _____________________________________
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type2'

@@ -87,7 +87,7 @@ And now with supplying a command line option:
E assert 0

test_sample.py:6: AssertionError
-------------------------------- Captured stdout call --------------------------------
--------------------------- Captured stdout call ---------------------------
second
1 failed in 0.12 seconds

@@ -126,14 +126,14 @@ directory with the above conftest.py:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 0 items

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

.. _`excontrolskip`:

@@ -188,34 +188,34 @@ and when running it will see a skipped "slow" test:
.. code-block:: pytest

$ pytest -rs # "-rs" means report details on the little 's'
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 2 items

test_module.py .s [100%]
============================== short test summary info ===============================
test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] test_module.py:8: need --runslow option to run

======================== 1 passed, 1 skipped in 0.12 seconds =========================
=================== 1 passed, 1 skipped in 0.12 seconds ====================

Or run it including the ``slow`` marked test:

.. code-block:: pytest

$ pytest --runslow
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 2 items

test_module.py .. [100%]
test_module.py .. [100%]

============================== 2 passed in 0.12 seconds ==============================
========================= 2 passed in 0.12 seconds =========================

Writing well integrated assertion helpers
--------------------------------------------------

@@ -251,9 +251,9 @@ Let's run our little function:
.. code-block:: pytest

$ pytest -q test_checkconfig.py
F [100%]
====================================== FAILURES ======================================
___________________________________ test_something ___________________________________
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________

def test_something():
> checkconfig(42)
@@ -350,7 +350,7 @@ which will add the string to the test header accordingly:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
project deps: mylib-1.1
@@ -358,7 +358,7 @@ which will add the string to the test header accordingly:
plugins: hypothesis-3.x.y
collected 0 items

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe

@@ -380,7 +380,7 @@ which will add info only when run with "--v":
.. code-block:: pytest

$ pytest -v
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -390,21 +390,21 @@ which will add info only when run with "--v":
plugins: hypothesis-3.x.y
collecting ... collected 0 items

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

and nothing when run plainly:

.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 0 items

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

profiling test duration
--------------------------
@@ -438,20 +438,20 @@ Now we can profile which test functions execute the slowest:
.. code-block:: pytest

$ pytest --durations=3
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 3 items

test_some_are_slow.py ... [100%]
test_some_are_slow.py ... [100%]

============================== slowest 3 test durations ==============================
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
============================== 3 passed in 0.12 seconds ==============================
========================= 3 passed in 0.12 seconds =========================

incremental testing - test steps
---------------------------------------------------
@@ -514,17 +514,17 @@ If we run this:
.. code-block:: pytest

$ pytest -rx
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 4 items

test_step.py .Fx. [100%]
test_step.py .Fx. [100%]

====================================== FAILURES ======================================
_________________________ TestUserHandling.test_modification _________________________
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

@@ -533,10 +533,10 @@ If we run this:
E assert 0

test_step.py:11: AssertionError
============================== short test summary info ===============================
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion
reason: previous test failed (test_modification)
=================== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ====================
============== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ===============

We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
@@ -599,20 +599,20 @@ We can run this:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 7 items

test_step.py .Fx. [ 57%]
a/test_db.py F [ 71%]
a/test_db2.py F [ 85%]
b/test_error.py E [100%]
test_step.py .Fx. [ 57%]
a/test_db.py F [ 71%]
a/test_db2.py F [ 85%]
b/test_error.py E [100%]

======================================= ERRORS =======================================
____________________________ ERROR at setup of test_root _____________________________
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out
E fixture 'db' not found
@@ -620,8 +620,8 @@ We can run this:
> use 'pytest --fixtures [testpath]' for help on them.

$REGENDOC_TMPDIR/b/test_error.py:1
====================================== FAILURES ======================================
_________________________ TestUserHandling.test_modification _________________________
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

@@ -630,7 +630,7 @@ We can run this:
E assert 0

test_step.py:11: AssertionError
______________________________________ test_a1 _______________________________________
_________________________________ test_a1 __________________________________

db = <conftest.DB object at 0xdeadbeef>

@@ -640,7 +640,7 @@ We can run this:
E assert 0

a/test_db.py:2: AssertionError
______________________________________ test_a2 _______________________________________
_________________________________ test_a2 __________________________________

db = <conftest.DB object at 0xdeadbeef>

@@ -650,7 +650,7 @@ We can run this:
E assert 0

a/test_db2.py:2: AssertionError
=============== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ===============
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========

The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -714,17 +714,17 @@ and run them:
.. code-block:: pytest

$ pytest test_module.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 2 items

test_module.py FF [100%]
test_module.py FF [100%]

====================================== FAILURES ======================================
_____________________________________ test_fail1 _____________________________________
================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmpdir = local('PYTEST_TMPDIR/test_fail10')

@@ -733,14 +733,14 @@ and run them:
E assert 0

test_module.py:2: AssertionError
_____________________________________ test_fail2 _____________________________________
________________________________ test_fail2 ________________________________

def test_fail2():
> assert 0
E assert 0

test_module.py:6: AssertionError
============================== 2 failed in 0.12 seconds ==============================
========================= 2 failed in 0.12 seconds =========================

you will have a "failures" file which contains the failing test ids::

@@ -817,7 +817,7 @@ and run it:
.. code-block:: pytest

$ pytest -s test_module.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -828,8 +828,8 @@ and run it:
Fexecuting test failed test_module.py::test_call_fails
F

======================================= ERRORS =======================================
_________________________ ERROR at setup of test_setup_fails _________________________
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________

@pytest.fixture
def other():
@@ -837,8 +837,8 @@ and run it:
E assert 0

test_module.py:7: AssertionError
====================================== FAILURES ======================================
__________________________________ test_call_fails ___________________________________
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________

something = None

@@ -847,14 +847,14 @@ and run it:
E assert 0

test_module.py:15: AssertionError
_____________________________________ test_fail2 _____________________________________
________________________________ test_fail2 ________________________________

def test_fail2():
> assert 0
E assert 0

test_module.py:19: AssertionError
========================= 2 failed, 1 error in 0.12 seconds ==========================
==================== 2 failed, 1 error in 0.12 seconds =====================

You'll see that the fixture finalizers could use the precise reporting
information.
@@ -71,17 +71,17 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
.. code-block:: pytest

$ pytest test_smtpsimple.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 1 item

test_smtpsimple.py F [100%]
test_smtpsimple.py F [100%]

====================================== FAILURES ======================================
_____________________________________ test_ehlo ______________________________________
================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -92,7 +92,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
E assert 0

test_smtpsimple.py:11: AssertionError
============================== 1 failed in 0.12 seconds ==============================
========================= 1 failed in 0.12 seconds =========================

In the failure traceback we see that the test function was called with a
``smtp_connection`` argument, the ``smtplib.SMTP()`` instance created by the fixture
@@ -213,17 +213,17 @@ inspect what is going on and can now run the tests:
.. code-block:: pytest

$ pytest test_module.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 2 items

test_module.py FF [100%]
test_module.py FF [100%]

====================================== FAILURES ======================================
_____________________________________ test_ehlo ______________________________________
================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -235,7 +235,7 @@ inspect what is going on and can now run the tests:
E assert 0

test_module.py:6: AssertionError
_____________________________________ test_noop ______________________________________
________________________________ test_noop _________________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -246,7 +246,7 @@ inspect what is going on and can now run the tests:
E assert 0

test_module.py:11: AssertionError
============================== 2 failed in 0.12 seconds ==============================
========================= 2 failed in 0.12 seconds =========================

You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp_connection`` object was passed into the
@@ -495,14 +495,14 @@ Running it:
.. code-block:: pytest

$ pytest -qq --tb=short test_anothersmtp.py
F [100%]
====================================== FAILURES ======================================
___________________________________ test_showhelo ____________________________________
F [100%]
================================= FAILURES =================================
______________________________ test_showhelo _______________________________
test_anothersmtp.py:5: in test_showhelo
assert 0, smtp_connection.helo()
E AssertionError: (250, b'mail.python.org')
E assert 0
------------------------------ Captured stdout teardown ------------------------------
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org)

voila! The ``smtp_connection`` fixture function picked up our mail server name
@@ -599,9 +599,9 @@ So let's just do another run:
.. code-block:: pytest

$ pytest -q test_module.py
FFFF [100%]
====================================== FAILURES ======================================
_____________________________ test_ehlo[smtp.gmail.com] ______________________________
FFFF [100%]
================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -613,7 +613,7 @@ So let's just do another run:
E assert 0

test_module.py:6: AssertionError
_____________________________ test_noop[smtp.gmail.com] ______________________________
________________________ test_noop[smtp.gmail.com] _________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -624,7 +624,7 @@ So let's just do another run:
E assert 0

test_module.py:11: AssertionError
_____________________________ test_ehlo[mail.python.org] _____________________________
________________________ test_ehlo[mail.python.org] ________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -635,9 +635,9 @@ So let's just do another run:
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8\nCHUNKING'

test_module.py:5: AssertionError
------------------------------- Captured stdout setup --------------------------------
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
_____________________________ test_noop[mail.python.org] _____________________________
________________________ test_noop[mail.python.org] ________________________

smtp_connection = <smtplib.SMTP object at 0xdeadbeef>

@@ -648,7 +648,7 @@ So let's just do another run:
E assert 0

test_module.py:11: AssertionError
------------------------------ Captured stdout teardown ------------------------------
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
4 failed in 0.12 seconds

@@ -703,7 +703,7 @@ Running the above tests results in the following test IDs being used:
.. code-block:: pytest

$ pytest --collect-only
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -723,7 +723,7 @@ Running the above tests results in the following test IDs being used:
<Function test_ehlo[mail.python.org]>
<Function test_noop[mail.python.org]>

============================ no tests ran in 0.12 seconds ============================
======================= no tests ran in 0.12 seconds =======================

.. _`fixture-parametrize-marks`:

@@ -749,7 +749,7 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``:
.. code-block:: pytest

$ pytest test_fixture_marks.py -v
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
@@ -757,11 +757,11 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``:
plugins: hypothesis-3.x.y
collecting ... collected 3 items

test_fixture_marks.py::test_data[0] PASSED [ 33%]
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]
test_fixture_marks.py::test_data[0] PASSED [ 33%]
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]

======================== 2 passed, 1 skipped in 0.12 seconds =========================
|
||||
=================== 2 passed, 1 skipped in 0.12 seconds ====================
|
||||
|
||||
.. _`interdependent fixtures`:
|
||||
|
||||
|
@ -796,7 +796,7 @@ Here we declare an ``app`` fixture which receives the previously defined
|
|||
.. code-block:: pytest
|
||||
|
||||
$ pytest -v test_appsetup.py
|
||||
================================ test session starts =================================
|
||||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
|
||||
cachedir: .pytest_cache
|
||||
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
|
||||
|
@ -804,10 +804,10 @@ Here we declare an ``app`` fixture which receives the previously defined
|
|||
plugins: hypothesis-3.x.y
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
|
||||
test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%]
|
||||
test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
|
||||
test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%]
|
||||
|
||||
============================== 2 passed in 0.12 seconds ==============================
|
||||
========================= 2 passed in 0.12 seconds =========================
|
||||
|
||||
Due to the parametrization of ``smtp_connection``, the test will run twice with two
|
||||
different ``App`` instances and respective smtp servers. There is no
|
||||
|
@ -869,7 +869,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
|
|||
.. code-block:: pytest
|
||||
|
||||
$ pytest -v -s test_module.py
|
||||
================================ test session starts =================================
|
||||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
|
||||
cachedir: .pytest_cache
|
||||
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
|
||||
|
@ -910,7 +910,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
|
|||
TEARDOWN modarg mod2
|
||||
|
||||
|
||||
============================== 8 passed in 0.12 seconds ==============================
|
||||
========================= 8 passed in 0.12 seconds =========================
|
||||
|
||||
You can see that the parametrized module-scoped ``modarg`` resource caused an
|
||||
ordering of test execution that lead to the fewest possible "active" resources.
|
||||
|
@ -975,7 +975,7 @@ to verify our fixture is activated and the tests pass:
|
|||
.. code-block:: pytest
|
||||
|
||||
$ pytest -q
|
||||
.. [100%]
|
||||
.. [100%]
|
||||
2 passed in 0.12 seconds
|
||||
|
||||
You can specify multiple fixtures like this:
|
||||
|
@ -1076,7 +1076,7 @@ If we run it, we get two passing tests:
|
|||
.. code-block:: pytest
|
||||
|
||||
$ pytest -q
|
||||
.. [100%]
|
||||
.. [100%]
|
||||
2 passed in 0.12 seconds
|
||||
|
||||
Here is how autouse fixtures work in other scopes:
|
||||
|
|
|
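The ``test_fixture_marks.py -v`` run above shows the third parameter being skipped. A hypothetical reconstruction of that module (the file itself is not part of this diff) would wrap the parameter in ``pytest.param`` with a skip mark:

```python
import pytest


# Sketch of a fixture parametrized with marks, assumed from the test IDs
# above: pytest.param(2, marks=pytest.mark.skip) makes test_data[2] report
# as SKIPPED while the other two parameters pass.
@pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
def data_set(request):
    return request.param


def test_data(data_set):
    pass
```

Running it with ``pytest -v`` would then produce the ``2 passed, 1 skipped`` summary regenerated in the hunk above.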
@@ -47,17 +47,17 @@ That’s it. You can now execute the test function:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 1 item

test_sample.py F [100%]
test_sample.py F [100%]

====================================== FAILURES ======================================
____________________________________ test_answer _____________________________________
================================= FAILURES =================================
_______________________________ test_answer ________________________________

def test_answer():
> assert func(3) == 5
@@ -65,7 +65,7 @@ That’s it. You can now execute the test function:
E + where 4 = func(3)

test_sample.py:5: AssertionError
============================== 1 failed in 0.12 seconds ==============================
========================= 1 failed in 0.12 seconds =========================

This test returns a failure report because ``func(3)`` does not return ``5``.

@@ -98,7 +98,7 @@ Execute the test function with “quiet” reporting mode:
.. code-block:: pytest

$ pytest -q test_sysexit.py
. [100%]
. [100%]
1 passed in 0.12 seconds

Group multiple tests in a class
@@ -121,9 +121,9 @@ Once you develop multiple tests, you may want to group them into a class. pytest
.. code-block:: pytest

$ pytest -q test_class.py
.F [100%]
====================================== FAILURES ======================================
_________________________________ TestClass.test_two _________________________________
.F [100%]
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass object at 0xdeadbeef>

@@ -153,9 +153,9 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
.. code-block:: pytest

$ pytest -q test_tmpdir.py
F [100%]
====================================== FAILURES ======================================
__________________________________ test_needsfiles ___________________________________
F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')

@@ -165,7 +165,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
E assert 0

test_tmpdir.py:3: AssertionError
-------------------------------- Captured stdout call --------------------------------
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
1 failed in 0.12 seconds

@@ -27,17 +27,17 @@ To execute it:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 1 item

test_sample.py F [100%]
test_sample.py F [100%]

====================================== FAILURES ======================================
____________________________________ test_answer _____________________________________
================================= FAILURES =================================
_______________________________ test_answer ________________________________

def test_answer():
> assert inc(3) == 5
@@ -45,7 +45,7 @@ To execute it:
E + where 4 = inc(3)

test_sample.py:6: AssertionError
============================== 1 failed in 0.12 seconds ==============================
========================= 1 failed in 0.12 seconds =========================

Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
See :ref:`Getting Started <getstarted>` for more examples.

@@ -55,17 +55,17 @@ them in turn:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 3 items

test_expectation.py ..F [100%]
test_expectation.py ..F [100%]

====================================== FAILURES ======================================
_________________________________ test_eval[6*9-42] __________________________________
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________

test_input = '6*9', expected = 42

@@ -80,7 +80,7 @@ them in turn:
E + where 54 = eval('6*9')

test_expectation.py:8: AssertionError
========================= 1 failed, 2 passed in 0.12 seconds =========================
==================== 1 failed, 2 passed in 0.12 seconds ====================

As designed in this example, only one pair of input/output values fails
the simple test function. And as usual with test function arguments,
@@ -108,16 +108,16 @@ Let's run this:
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 3 items

test_expectation.py ..x [100%]
test_expectation.py ..x [100%]

======================== 2 passed, 1 xfailed in 0.12 seconds =========================
=================== 2 passed, 1 xfailed in 0.12 seconds ====================

The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test.
@@ -177,7 +177,7 @@ command line option and the parametrization of our test function::
If we now pass two stringinput values, our test will run twice::

$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py
.. [100%]
.. [100%]
2 passed in 0.12 seconds

Let's also run with a stringinput that will lead to a failing test:
@@ -185,9 +185,9 @@ Let's also run with a stringinput that will lead to a failing test:
.. code-block:: pytest

$ pytest -q --stringinput="!" test_strings.py
F [100%]
====================================== FAILURES ======================================
________________________________ test_valid_string[!] ________________________________
F [100%]
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________

stringinput = '!'

@@ -209,8 +209,8 @@ list:
.. code-block:: pytest

$ pytest -q -rs test_strings.py
s [100%]
============================== short test summary info ===============================
s [100%]
========================= short test summary info ==========================
SKIP [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
1 skipped in 0.12 seconds

@@ -328,15 +328,15 @@ Running it with the report-on-xfail option gives this output:
.. code-block:: pytest

example $ pytest -rx xfail_demo.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/example/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR/example, inifile:
plugins: hypothesis-3.x.y
collected 7 items

xfail_demo.py xxxxxxx [100%]
============================== short test summary info ===============================
xfail_demo.py xxxxxxx [100%]
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
@@ -350,7 +350,7 @@ Running it with the report-on-xfail option gives this output:
reason: reason
XFAIL xfail_demo.py::test_hello7

============================= 7 xfailed in 0.12 seconds ==============================
======================== 7 xfailed in 0.12 seconds =========================

.. _`skip/xfail with parametrize`:

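An abbreviated sketch of ``xfail_demo.py`` (the full module has seven tests and is not reproduced in this diff), inferred from the ``-rx`` summary lines above; ``run=False`` is what yields the ``reason: [NOTRUN]`` entry:

```python
import pytest

# Abbreviated, assumed reconstruction of three of the seven xfail_demo.py
# tests listed in the summary above.
xfail = pytest.mark.xfail


@xfail
def test_hello():
    assert 0


@xfail(run=False)
def test_hello2():
    # reported as "reason: [NOTRUN]" because run=False skips execution
    assert 0


@xfail(reason="reason")
def test_hello6():
    assert 0
```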
@@ -40,17 +40,17 @@ Running this would result in a passed test except for the last
.. code-block:: pytest

$ pytest test_tmp_path.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 1 item

test_tmp_path.py F [100%]
test_tmp_path.py F [100%]

====================================== FAILURES ======================================
__________________________________ test_create_file __________________________________
================================= FAILURES =================================
_____________________________ test_create_file _____________________________

tmp_path = PosixPath('PYTEST_TMPDIR/test_create_file0')

@@ -65,7 +65,7 @@ Running this would result in a passed test except for the last
E assert 0

test_tmp_path.py:13: AssertionError
============================== 1 failed in 0.12 seconds ==============================
========================= 1 failed in 0.12 seconds =========================

The ``tmp_path_factory`` fixture
--------------------------------

@@ -104,17 +104,17 @@ Running this would result in a passed test except for the last
.. code-block:: pytest

$ pytest test_tmpdir.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 1 item

test_tmpdir.py F [100%]
test_tmpdir.py F [100%]

====================================== FAILURES ======================================
__________________________________ test_create_file __________________________________
================================= FAILURES =================================
_____________________________ test_create_file _____________________________

tmpdir = local('PYTEST_TMPDIR/test_create_file0')

@@ -127,7 +127,7 @@ Running this would result in a passed test except for the last
E assert 0

test_tmpdir.py:7: AssertionError
============================== 1 failed in 0.12 seconds ==============================
========================= 1 failed in 0.12 seconds =========================

.. _`tmpdir factory example`:

@@ -127,17 +127,17 @@ the ``self.db`` values in the traceback:
.. code-block:: pytest

$ pytest test_unittest_db.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 2 items

test_unittest_db.py FF [100%]
test_unittest_db.py FF [100%]

====================================== FAILURES ======================================
________________________________ MyTest.test_method1 _________________________________
================================= FAILURES =================================
___________________________ MyTest.test_method1 ____________________________

self = <test_unittest_db.MyTest testMethod=test_method1>

@@ -148,7 +148,7 @@ the ``self.db`` values in the traceback:
E assert 0

test_unittest_db.py:9: AssertionError
________________________________ MyTest.test_method2 _________________________________
___________________________ MyTest.test_method2 ____________________________

self = <test_unittest_db.MyTest testMethod=test_method2>

@@ -158,7 +158,7 @@ the ``self.db`` values in the traceback:
E assert 0

test_unittest_db.py:12: AssertionError
============================== 2 failed in 0.12 seconds ==============================
========================= 2 failed in 0.12 seconds =========================

This default pytest traceback shows that the two test methods
share the same ``self.db`` instance which was our intention
@@ -208,7 +208,7 @@ Running this test module ...:
.. code-block:: pytest

$ pytest -q test_unittest_cleandir.py
. [100%]
. [100%]
1 passed in 0.12 seconds

... gives us one passed test because the ``initdir`` fixture function

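The shared ``self.db`` tracebacks above come from injecting a class-scoped pytest fixture into a ``unittest.TestCase``. A sketch, assumed from the class and fixture names in the output (the module body is not in this diff):

```python
import unittest

import pytest


class DummyDB:
    """Stand-in for whatever resource the documented example attaches."""


@pytest.fixture(scope="class")
def db_class(request):
    # attach the shared resource to the requesting test class;
    # class scope means both test methods see the same instance
    request.cls.db = DummyDB()


@pytest.mark.usefixtures("db_class")
class MyTest(unittest.TestCase):
    def test_method1(self):
        assert hasattr(self, "db")

    def test_method2(self):
        assert hasattr(self, "db")
```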
110
doc/en/usage.rst
@@ -191,14 +191,40 @@ Example:
.. code-block:: pytest

$ pytest -ra
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 0 items
collected 6 items

============================ no tests ran in 0.12 seconds ============================
test_example.py .FEsxX [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________

@pytest.fixture
def error_fixture():
> assert 0
E assert 0

test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________

def test_fail():
> assert 0
E assert 0

test_example.py:14: AssertionError
========================= short test summary info ==========================
SKIP [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
XFAIL test_example.py::test_xfail
reason: xfailing this test
XPASS test_example.py::test_xpass always xfail
ERROR test_example.py::test_error
FAIL test_example.py::test_fail
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds

The ``-r`` option accepts a number of characters after it, with ``a`` used above meaning "all except passes".

@@ -218,14 +244,36 @@ More than one character can be used, so for example to only see failed and skipp
.. code-block:: pytest

$ pytest -rfs
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 0 items
collected 6 items

============================ no tests ran in 0.12 seconds ============================
test_example.py .FEsxX [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________

@pytest.fixture
def error_fixture():
> assert 0
E assert 0

test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________

def test_fail():
> assert 0
E assert 0

test_example.py:14: AssertionError
========================= short test summary info ==========================
FAIL test_example.py::test_fail
SKIP [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds

Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
captured output:
@@ -233,14 +281,39 @@ captured output:
.. code-block:: pytest

$ pytest -rpP
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 0 items
collected 6 items

============================ no tests ran in 0.12 seconds ============================
test_example.py .FEsxX [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________

@pytest.fixture
def error_fixture():
> assert 0
E assert 0

test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________

def test_fail():
> assert 0
E assert 0

test_example.py:14: AssertionError
========================= short test summary info ==========================
PASSED test_example.py::test_ok
================================== PASSES ==================================
_________________________________ test_ok __________________________________
--------------------------- Captured stdout call ---------------------------
ok
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds

.. _pdb-option:

@@ -626,8 +699,25 @@ Running it will show that ``MyPlugin`` was added and its
hook was invoked::

$ python myinvoke.py
. [100%]*** test run reporting finishing
.FEsxX. [100%]*** test run reporting finishing

================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________

@pytest.fixture
def error_fixture():
> assert 0
E assert 0

test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________

def test_fail():
> assert 0
E assert 0

test_example.py:14: AssertionError

.. note::

@@ -23,22 +23,22 @@ Running pytest now produces this output:
.. code-block:: pytest

$ pytest test_show_warnings.py
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile:
plugins: hypothesis-3.x.y
collected 1 item

test_show_warnings.py . [100%]
test_show_warnings.py . [100%]

================================== warnings summary ==================================
============================= warnings summary =============================
test_show_warnings.py::test_one
$REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
warnings.warn(UserWarning("api v1, should use functions from v2"))

-- Docs: https://docs.pytest.org/en/latest/warnings.html
======================== 1 passed, 1 warnings in 0.12 seconds ========================
=================== 1 passed, 1 warnings in 0.12 seconds ===================

The ``-W`` flag can be passed to control which warnings will be displayed or even turn
them into errors:
@@ -46,15 +46,15 @@ them into errors:
.. code-block:: pytest

$ pytest -q test_show_warnings.py -W error::UserWarning
F [100%]
====================================== FAILURES ======================================
______________________________________ test_one ______________________________________
F [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________

def test_one():
> assert api_v1() == 1

test_show_warnings.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

def api_v1():
> warnings.warn(UserWarning("api v1, should use functions from v2"))
@@ -357,7 +357,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta

$ pytest test_pytest_warnings.py -q

================================== warnings summary ==================================
============================= warnings summary =============================
test_pytest_warnings.py:1
$REGENDOC_TMPDIR/test_pytest_warnings.py:1: PytestWarning: cannot collect test class 'Test' because it has a __init__ constructor
class Test:

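The ``api_v1``/``test_one`` pair from ``test_show_warnings.py`` can be reconstructed from the traceback above (the module is not in this diff). Under ``pytest -W error::UserWarning`` the warning is promoted to an error, so the test fails as shown:

```python
import warnings


# Sketch assumed from the traceback lines in the hunks above.
def api_v1():
    warnings.warn(UserWarning("api v1, should use functions from v2"))
    return 1


def test_one():
    assert api_v1() == 1
```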
@@ -411,22 +411,22 @@ additionally it is possible to copy examples for an example folder before runnin
.. code-block:: pytest

$ pytest
================================ test session starts =================================
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples')
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
plugins: hypothesis-3.x.y
collected 2 items

test_example.py .. [100%]
test_example.py .. [100%]

================================== warnings summary ==================================
============================= warnings summary =============================
test_example.py::test_plugin
$REGENDOC_TMPDIR/test_example.py:4: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time
testdir.copy_example("test_example.py")

-- Docs: https://docs.pytest.org/en/latest/warnings.html
======================== 2 passed, 1 warnings in 0.12 seconds ========================
=================== 2 passed, 1 warnings in 0.12 seconds ===================

For more information about the result object that ``runpytest()`` returns, and
the methods that it provides please check out the :py:class:`RunResult