From d0e9b4812f083289cf14a581ab05da06d57d7174 Mon Sep 17 00:00:00 2001 From: Bruno Oliveira Date: Sat, 5 Jan 2019 17:32:16 +0000 Subject: [PATCH] Regendocs --- doc/en/assert.rst | 30 +++--- doc/en/cache.rst | 58 ++++++----- doc/en/capture.rst | 14 +-- doc/en/doctest.rst | 8 +- doc/en/example/markers.rst | 144 +++++++++++++++++----------- doc/en/example/nonpython.rst | 32 ++++--- doc/en/example/parametrize.rst | 78 ++++++++------- doc/en/example/pythoncollection.rst | 18 ++-- doc/en/example/reportingdemo.rst | 128 ++++++++++++++----------- doc/en/example/simple.rst | 144 ++++++++++++++++------------ doc/en/fixture.rst | 88 +++++++++-------- doc/en/getting-started.rst | 30 +++--- doc/en/index.rst | 12 ++- doc/en/parametrize.rst | 32 ++++--- doc/en/skipping.rst | 10 +- doc/en/tmpdir.rst | 24 +++-- doc/en/unittest.rst | 16 ++-- doc/en/usage.rst | 20 ++-- doc/en/warnings.rst | 20 ++-- doc/en/writing_plugins.rst | 10 +- 20 files changed, 532 insertions(+), 384 deletions(-) diff --git a/doc/en/assert.rst b/doc/en/assert.rst index b13a071f6..7f422af1f 100644 --- a/doc/en/assert.rst +++ b/doc/en/assert.rst @@ -27,15 +27,17 @@ you will see the return value of the function call: .. 
code-block:: pytest $ pytest test_assert1.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_assert1.py F [100%] + test_assert1.py F [100%] - ================================= FAILURES ================================= - ______________________________ test_function _______________________________ + ====================================== FAILURES ====================================== + ___________________________________ test_function ____________________________________ def test_function(): > assert f() == 4 @@ -43,7 +45,7 @@ you will see the return value of the function call: E + where 3 = f() test_assert1.py:5: AssertionError - ========================= 1 failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== ``pytest`` has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary @@ -171,15 +173,17 @@ if you run this module: .. 
code-block:: pytest $ pytest test_assert2.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_assert2.py F [100%] + test_assert2.py F [100%] - ================================= FAILURES ================================= - ___________________________ test_set_comparison ____________________________ + ====================================== FAILURES ====================================== + ________________________________ test_set_comparison _________________________________ def test_set_comparison(): set1 = set("1308") @@ -193,7 +197,7 @@ if you run this module: E Use -v to get the full diff test_assert2.py:5: AssertionError - ========================= 1 failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== Special comparisons are done for a number of cases: @@ -243,9 +247,9 @@ the conftest file: .. code-block:: pytest $ pytest -q test_foocompare.py - F [100%] - ================================= FAILURES ================================= - _______________________________ test_compare _______________________________ + F [100%] + ====================================== FAILURES ====================================== + ____________________________________ test_compare ____________________________________ def test_compare(): f1 = Foo(1) diff --git a/doc/en/cache.rst b/doc/en/cache.rst index ba9d87a5f..1814d386d 100644 --- a/doc/en/cache.rst +++ b/doc/en/cache.rst @@ -48,9 +48,9 @@ If you run this for the first time you will see two failures: .. 
code-block:: pytest $ pytest -q - .................F.......F........................ [100%] - ================================= FAILURES ================================= - _______________________________ test_num[17] _______________________________ + .................F.......F........................ [100%] + ====================================== FAILURES ====================================== + ____________________________________ test_num[17] ____________________________________ i = 17 @@ -61,7 +61,7 @@ If you run this for the first time you will see two failures: E Failed: bad luck test_50.py:6: Failed - _______________________________ test_num[25] _______________________________ + ____________________________________ test_num[25] ____________________________________ i = 25 @@ -79,16 +79,18 @@ If you then run it with ``--lf``: .. code-block:: pytest $ pytest --lf - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 50 items / 48 deselected run-last-failure: rerun previous 2 failures - test_50.py FF [100%] + test_50.py FF [100%] - ================================= FAILURES ================================= - _______________________________ test_num[17] _______________________________ + ====================================== FAILURES ====================================== + ____________________________________ test_num[17] ____________________________________ i = 17 @@ -99,7 +101,7 @@ If you then run it with ``--lf``: E Failed: bad luck test_50.py:6: Failed - _______________________________ test_num[25] _______________________________ + ____________________________________ test_num[25] 
____________________________________ i = 25 @@ -110,7 +112,7 @@ If you then run it with ``--lf``: E Failed: bad luck test_50.py:6: Failed - ================= 2 failed, 48 deselected in 0.12 seconds ================== + ====================== 2 failed, 48 deselected in 0.12 seconds ======================= You have run only the two failing test from the last run, while 48 tests have not been run ("deselected"). @@ -122,16 +124,18 @@ of ``FF`` and dots): .. code-block:: pytest $ pytest --ff - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 50 items run-last-failure: rerun previous 2 failures first - test_50.py FF................................................ [100%] + test_50.py FF................................................ [100%] - ================================= FAILURES ================================= - _______________________________ test_num[17] _______________________________ + ====================================== FAILURES ====================================== + ____________________________________ test_num[17] ____________________________________ i = 17 @@ -142,7 +146,7 @@ of ``FF`` and dots): E Failed: bad luck test_50.py:6: Failed - _______________________________ test_num[25] _______________________________ + ____________________________________ test_num[25] ____________________________________ i = 25 @@ -153,7 +157,7 @@ of ``FF`` and dots): E Failed: bad luck test_50.py:6: Failed - =================== 2 failed, 48 passed in 0.12 seconds ==================== + ======================== 2 failed, 48 passed in 0.12 seconds ========================= .. 
_`config.cache`: @@ -205,9 +209,9 @@ If you run this command for the first time, you can see the print statement: .. code-block:: pytest $ pytest -q - F [100%] - ================================= FAILURES ================================= - ______________________________ test_function _______________________________ + F [100%] + ====================================== FAILURES ====================================== + ___________________________________ test_function ____________________________________ mydata = 42 @@ -216,7 +220,7 @@ If you run this command for the first time, you can see the print statement: E assert 42 == 23 test_caching.py:17: AssertionError - -------------------------- Captured stdout setup --------------------------- + ------------------------------- Captured stdout setup -------------------------------- running expensive computation... 1 failed in 0.12 seconds @@ -226,9 +230,9 @@ the cache and nothing will be printed: .. code-block:: pytest $ pytest -q - F [100%] - ================================= FAILURES ================================= - ______________________________ test_function _______________________________ + F [100%] + ====================================== FAILURES ====================================== + ___________________________________ test_function ____________________________________ mydata = 42 @@ -251,11 +255,13 @@ You can always peek at the content of the cache using the .. 
code-block:: pytest $ pytest --cache-show - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y cachedir: $REGENDOC_TMPDIR/.pytest_cache - ------------------------------- cache values ------------------------------- + ------------------------------------ cache values ------------------------------------ cache/lastfailed contains: {'test_caching.py::test_function': True} cache/nodeids contains: @@ -265,7 +271,7 @@ You can always peek at the content of the cache using the example/value contains: 42 - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ Clearing Cache content ------------------------------- diff --git a/doc/en/capture.rst b/doc/en/capture.rst index 488b2b874..15ad75910 100644 --- a/doc/en/capture.rst +++ b/doc/en/capture.rst @@ -66,24 +66,26 @@ of the failing function and hide the other one: .. 
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_module.py .F [100%] + test_module.py .F [100%] - ================================= FAILURES ================================= - ________________________________ test_func2 ________________________________ + ====================================== FAILURES ====================================== + _____________________________________ test_func2 _____________________________________ def test_func2(): > assert False E assert False test_module.py:9: AssertionError - -------------------------- Captured stdout setup --------------------------- + ------------------------------- Captured stdout setup -------------------------------- setting up - ==================== 1 failed, 1 passed in 0.12 seconds ==================== + ========================= 1 failed, 1 passed in 0.12 seconds ========================= Accessing captured output from a test function --------------------------------------------------- diff --git a/doc/en/doctest.rst b/doc/en/doctest.rst index 125ed3aa7..c861ede8a 100644 --- a/doc/en/doctest.rst +++ b/doc/en/doctest.rst @@ -63,14 +63,16 @@ then you can just invoke ``pytest`` without command line options: .. 
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + plugins: hypothesis-3.x.y collected 1 item - mymodule.py . [100%] + mymodule.py . [100%] - ========================= 1 passed in 0.12 seconds ========================= + ============================== 1 passed in 0.12 seconds ============================== It is possible to use fixtures using the ``getfixture`` helper:: diff --git a/doc/en/example/markers.rst b/doc/en/example/markers.rst index 9d325c30e..b27a4fcb2 100644 --- a/doc/en/example/markers.rst +++ b/doc/en/example/markers.rst @@ -32,32 +32,36 @@ You can then restrict a test run to only run tests marked with ``webtest``: .. code-block:: pytest $ pytest -v -m webtest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 4 items / 3 deselected - test_server.py::test_send_http PASSED [100%] + test_server.py::test_send_http PASSED [100%] - ================== 1 passed, 3 deselected in 0.12 seconds ================== + ======================= 1 passed, 3 deselected in 0.12 seconds ======================= Or the inverse, running all tests except the webtest ones: .. 
code-block:: pytest $ pytest -v -m "not webtest" - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 4 items / 1 deselected - test_server.py::test_something_quick PASSED [ 33%] - test_server.py::test_another PASSED [ 66%] - test_server.py::TestClass::test_method PASSED [100%] + test_server.py::test_something_quick PASSED [ 33%] + test_server.py::test_another PASSED [ 66%] + test_server.py::TestClass::test_method PASSED [100%] - ================== 3 passed, 1 deselected in 0.12 seconds ================== + ======================= 3 passed, 1 deselected in 0.12 seconds ======================= Selecting tests based on their node ID -------------------------------------- @@ -69,46 +73,52 @@ tests based on their module, class, method, or function name: .. code-block:: pytest $ pytest -v test_server.py::TestClass::test_method - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... 
collected 1 item - test_server.py::TestClass::test_method PASSED [100%] + test_server.py::TestClass::test_method PASSED [100%] - ========================= 1 passed in 0.12 seconds ========================= + ============================== 1 passed in 0.12 seconds ============================== You can also select on the class: .. code-block:: pytest $ pytest -v test_server.py::TestClass - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 1 item - test_server.py::TestClass::test_method PASSED [100%] + test_server.py::TestClass::test_method PASSED [100%] - ========================= 1 passed in 0.12 seconds ========================= + ============================== 1 passed in 0.12 seconds ============================== Or select multiple nodes: .. code-block:: pytest $ pytest -v test_server.py::TestClass test_server.py::test_send_http - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... 
collected 2 items - test_server.py::TestClass::test_method PASSED [ 50%] - test_server.py::test_send_http PASSED [100%] + test_server.py::TestClass::test_method PASSED [ 50%] + test_server.py::test_send_http PASSED [100%] - ========================= 2 passed in 0.12 seconds ========================= + ============================== 2 passed in 0.12 seconds ============================== .. _node-id: @@ -139,48 +149,54 @@ select tests based on their names: .. code-block:: pytest $ pytest -v -k http # running with the above defined example module - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 4 items / 3 deselected - test_server.py::test_send_http PASSED [100%] + test_server.py::test_send_http PASSED [100%] - ================== 1 passed, 3 deselected in 0.12 seconds ================== + ======================= 1 passed, 3 deselected in 0.12 seconds ======================= And you can also run all tests except the ones that match the keyword: .. code-block:: pytest $ pytest -k "not send_http" -v - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... 
collected 4 items / 1 deselected - test_server.py::test_something_quick PASSED [ 33%] - test_server.py::test_another PASSED [ 66%] - test_server.py::TestClass::test_method PASSED [100%] + test_server.py::test_something_quick PASSED [ 33%] + test_server.py::test_another PASSED [ 66%] + test_server.py::TestClass::test_method PASSED [100%] - ================== 3 passed, 1 deselected in 0.12 seconds ================== + ======================= 3 passed, 1 deselected in 0.12 seconds ======================= Or to select "http" and "quick" tests: .. code-block:: pytest $ pytest -k "http or quick" -v - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 4 items / 2 deselected - test_server.py::test_send_http PASSED [ 50%] - test_server.py::test_something_quick PASSED [100%] + test_server.py::test_send_http PASSED [ 50%] + test_server.py::test_something_quick PASSED [100%] - ================== 2 passed, 2 deselected in 0.12 seconds ================== + ======================= 2 passed, 2 deselected in 0.12 seconds ======================= .. note:: @@ -216,6 +232,8 @@ You can ask which markers exist for your test suite - the list includes our just $ pytest --markers @pytest.mark.webtest: mark a test as a webtest. + @pytest.mark.hypothesis: Tests which use hypothesis. + @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings @pytest.mark.skip(reason=None): skip the given test function with an optional reason. 
Example: skip(reason="no way of currently testing this") skips the test. @@ -363,34 +381,40 @@ the test needs: .. code-block:: pytest $ pytest -E stage2 - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_someenv.py s [100%] + test_someenv.py s [100%] - ======================== 1 skipped in 0.12 seconds ========================= + ============================= 1 skipped in 0.12 seconds ============================== and here is one that specifies exactly the environment needed: .. code-block:: pytest $ pytest -E stage1 - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_someenv.py . [100%] + test_someenv.py . [100%] - ========================= 1 passed in 0.12 seconds ========================= + ============================== 1 passed in 0.12 seconds ============================== The ``--markers`` option always gives you a list of available markers:: $ pytest --markers @pytest.mark.env(name): mark test to run only on named environment + @pytest.mark.hypothesis: Tests which use hypothesis. + @pytest.mark.filterwarnings(warning): add a warning filter to the given test. 
see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings @pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test. @@ -544,30 +568,34 @@ then you will see two tests skipped and two executed tests as expected: .. code-block:: pytest $ pytest -rs # this option reports skip reasons - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items - test_plat.py s.s. [100%] - ========================= short test summary info ========================== + test_plat.py s.s. [100%] + ============================== short test summary info =============================== SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux - =================== 2 passed, 2 skipped in 0.12 seconds ==================== + ======================== 2 passed, 2 skipped in 0.12 seconds ========================= Note that if you specify a platform via the marker-command line option like this: .. code-block:: pytest $ pytest -m linux - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items / 3 deselected - test_plat.py . [100%] + test_plat.py . 
[100%] - ================== 1 passed, 3 deselected in 0.12 seconds ================== + ======================= 1 passed, 3 deselected in 0.12 seconds ======================= then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests. @@ -613,47 +641,51 @@ We can now use the ``-m option`` to select one set: .. code-block:: pytest $ pytest -m interface --tb=short - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items / 2 deselected - test_module.py FF [100%] + test_module.py FF [100%] - ================================= FAILURES ================================= - __________________________ test_interface_simple ___________________________ + ====================================== FAILURES ====================================== + _______________________________ test_interface_simple ________________________________ test_module.py:3: in test_interface_simple assert 0 E assert 0 - __________________________ test_interface_complex __________________________ + _______________________________ test_interface_complex _______________________________ test_module.py:6: in test_interface_complex assert 0 E assert 0 - ================== 2 failed, 2 deselected in 0.12 seconds ================== + ======================= 2 failed, 2 deselected in 0.12 seconds ======================= or to select both "event" and "interface" tests: .. 
code-block:: pytest $ pytest -m "interface or event" --tb=short - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items / 1 deselected - test_module.py FFF [100%] + test_module.py FFF [100%] - ================================= FAILURES ================================= - __________________________ test_interface_simple ___________________________ + ====================================== FAILURES ====================================== + _______________________________ test_interface_simple ________________________________ test_module.py:3: in test_interface_simple assert 0 E assert 0 - __________________________ test_interface_complex __________________________ + _______________________________ test_interface_complex _______________________________ test_module.py:6: in test_interface_complex assert 0 E assert 0 - ____________________________ test_event_simple _____________________________ + _________________________________ test_event_simple __________________________________ test_module.py:9: in test_event_simple assert 0 E assert 0 - ================== 3 failed, 1 deselected in 0.12 seconds ================== + ======================= 3 failed, 1 deselected in 0.12 seconds ======================= diff --git a/doc/en/example/nonpython.rst b/doc/en/example/nonpython.rst index eba8279f3..1581b8672 100644 --- a/doc/en/example/nonpython.rst +++ b/doc/en/example/nonpython.rst @@ -28,19 +28,21 @@ now execute the test specification: .. 
code-block:: pytest nonpython $ pytest test_simple.yml - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/nonpython/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR/nonpython, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_simple.yml F. [100%] + test_simple.yml F. [100%] - ================================= FAILURES ================================= - ______________________________ usecase: hello ______________________________ + ====================================== FAILURES ====================================== + ___________________________________ usecase: hello ___________________________________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. - ==================== 1 failed, 1 passed in 0.12 seconds ==================== + ========================= 1 failed, 1 passed in 0.12 seconds ========================= .. regendoc:wipe @@ -62,21 +64,23 @@ consulted when reporting in ``verbose`` mode: .. code-block:: pytest nonpython $ pytest -v - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/nonpython/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR/nonpython, inifile: + plugins: hypothesis-3.x.y collecting ... 
collected 2 items - test_simple.yml::hello FAILED [ 50%] - test_simple.yml::ok PASSED [100%] + test_simple.yml::hello FAILED [ 50%] + test_simple.yml::ok PASSED [100%] - ================================= FAILURES ================================= - ______________________________ usecase: hello ______________________________ + ====================================== FAILURES ====================================== + ___________________________________ usecase: hello ___________________________________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. - ==================== 1 failed, 1 passed in 0.12 seconds ==================== + ========================= 1 failed, 1 passed in 0.12 seconds ========================= .. regendoc:wipe @@ -86,13 +90,15 @@ interesting to just look at the collection tree: .. code-block:: pytest nonpython $ pytest --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/nonpython/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR/nonpython, inifile: + plugins: hypothesis-3.x.y collected 2 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ diff --git a/doc/en/example/parametrize.rst b/doc/en/example/parametrize.rst index 92756e492..76cb68867 100644 --- a/doc/en/example/parametrize.rst +++ b/doc/en/example/parametrize.rst @@ -47,7 +47,7 @@ This means that we only run 2 tests if we do not pass ``--all``: .. code-block:: pytest $ pytest -q test_compute.py - .. [100%] + .. [100%] 2 passed in 0.12 seconds We run only two computations, so we see two dots. @@ -56,9 +56,9 @@ let's run the full monty: .. 
code-block:: pytest $ pytest -q --all - ....F [100%] - ================================= FAILURES ================================= - _____________________________ test_compute[4] ______________________________ + ....F [100%] + ====================================== FAILURES ====================================== + __________________________________ test_compute[4] ___________________________________ param1 = 4 @@ -143,9 +143,11 @@ objects, they are still using the default pytest representation: .. code-block:: pytest $ pytest test_time.py --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 8 items @@ -157,7 +159,7 @@ objects, they are still using the default pytest representation: - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs together with the actual data, instead of listing them separately. @@ -201,23 +203,27 @@ this is a fully self-contained example which you can run with: .. code-block:: pytest $ pytest test_scenarios.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items - test_scenarios.py .... 
[100%] + test_scenarios.py .... [100%] - ========================= 4 passed in 0.12 seconds ========================= + ============================== 4 passed in 0.12 seconds ============================== If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function: .. code-block:: pytest $ pytest --collect-only test_scenarios.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items @@ -226,7 +232,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ Note that we told ``metafunc.parametrize()`` that your scenario values should be considered class-scoped. With pytest-2.3 this leads to a @@ -281,24 +287,26 @@ Let's first see how it looks like at collection time: .. code-block:: pytest $ pytest test_backends.py --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ And then when we run the test: .. 
code-block:: pytest $ pytest -q test_backends.py - .F [100%] - ================================= FAILURES ================================= - _________________________ test_db_initialized[d2] __________________________ + .F [100%] + ====================================== FAILURES ====================================== + ______________________________ test_db_initialized[d2] _______________________________ db = @@ -346,14 +354,16 @@ The result of this test will be successful: .. code-block:: pytest $ pytest test_indirect_list.py --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ .. regendoc:wipe @@ -397,9 +407,9 @@ argument sets to use for each test function. Let's run it: .. code-block:: pytest $ pytest -q - F.. [100%] - ================================= FAILURES ================================= - ________________________ TestClass.test_equals[1-2] ________________________ + F.. [100%] + ====================================== FAILURES ====================================== + _____________________________ TestClass.test_equals[1-2] _____________________________ self = , a = 1, b = 2 @@ -429,8 +439,8 @@ Running it results in some skips if we don't have all the python interpreters in .. code-block:: pytest . $ pytest -rs -q multipython.py - ...sss...sssssssss...sss... [100%] - ========================= short test summary info ========================== + ...sss...sssssssss...sss... 
[100%] + ============================== short test summary info =============================== SKIP [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.4' not found 12 passed, 15 skipped in 0.12 seconds @@ -480,16 +490,18 @@ If you run this with reporting for skips enabled: .. code-block:: pytest $ pytest -rs test_module.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_module.py .s [100%] - ========================= short test summary info ========================== + test_module.py .s [100%] + ============================== short test summary info =============================== SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2' - =================== 1 passed, 1 skipped in 0.12 seconds ==================== + ======================== 1 passed, 1 skipped in 0.12 seconds ========================= You'll see that we don't have an ``opt2`` module and thus the second test run of our ``test_func1`` was skipped. A few notes: @@ -537,17 +549,19 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker: .. code-block:: pytest $ pytest -v -m basic - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... 
collected 17 items / 14 deselected - test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%] - test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%] - test_pytest_param_example.py::test_eval[basic_6*9] xfail [100%] + test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%] + test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%] + test_pytest_param_example.py::test_eval[basic_6*9] xfail [100%] - ============ 2 passed, 14 deselected, 1 xfailed in 0.12 seconds ============ + ================= 2 passed, 14 deselected, 1 xfailed in 0.12 seconds ================= As the result: diff --git a/doc/en/example/pythoncollection.rst b/doc/en/example/pythoncollection.rst index 394924e2d..bc7e0c0d2 100644 --- a/doc/en/example/pythoncollection.rst +++ b/doc/en/example/pythoncollection.rst @@ -130,16 +130,18 @@ The test collection would look like this: .. code-block:: pytest $ pytest --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + plugins: hypothesis-3.x.y collected 2 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ You can check for multiple glob patterns by adding a space between the patterns:: @@ -185,9 +187,11 @@ You can always peek at the collection tree without running tests like this: .. code-block:: pytest . 
$ pytest --collect-only pythoncollection.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/CWD/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + plugins: hypothesis-3.x.y collected 3 items @@ -195,7 +199,7 @@ You can always peek at the collection tree without running tests like this: - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ .. _customizing-test-collection: @@ -257,9 +261,11 @@ file will be left out: .. code-block:: pytest $ pytest --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ diff --git a/doc/en/example/reportingdemo.rst b/doc/en/example/reportingdemo.rst index 2f8c25f02..d99a06725 100644 --- a/doc/en/example/reportingdemo.rst +++ b/doc/en/example/reportingdemo.rst @@ -12,15 +12,17 @@ get on the terminal - we are working on that): .. 
code-block:: pytest assertion $ pytest failure_demo.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/assertion/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR/assertion, inifile: + plugins: hypothesis-3.x.y collected 44 items - failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%] + failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%] - ================================= FAILURES ================================= - ___________________________ test_generative[3-6] ___________________________ + ====================================== FAILURES ====================================== + ________________________________ test_generative[3-6] ________________________________ param1 = 3, param2 = 6 @@ -30,7 +32,7 @@ get on the terminal - we are working on that): E assert (3 * 2) < 6 failure_demo.py:22: AssertionError - _________________________ TestFailing.test_simple __________________________ + ______________________________ TestFailing.test_simple _______________________________ self = @@ -47,7 +49,7 @@ get on the terminal - we are working on that): E + and 43 = .g at 0xdeadbeef>() failure_demo.py:33: AssertionError - ____________________ TestFailing.test_simple_multiline _____________________ + _________________________ TestFailing.test_simple_multiline __________________________ self = @@ -55,7 +57,7 @@ get on the terminal - we are working on that): > otherfunc_multi(42, 6 * 9) failure_demo.py:36: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 42, b = 54 @@ -64,7 +66,7 @@ get on the terminal - we are working on that): E assert 42 == 54 
failure_demo.py:17: AssertionError - ___________________________ TestFailing.test_not ___________________________ + ________________________________ TestFailing.test_not ________________________________ self = @@ -77,7 +79,7 @@ get on the terminal - we are working on that): E + where 42 = .f at 0xdeadbeef>() failure_demo.py:42: AssertionError - _________________ TestSpecialisedExplanations.test_eq_text _________________ + ______________________ TestSpecialisedExplanations.test_eq_text ______________________ self = @@ -88,7 +90,7 @@ get on the terminal - we are working on that): E + eggs failure_demo.py:47: AssertionError - _____________ TestSpecialisedExplanations.test_eq_similar_text _____________ + __________________ TestSpecialisedExplanations.test_eq_similar_text __________________ self = @@ -101,7 +103,7 @@ get on the terminal - we are working on that): E ? ^ failure_demo.py:50: AssertionError - ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________ + _________________ TestSpecialisedExplanations.test_eq_multiline_text _________________ self = @@ -114,7 +116,7 @@ get on the terminal - we are working on that): E bar failure_demo.py:53: AssertionError - ______________ TestSpecialisedExplanations.test_eq_long_text _______________ + ___________________ TestSpecialisedExplanations.test_eq_long_text ____________________ self = @@ -131,7 +133,7 @@ get on the terminal - we are working on that): E ? 
^ failure_demo.py:58: AssertionError - _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________ + ______________ TestSpecialisedExplanations.test_eq_long_text_multiline _______________ self = @@ -151,7 +153,7 @@ get on the terminal - we are working on that): E ...Full output truncated (7 lines hidden), use '-vv' to show failure_demo.py:63: AssertionError - _________________ TestSpecialisedExplanations.test_eq_list _________________ + ______________________ TestSpecialisedExplanations.test_eq_list ______________________ self = @@ -162,7 +164,7 @@ get on the terminal - we are working on that): E Use -v to get the full diff failure_demo.py:66: AssertionError - ______________ TestSpecialisedExplanations.test_eq_list_long _______________ + ___________________ TestSpecialisedExplanations.test_eq_list_long ____________________ self = @@ -175,7 +177,7 @@ get on the terminal - we are working on that): E Use -v to get the full diff failure_demo.py:71: AssertionError - _________________ TestSpecialisedExplanations.test_eq_dict _________________ + ______________________ TestSpecialisedExplanations.test_eq_dict ______________________ self = @@ -193,7 +195,7 @@ get on the terminal - we are working on that): E ...Full output truncated (2 lines hidden), use '-vv' to show failure_demo.py:74: AssertionError - _________________ TestSpecialisedExplanations.test_eq_set __________________ + ______________________ TestSpecialisedExplanations.test_eq_set _______________________ self = @@ -211,7 +213,7 @@ get on the terminal - we are working on that): E ...Full output truncated (2 lines hidden), use '-vv' to show failure_demo.py:77: AssertionError - _____________ TestSpecialisedExplanations.test_eq_longer_list ______________ + __________________ TestSpecialisedExplanations.test_eq_longer_list ___________________ self = @@ -222,7 +224,7 @@ get on the terminal - we are working on that): E Use -v to get the full diff failure_demo.py:80: AssertionError - _________________ 
TestSpecialisedExplanations.test_in_list _________________ + ______________________ TestSpecialisedExplanations.test_in_list ______________________ self = @@ -231,7 +233,7 @@ get on the terminal - we are working on that): E assert 1 in [0, 2, 3, 4, 5] failure_demo.py:83: AssertionError - __________ TestSpecialisedExplanations.test_not_in_text_multiline __________ + _______________ TestSpecialisedExplanations.test_not_in_text_multiline _______________ self = @@ -250,7 +252,7 @@ get on the terminal - we are working on that): E ...Full output truncated (2 lines hidden), use '-vv' to show failure_demo.py:87: AssertionError - ___________ TestSpecialisedExplanations.test_not_in_text_single ____________ + ________________ TestSpecialisedExplanations.test_not_in_text_single _________________ self = @@ -263,7 +265,7 @@ get on the terminal - we are working on that): E ? +++ failure_demo.py:91: AssertionError - _________ TestSpecialisedExplanations.test_not_in_text_single_long _________ + ______________ TestSpecialisedExplanations.test_not_in_text_single_long ______________ self = @@ -276,7 +278,7 @@ get on the terminal - we are working on that): E ? +++ failure_demo.py:95: AssertionError - ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______ + ___________ TestSpecialisedExplanations.test_not_in_text_single_long_term ____________ self = @@ -289,16 +291,28 @@ get on the terminal - we are working on that): E ? 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ failure_demo.py:99: AssertionError - ______________ TestSpecialisedExplanations.test_eq_dataclass _______________ + ___________________ TestSpecialisedExplanations.test_eq_dataclass ____________________ self = def test_eq_dataclass(self): - > from dataclasses import dataclass - E ModuleNotFoundError: No module named 'dataclasses' + from dataclasses import dataclass - failure_demo.py:102: ModuleNotFoundError - ________________ TestSpecialisedExplanations.test_eq_attrs _________________ + @dataclass + class Foo(object): + a: int + b: str + + left = Foo(1, "b") + right = Foo(1, "c") + > assert left == right + E AssertionError: assert TestSpecialis...oo(a=1, b='b') == TestSpecialise...oo(a=1, b='c') + E Omitting 1 identical items, use -vv to show + E Differing attributes: + E b: 'b' != 'c' + + failure_demo.py:111: AssertionError + _____________________ TestSpecialisedExplanations.test_eq_attrs ______________________ self = @@ -319,7 +333,7 @@ get on the terminal - we are working on that): E b: 'b' != 'c' failure_demo.py:123: AssertionError - ______________________________ test_attribute ______________________________ + ___________________________________ test_attribute ___________________________________ def test_attribute(): class Foo(object): @@ -331,7 +345,7 @@ get on the terminal - we are working on that): E + where 1 = .Foo object at 0xdeadbeef>.b failure_demo.py:131: AssertionError - _________________________ test_attribute_instance __________________________ + ______________________________ test_attribute_instance _______________________________ def test_attribute_instance(): class Foo(object): @@ -343,7 +357,7 @@ get on the terminal - we are working on that): E + where .Foo object at 0xdeadbeef> = .Foo'>() failure_demo.py:138: AssertionError - __________________________ test_attribute_failure __________________________ + _______________________________ test_attribute_failure 
_______________________________ def test_attribute_failure(): class Foo(object): @@ -356,7 +370,7 @@ get on the terminal - we are working on that): > assert i.b == 2 failure_demo.py:149: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = .Foo object at 0xdeadbeef> @@ -365,7 +379,7 @@ get on the terminal - we are working on that): E Exception: Failed to get attrib failure_demo.py:144: Exception - _________________________ test_attribute_multiple __________________________ + ______________________________ test_attribute_multiple _______________________________ def test_attribute_multiple(): class Foo(object): @@ -382,7 +396,7 @@ get on the terminal - we are working on that): E + where .Bar object at 0xdeadbeef> = .Bar'>() failure_demo.py:159: AssertionError - __________________________ TestRaises.test_raises __________________________ + _______________________________ TestRaises.test_raises _______________________________ self = @@ -392,7 +406,7 @@ get on the terminal - we are working on that): E ValueError: invalid literal for int() with base 10: 'qwe' failure_demo.py:169: ValueError - ______________________ TestRaises.test_raises_doesnt _______________________ + ___________________________ TestRaises.test_raises_doesnt ____________________________ self = @@ -401,7 +415,7 @@ get on the terminal - we are working on that): E Failed: DID NOT RAISE failure_demo.py:172: Failed - __________________________ TestRaises.test_raise ___________________________ + _______________________________ TestRaises.test_raise ________________________________ self = @@ -410,7 +424,7 @@ get on the terminal - we are working on that): E ValueError: demo error failure_demo.py:175: ValueError - ________________________ TestRaises.test_tupleerror ________________________ + _____________________________ TestRaises.test_tupleerror _____________________________ self = @@ -419,7 
+433,7 @@ get on the terminal - we are working on that): E ValueError: not enough values to unpack (expected 2, got 1) failure_demo.py:178: ValueError - ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ + ___________ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ___________ self = @@ -430,9 +444,9 @@ get on the terminal - we are working on that): E TypeError: 'int' object is not iterable failure_demo.py:183: TypeError - --------------------------- Captured stdout call --------------------------- + -------------------------------- Captured stdout call -------------------------------- items is [1, 2, 3] - ________________________ TestRaises.test_some_error ________________________ + _____________________________ TestRaises.test_some_error _____________________________ self = @@ -441,7 +455,7 @@ get on the terminal - we are working on that): E NameError: name 'namenotexi' is not defined failure_demo.py:186: NameError - ____________________ test_dynamic_compile_shows_nicely _____________________ + _________________________ test_dynamic_compile_shows_nicely __________________________ def test_dynamic_compile_shows_nicely(): import imp @@ -456,14 +470,14 @@ get on the terminal - we are working on that): > module.foo() failure_demo.py:204: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def foo(): > assert 1 == 0 E AssertionError <0-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:201>:2: AssertionError - ____________________ TestMoreErrors.test_complex_error _____________________ + _________________________ TestMoreErrors.test_complex_error __________________________ self = @@ -477,10 +491,10 @@ get on the terminal - we are working on that): > somefunc(f(), g()) failure_demo.py:215: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ failure_demo.py:13: in somefunc otherfunc(x, y) - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 44, b = 43 @@ -489,7 +503,7 @@ get on the terminal - we are working on that): E assert 44 == 43 failure_demo.py:9: AssertionError - ___________________ TestMoreErrors.test_z1_unpack_error ____________________ + ________________________ TestMoreErrors.test_z1_unpack_error _________________________ self = @@ -499,7 +513,7 @@ get on the terminal - we are working on that): E ValueError: not enough values to unpack (expected 2, got 0) failure_demo.py:219: ValueError - ____________________ TestMoreErrors.test_z2_type_error _____________________ + _________________________ TestMoreErrors.test_z2_type_error __________________________ self = @@ -509,7 +523,7 @@ get on the terminal - we are working on that): E TypeError: 'int' object is not iterable failure_demo.py:223: TypeError - ______________________ TestMoreErrors.test_startswith ______________________ + ___________________________ TestMoreErrors.test_startswith ___________________________ self = @@ -522,7 +536,7 @@ get on the terminal - we are working on that): E + where = '123'.startswith failure_demo.py:228: AssertionError - __________________ TestMoreErrors.test_startswith_nested ___________________ + _______________________ TestMoreErrors.test_startswith_nested ________________________ self = @@ -541,7 +555,7 @@ get on the terminal - we are working on that): E + and '456' = .g at 0xdeadbeef>() failure_demo.py:237: AssertionError - _____________________ TestMoreErrors.test_global_func ______________________ + __________________________ TestMoreErrors.test_global_func ___________________________ self = @@ -552,7 +566,7 @@ get on the terminal - we are working on that): E + where 43 = globf(42) failure_demo.py:240: AssertionError - _______________________ 
TestMoreErrors.test_instance _______________________ + ____________________________ TestMoreErrors.test_instance ____________________________ self = @@ -563,7 +577,7 @@ get on the terminal - we are working on that): E + where 42 = .x failure_demo.py:244: AssertionError - _______________________ TestMoreErrors.test_compare ________________________ + ____________________________ TestMoreErrors.test_compare _____________________________ self = @@ -573,7 +587,7 @@ get on the terminal - we are working on that): E + where 11 = globf(10) failure_demo.py:247: AssertionError - _____________________ TestMoreErrors.test_try_finally ______________________ + __________________________ TestMoreErrors.test_try_finally ___________________________ self = @@ -584,7 +598,7 @@ get on the terminal - we are working on that): E assert 1 == 0 failure_demo.py:252: AssertionError - ___________________ TestCustomAssertMsg.test_single_line ___________________ + ________________________ TestCustomAssertMsg.test_single_line ________________________ self = @@ -599,7 +613,7 @@ get on the terminal - we are working on that): E + where 1 = .A'>.a failure_demo.py:263: AssertionError - ____________________ TestCustomAssertMsg.test_multiline ____________________ + _________________________ TestCustomAssertMsg.test_multiline _________________________ self = @@ -618,7 +632,7 @@ get on the terminal - we are working on that): E + where 1 = .A'>.a failure_demo.py:270: AssertionError - ___________________ TestCustomAssertMsg.test_custom_repr ___________________ + ________________________ TestCustomAssertMsg.test_custom_repr ________________________ self = @@ -640,4 +654,4 @@ get on the terminal - we are working on that): E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a failure_demo.py:283: AssertionError - ======================== 44 failed in 0.12 seconds ========================= + ============================= 44 failed in 0.12 seconds ============================== diff --git a/doc/en/example/simple.rst 
b/doc/en/example/simple.rst index 76a1ddc80..26d5d6c4b 100644 --- a/doc/en/example/simple.rst +++ b/doc/en/example/simple.rst @@ -48,9 +48,9 @@ Let's run this without supplying our new option: .. code-block:: pytest $ pytest -q test_sample.py - F [100%] - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ + F [100%] + ====================================== FAILURES ====================================== + ____________________________________ test_answer _____________________________________ cmdopt = 'type1' @@ -63,7 +63,7 @@ Let's run this without supplying our new option: E assert 0 test_sample.py:6: AssertionError - --------------------------- Captured stdout call --------------------------- + -------------------------------- Captured stdout call -------------------------------- first 1 failed in 0.12 seconds @@ -72,9 +72,9 @@ And now with supplying a command line option: .. code-block:: pytest $ pytest -q --cmdopt=type2 - F [100%] - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ + F [100%] + ====================================== FAILURES ====================================== + ____________________________________ test_answer _____________________________________ cmdopt = 'type2' @@ -87,7 +87,7 @@ And now with supplying a command line option: E assert 0 test_sample.py:6: AssertionError - --------------------------- Captured stdout call --------------------------- + -------------------------------- Captured stdout call -------------------------------- second 1 failed in 0.12 seconds @@ -126,12 +126,14 @@ directory with the above conftest.py: .. 
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ .. _`excontrolskip`: @@ -186,30 +188,34 @@ and when running it will see a skipped "slow" test: .. code-block:: pytest $ pytest -rs # "-rs" means report details on the little 's' - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_module.py .s [100%] - ========================= short test summary info ========================== + test_module.py .s [100%] + ============================== short test summary info =============================== SKIP [1] test_module.py:8: need --runslow option to run - =================== 1 passed, 1 skipped in 0.12 seconds ==================== + ======================== 1 passed, 1 skipped in 0.12 seconds ========================= Or run it including the ``slow`` marked test: .. 
code-block:: pytest $ pytest --runslow - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_module.py .. [100%] + test_module.py .. [100%] - ========================= 2 passed in 0.12 seconds ========================= + ============================== 2 passed in 0.12 seconds ============================== Writing well integrated assertion helpers -------------------------------------------------- @@ -245,9 +251,9 @@ Let's run our little function: .. code-block:: pytest $ pytest -q test_checkconfig.py - F [100%] - ================================= FAILURES ================================= - ______________________________ test_something ______________________________ + F [100%] + ====================================== FAILURES ====================================== + ___________________________________ test_something ___________________________________ def test_something(): > checkconfig(42) @@ -344,13 +350,15 @@ which will add the string to the test header accordingly: .. 
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') project deps: mylib-1.1 rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ .. regendoc:wipe @@ -372,27 +380,31 @@ which will add info only when run with "-v": .. code-block:: pytest $ pytest -v - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') info1: did you know that ... did you? rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ and nothing when run plainly: .. 
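The extra ``project deps: mylib-1.1`` and ``info1: ...`` header lines in the sessions above come from ``pytest_report_header`` hooks. A hedged sketch of the two variants the surrounding text describes (the verbose-only variant is shown; the unconditional one is indicated in a comment):

```python
# conftest.py -- sketch of the report-header hooks shown above
def pytest_report_header(config):
    # Unconditional variant: always add a line to the session header, e.g.
    #     return "project deps: mylib-1.1"
    # Verbose-only variant: add lines only when -v was given.
    if config.getoption("verbose") > 0:
        return ["info1: did you know that ...", "did you?"]
```

Returning a string adds one header line; returning a list adds one line per element, matching the two-line ``info1``/``did you?`` output above.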
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ profiling test duration -------------------------- @@ -426,18 +438,20 @@ Now we can profile which test functions execute the slowest: .. code-block:: pytest $ pytest --durations=3 - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 3 items - test_some_are_slow.py ... [100%] + test_some_are_slow.py ... [100%] - ========================= slowest 3 test durations ========================= + ============================== slowest 3 test durations ============================== 0.30s call test_some_are_slow.py::test_funcslow2 0.20s call test_some_are_slow.py::test_funcslow1 0.10s call test_some_are_slow.py::test_funcfast - ========================= 3 passed in 0.12 seconds ========================= + ============================== 3 passed in 0.12 seconds ============================== incremental testing - test steps --------------------------------------------------- @@ -500,15 +514,17 @@ If we run this: .. 
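The ``--durations=3`` run above profiles a module along these lines; a sketch matching the test names and the 0.10s/0.20s/0.30s call durations reported in the output (the exact sleep times are an assumption based on that output):

```python
# test_some_are_slow.py -- sketch matching the durations output above
import time


def test_funcfast():
    time.sleep(0.1)


def test_funcslow1():
    time.sleep(0.2)


def test_funcslow2():
    time.sleep(0.3)
```

``--durations=N`` then lists the N slowest setup/call/teardown phases, which is why ``test_funcslow2`` tops the list.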
code-block:: pytest $ pytest -rx - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 4 items - test_step.py .Fx. [100%] + test_step.py .Fx. [100%] - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ + ====================================== FAILURES ====================================== + _________________________ TestUserHandling.test_modification _________________________ self = @@ -517,10 +533,10 @@ If we run this: E assert 0 test_step.py:11: AssertionError - ========================= short test summary info ========================== + ============================== short test summary info =============================== XFAIL test_step.py::TestUserHandling::test_deletion reason: previous test failed (test_modification) - ============== 1 failed, 2 passed, 1 xfailed in 0.12 seconds =============== + =================== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ==================== We'll see that ``test_deletion`` was not executed because ``test_modification`` failed. It is reported as an "expected failure". @@ -583,18 +599,20 @@ We can run this: .. 
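The xfail of ``test_deletion`` after ``test_modification`` failed is driven by an ``incremental`` marker implemented with two hooks; a sketch following the pytest docs example this diff regenerates:

```python
# conftest.py -- sketch of the "incremental" marker hooks described above
import pytest


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # remember the failed step on the containing class
            parent = item.parent
            parent._previousfailed = item


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
```

The first hook records a failing step on the test class; the second turns every later step of that class into an xfail with the reason shown in the short test summary above.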
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 7 items - test_step.py .Fx. [ 57%] - a/test_db.py F [ 71%] - a/test_db2.py F [ 85%] - b/test_error.py E [100%] + test_step.py .Fx. [ 57%] + a/test_db.py F [ 71%] + a/test_db2.py F [ 85%] + b/test_error.py E [100%] - ================================== ERRORS ================================== - _______________________ ERROR at setup of test_root ________________________ + ======================================= ERRORS ======================================= + ____________________________ ERROR at setup of test_root _____________________________ file $REGENDOC_TMPDIR/b/test_error.py, line 1 def test_root(db): # no db here, will error out E fixture 'db' not found @@ -602,8 +620,8 @@ We can run this: > use 'pytest --fixtures [testpath]' for help on them. 
$REGENDOC_TMPDIR/b/test_error.py:1 - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ + ====================================== FAILURES ====================================== + _________________________ TestUserHandling.test_modification _________________________ self = @@ -612,7 +630,7 @@ We can run this: E assert 0 test_step.py:11: AssertionError - _________________________________ test_a1 __________________________________ + ______________________________________ test_a1 _______________________________________ db = @@ -622,7 +640,7 @@ We can run this: E assert 0 a/test_db.py:2: AssertionError - _________________________________ test_a2 __________________________________ + ______________________________________ test_a2 _______________________________________ db = @@ -632,7 +650,7 @@ We can run this: E assert 0 a/test_db2.py:2: AssertionError - ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ========== + =============== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds =============== The two test modules in the ``a`` directory see the same ``db`` fixture instance while the one test in the sister-directory ``b`` doesn't see it. We could of course @@ -696,15 +714,17 @@ and run them: .. 
code-block:: pytest $ pytest test_module.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_module.py FF [100%] + test_module.py FF [100%] - ================================= FAILURES ================================= - ________________________________ test_fail1 ________________________________ + ====================================== FAILURES ====================================== + _____________________________________ test_fail1 _____________________________________ tmpdir = local('PYTEST_TMPDIR/test_fail10') @@ -713,14 +733,14 @@ and run them: E assert 0 test_module.py:2: AssertionError - ________________________________ test_fail2 ________________________________ + _____________________________________ test_fail2 _____________________________________ def test_fail2(): > assert 0 E assert 0 test_module.py:6: AssertionError - ========================= 2 failed in 0.12 seconds ========================= + ============================== 2 failed in 0.12 seconds ============================== you will have a "failures" file which contains the failing test ids:: @@ -797,17 +817,19 @@ and run it: .. 
code-block:: pytest $ pytest -s test_module.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 3 items test_module.py Esetting up a test failed! test_module.py::test_setup_fails Fexecuting test failed test_module.py::test_call_fails F - ================================== ERRORS ================================== - ____________________ ERROR at setup of test_setup_fails ____________________ + ======================================= ERRORS ======================================= + _________________________ ERROR at setup of test_setup_fails _________________________ @pytest.fixture def other(): @@ -815,8 +837,8 @@ and run it: E assert 0 test_module.py:7: AssertionError - ================================= FAILURES ================================= - _____________________________ test_call_fails ______________________________ + ====================================== FAILURES ====================================== + __________________________________ test_call_fails ___________________________________ something = None @@ -825,14 +847,14 @@ and run it: E assert 0 test_module.py:15: AssertionError - ________________________________ test_fail2 ________________________________ + _____________________________________ test_fail2 _____________________________________ def test_fail2(): > assert 0 E assert 0 test_module.py:19: AssertionError - ==================== 2 failed, 1 error in 0.12 seconds ===================== + ========================= 2 failed, 1 error in 0.12 seconds ========================== You'll see that the fixture finalizers could use the precise reporting information. 
diff --git a/doc/en/fixture.rst b/doc/en/fixture.rst index 4dd68f8e4..6aed3ca08 100644 --- a/doc/en/fixture.rst +++ b/doc/en/fixture.rst @@ -71,15 +71,17 @@ marked ``smtp_connection`` fixture function. Running the test looks like this: .. code-block:: pytest $ pytest test_smtpsimple.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_smtpsimple.py F [100%] + test_smtpsimple.py F [100%] - ================================= FAILURES ================================= - ________________________________ test_ehlo _________________________________ + ====================================== FAILURES ====================================== + _____________________________________ test_ehlo ______________________________________ smtp_connection = @@ -90,7 +92,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this: E assert 0 test_smtpsimple.py:11: AssertionError - ========================= 1 failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== In the failure traceback we see that the test function was called with a ``smtp_connection`` argument, the ``smtplib.SMTP()`` instance created by the fixture @@ -211,15 +213,17 @@ inspect what is going on and can now run the tests: .. 
code-block:: pytest $ pytest test_module.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_module.py FF [100%] + test_module.py FF [100%] - ================================= FAILURES ================================= - ________________________________ test_ehlo _________________________________ + ====================================== FAILURES ====================================== + _____________________________________ test_ehlo ______________________________________ smtp_connection = @@ -231,7 +235,7 @@ inspect what is going on and can now run the tests: E assert 0 test_module.py:6: AssertionError - ________________________________ test_noop _________________________________ + _____________________________________ test_noop ______________________________________ smtp_connection = @@ -242,7 +246,7 @@ inspect what is going on and can now run the tests: E assert 0 test_module.py:11: AssertionError - ========================= 2 failed in 0.12 seconds ========================= + ============================== 2 failed in 0.12 seconds ============================== You see the two ``assert 0`` failing and more importantly you can also see that the same (module-scoped) ``smtp_connection`` object was passed into the @@ -491,14 +495,14 @@ Running it: .. 
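Both failures above receive the same module-scoped ``smtp_connection`` object. A sketch of the fixture behind it, per the pytest docs example being regenerated (opening the connection needs network access, and only happens once per module because of the scope):

```python
# conftest.py -- sketch of the module-scoped fixture behind the runs above
import smtplib

import pytest


@pytest.fixture(scope="module")
def smtp_connection():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
```

With ``scope="module"`` the returned ``smtplib.SMTP`` instance is created once and handed to every test in the module, which is why both tracebacks show the same object.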
code-block:: pytest $ pytest -qq --tb=short test_anothersmtp.py - F [100%] - ================================= FAILURES ================================= - ______________________________ test_showhelo _______________________________ + F [100%] + ====================================== FAILURES ====================================== + ___________________________________ test_showhelo ____________________________________ test_anothersmtp.py:5: in test_showhelo assert 0, smtp_connection.helo() E AssertionError: (250, b'mail.python.org') E assert 0 - ------------------------- Captured stdout teardown ------------------------- + ------------------------------ Captured stdout teardown ------------------------------ finalizing (mail.python.org) voila! The ``smtp_connection`` fixture function picked up our mail server name @@ -595,9 +599,9 @@ So let's just do another run: .. code-block:: pytest $ pytest -q test_module.py - FFFF [100%] - ================================= FAILURES ================================= - ________________________ test_ehlo[smtp.gmail.com] _________________________ + FFFF [100%] + ====================================== FAILURES ====================================== + _____________________________ test_ehlo[smtp.gmail.com] ______________________________ smtp_connection = @@ -609,7 +613,7 @@ So let's just do another run: E assert 0 test_module.py:6: AssertionError - ________________________ test_noop[smtp.gmail.com] _________________________ + _____________________________ test_noop[smtp.gmail.com] ______________________________ smtp_connection = @@ -620,7 +624,7 @@ So let's just do another run: E assert 0 test_module.py:11: AssertionError - ________________________ test_ehlo[mail.python.org] ________________________ + _____________________________ test_ehlo[mail.python.org] _____________________________ smtp_connection = @@ -631,9 +635,9 @@ So let's just do another run: E AssertionError: assert b'smtp.gmail.com' in 
b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8\nCHUNKING' test_module.py:5: AssertionError - -------------------------- Captured stdout setup --------------------------- + ------------------------------- Captured stdout setup -------------------------------- finalizing - ________________________ test_noop[mail.python.org] ________________________ + _____________________________ test_noop[mail.python.org] _____________________________ smtp_connection = @@ -644,7 +648,7 @@ So let's just do another run: E assert 0 test_module.py:11: AssertionError - ------------------------- Captured stdout teardown ------------------------- + ------------------------------ Captured stdout teardown ------------------------------ finalizing 4 failed in 0.12 seconds @@ -699,9 +703,11 @@ Running the above tests results in the following test IDs being used: .. code-block:: pytest $ pytest --collect-only - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 10 items @@ -717,7 +723,7 @@ Running the above tests results in the following test IDs being used: - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ .. _`fixture-parametrize-marks`: @@ -743,17 +749,19 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``: .. 
code-block:: pytest $ pytest test_fixture_marks.py -v - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 3 items - test_fixture_marks.py::test_data[0] PASSED [ 33%] - test_fixture_marks.py::test_data[1] PASSED [ 66%] - test_fixture_marks.py::test_data[2] SKIPPED [100%] + test_fixture_marks.py::test_data[0] PASSED [ 33%] + test_fixture_marks.py::test_data[1] PASSED [ 66%] + test_fixture_marks.py::test_data[2] SKIPPED [100%] - =================== 2 passed, 1 skipped in 0.12 seconds ==================== + ======================== 2 passed, 1 skipped in 0.12 seconds ========================= .. _`interdependent fixtures`: @@ -788,16 +796,18 @@ Here we declare an ``app`` fixture which receives the previously defined .. code-block:: pytest $ pytest -v test_appsetup.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... 
collected 2 items - test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%] - test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%] + test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%] + test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%] - ========================= 2 passed in 0.12 seconds ========================= + ============================== 2 passed in 0.12 seconds ============================== Due to the parametrization of ``smtp_connection``, the test will run twice with two different ``App`` instances and respective smtp servers. There is no @@ -859,10 +869,12 @@ Let's run the tests in verbose mode while looking at the print-output: .. code-block:: pytest $ pytest -v -s test_module.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 cachedir: .pytest_cache + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collecting ... collected 8 items test_module.py::test_0[1] SETUP otherarg 1 @@ -898,7 +910,7 @@ Let's run the tests in verbose mode while looking at the print-output: TEARDOWN modarg mod2 - ========================= 8 passed in 0.12 seconds ========================= + ============================== 8 passed in 0.12 seconds ============================== You can see that the parametrized module-scoped ``modarg`` resource caused an ordering of test execution that led to the fewest possible "active" resources. @@ -963,7 +975,7 @@ to verify our fixture is activated and the tests pass: .. code-block:: pytest $ pytest -q - .. [100%] + .. 
[100%] 2 passed in 0.12 seconds You can specify multiple fixtures like this: @@ -1064,7 +1076,7 @@ If we run it, we get two passing tests: .. code-block:: pytest $ pytest -q - .. [100%] + .. [100%] 2 passed in 0.12 seconds Here is how autouse fixtures work in other scopes: diff --git a/doc/en/getting-started.rst b/doc/en/getting-started.rst index 500fc3d93..31910fa02 100644 --- a/doc/en/getting-started.rst +++ b/doc/en/getting-started.rst @@ -25,6 +25,8 @@ Install ``pytest`` $ pytest --version This is pytest version 4.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest.py + setuptools registered plugins: + hypothesis-3.x.y at $PYTHON_PREFIX/lib/python3.6/site-packages/hypothesis/extra/pytestplugin.py .. _`simpletest`: @@ -45,15 +47,17 @@ That’s it. You can now execute the test function: .. code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_sample.py F [100%] + test_sample.py F [100%] - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ + ====================================== FAILURES ====================================== + ____________________________________ test_answer _____________________________________ def test_answer(): > assert func(3) == 5 @@ -61,7 +65,7 @@ That’s it. 
You can now execute the test function: E + where 4 = func(3) test_sample.py:5: AssertionError - ========================= 1 failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== This test returns a failure report because ``func(3)`` does not return ``5``. @@ -94,7 +98,7 @@ Execute the test function with “quiet” reporting mode: .. code-block:: pytest $ pytest -q test_sysexit.py - . [100%] + . [100%] 1 passed in 0.12 seconds Group multiple tests in a class @@ -117,9 +121,9 @@ Once you develop multiple tests, you may want to group them into a class. pytest .. code-block:: pytest $ pytest -q test_class.py - .F [100%] - ================================= FAILURES ================================= - ____________________________ TestClass.test_two ____________________________ + .F [100%] + ====================================== FAILURES ====================================== + _________________________________ TestClass.test_two _________________________________ self = @@ -149,9 +153,9 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look .. 
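The quiet ``test_sysexit.py`` run above refers to the docs' ``SystemExit`` example; a sketch of that module:

```python
# test_sysexit.py -- sketch of the example behind the quiet "1 passed" run above
import pytest


def f():
    raise SystemExit(1)


def test_mytest():
    with pytest.raises(SystemExit):
        f()
```

``pytest.raises`` passes only if the block raises the named exception, so the test passes precisely because ``f()`` exits.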
code-block:: pytest $ pytest -q test_tmpdir.py - F [100%] - ================================= FAILURES ================================= - _____________________________ test_needsfiles ______________________________ + F [100%] + ====================================== FAILURES ====================================== + __________________________________ test_needsfiles ___________________________________ tmpdir = local('PYTEST_TMPDIR/test_needsfiles0') @@ -161,7 +165,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look E assert 0 test_tmpdir.py:3: AssertionError - --------------------------- Captured stdout call --------------------------- + -------------------------------- Captured stdout call -------------------------------- PYTEST_TMPDIR/test_needsfiles0 1 failed in 0.12 seconds diff --git a/doc/en/index.rst b/doc/en/index.rst index 7c201fbd7..3c9cb0241 100644 --- a/doc/en/index.rst +++ b/doc/en/index.rst @@ -27,15 +27,17 @@ To execute it: .. code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_sample.py F [100%] + test_sample.py F [100%] - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ + ====================================== FAILURES ====================================== + ____________________________________ test_answer _____________________________________ def test_answer(): > assert inc(3) == 5 @@ -43,7 +45,7 @@ To execute it: E + where 4 = inc(3) test_sample.py:6: AssertionError - ========================= 1 
failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See :ref:`Getting Started ` for more examples. diff --git a/doc/en/parametrize.rst b/doc/en/parametrize.rst index 099b531c2..0808b08df 100644 --- a/doc/en/parametrize.rst +++ b/doc/en/parametrize.rst @@ -55,15 +55,17 @@ them in turn: .. code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 3 items - test_expectation.py ..F [100%] + test_expectation.py ..F [100%] - ================================= FAILURES ================================= - ____________________________ test_eval[6*9-42] _____________________________ + ====================================== FAILURES ====================================== + _________________________________ test_eval[6*9-42] __________________________________ test_input = '6*9', expected = 42 @@ -78,7 +80,7 @@ them in turn: E + where 54 = eval('6*9') test_expectation.py:8: AssertionError - ==================== 1 failed, 2 passed in 0.12 seconds ==================== + ========================= 1 failed, 2 passed in 0.12 seconds ========================= As designed in this example, only one pair of input/output values fails the simple test function. And as usual with test function arguments, @@ -106,14 +108,16 @@ Let's run this: .. 
code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 3 items - test_expectation.py ..x [100%] + test_expectation.py ..x [100%] - =================== 2 passed, 1 xfailed in 0.12 seconds ==================== + ======================== 2 passed, 1 xfailed in 0.12 seconds ========================= The one parameter set which caused a failure previously now shows up as an "xfailed (expected to fail)" test. @@ -173,7 +177,7 @@ command line option and the parametrization of our test function:: If we now pass two stringinput values, our test will run twice:: $ pytest -q --stringinput="hello" --stringinput="world" test_strings.py - .. [100%] + .. [100%] 2 passed in 0.12 seconds Let's also run with a stringinput that will lead to a failing test: @@ -181,9 +185,9 @@ Let's also run with a stringinput that will lead to a failing test: .. code-block:: pytest $ pytest -q --stringinput="!" test_strings.py - F [100%] - ================================= FAILURES ================================= - ___________________________ test_valid_string[!] ___________________________ + F [100%] + ====================================== FAILURES ====================================== + ________________________________ test_valid_string[!] ________________________________ stringinput = '!' @@ -205,8 +209,8 @@ list: .. 
code-block:: pytest $ pytest -q -rs test_strings.py - s [100%] - ========================= short test summary info ========================== + s [100%] + ============================== short test summary info =============================== SKIP [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1 1 skipped in 0.12 seconds diff --git a/doc/en/skipping.rst b/doc/en/skipping.rst index ae1dc7149..0f2073090 100644 --- a/doc/en/skipping.rst +++ b/doc/en/skipping.rst @@ -328,13 +328,15 @@ Running it with the report-on-xfail option gives this output: .. code-block:: pytest example $ pytest -rx xfail_demo.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/example/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR/example, inifile: + plugins: hypothesis-3.x.y collected 7 items - xfail_demo.py xxxxxxx [100%] - ========================= short test summary info ========================== + xfail_demo.py xxxxxxx [100%] + ============================== short test summary info =============================== XFAIL xfail_demo.py::test_hello XFAIL xfail_demo.py::test_hello2 reason: [NOTRUN] @@ -348,7 +350,7 @@ Running it with the report-on-xfail option gives this output: reason: reason XFAIL xfail_demo.py::test_hello7 - ======================== 7 xfailed in 0.12 seconds ========================= + ============================= 7 xfailed in 0.12 seconds ============================== .. _`skip/xfail with parametrize`: diff --git a/doc/en/tmpdir.rst b/doc/en/tmpdir.rst index 8c21e17e5..5f7e98a84 100644 --- a/doc/en/tmpdir.rst +++ b/doc/en/tmpdir.rst @@ -40,15 +40,17 @@ Running this would result in a passed test except for the last .. 
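The ``xfail_demo.py`` summary above lists seven xfail variants; a partial sketch of a few of them, following the docs example (reasons and ``run=False`` match the summary lines shown):

```python
# xfail_demo.py -- sketch of some of the xfail variants reported above
import pytest

xfail = pytest.mark.xfail


@xfail
def test_hello():
    assert 0


@xfail(run=False)
def test_hello2():
    # "[NOTRUN]" in the summary above means this body is never executed
    assert 0


@xfail(reason="reason")
def test_hello6():
    assert 0
```

Each marked test fails as expected, so the whole run reports ``7 xfailed`` rather than failures.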
code-block:: pytest $ pytest test_tmp_path.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_tmp_path.py F [100%] + test_tmp_path.py F [100%] - ================================= FAILURES ================================= - _____________________________ test_create_file _____________________________ + ====================================== FAILURES ====================================== + __________________________________ test_create_file __________________________________ tmp_path = PosixPath('PYTEST_TMPDIR/test_create_file0') @@ -63,7 +65,7 @@ Running this would result in a passed test except for the last E assert 0 test_tmp_path.py:13: AssertionError - ========================= 1 failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== The ``tmp_path_factory`` fixture -------------------------------- @@ -102,15 +104,17 @@ Running this would result in a passed test except for the last .. 
code-block:: pytest $ pytest test_tmpdir.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_tmpdir.py F [100%] + test_tmpdir.py F [100%] - ================================= FAILURES ================================= - _____________________________ test_create_file _____________________________ + ====================================== FAILURES ====================================== + __________________________________ test_create_file __________________________________ tmpdir = local('PYTEST_TMPDIR/test_create_file0') @@ -123,7 +127,7 @@ Running this would result in a passed test except for the last E assert 0 test_tmpdir.py:7: AssertionError - ========================= 1 failed in 0.12 seconds ========================= + ============================== 1 failed in 0.12 seconds ============================== .. _`tmpdir factory example`: diff --git a/doc/en/unittest.rst b/doc/en/unittest.rst index 34c8a35db..fe7f2e550 100644 --- a/doc/en/unittest.rst +++ b/doc/en/unittest.rst @@ -127,15 +127,17 @@ the ``self.db`` values in the traceback: .. 
code-block:: pytest $ pytest test_unittest_db.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 2 items - test_unittest_db.py FF [100%] + test_unittest_db.py FF [100%] - ================================= FAILURES ================================= - ___________________________ MyTest.test_method1 ____________________________ + ====================================== FAILURES ====================================== + ________________________________ MyTest.test_method1 _________________________________ self = @@ -146,7 +148,7 @@ the ``self.db`` values in the traceback: E assert 0 test_unittest_db.py:9: AssertionError - ___________________________ MyTest.test_method2 ____________________________ + ________________________________ MyTest.test_method2 _________________________________ self = @@ -156,7 +158,7 @@ the ``self.db`` values in the traceback: E assert 0 test_unittest_db.py:12: AssertionError - ========================= 2 failed in 0.12 seconds ========================= + ============================== 2 failed in 0.12 seconds ============================== This default pytest traceback shows that the two test methods share the same ``self.db`` instance which was our intention @@ -206,7 +208,7 @@ Running this test module ...: .. code-block:: pytest $ pytest -q test_unittest_cleandir.py - . [100%] + . [100%] 1 passed in 0.12 seconds ... gives us one passed test because the ``initdir`` fixture function diff --git a/doc/en/usage.rst b/doc/en/usage.rst index bd9706c4f..87171507d 100644 --- a/doc/en/usage.rst +++ b/doc/en/usage.rst @@ -155,12 +155,14 @@ Example: .. 
code-block:: pytest $ pytest -ra - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ The ``-r`` options accepts a number of characters after it, with ``a`` used above meaning "all except passes". @@ -180,12 +182,14 @@ More than one character can be used, so for example to only see failed and skipp .. code-block:: pytest $ pytest -rfs - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had captured output: @@ -193,12 +197,14 @@ captured output: .. 
code-block:: pytest $ pytest -rpP - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 0 items - ======================= no tests ran in 0.12 seconds ======================= + ============================ no tests ran in 0.12 seconds ============================ .. _pdb-option: @@ -584,7 +590,7 @@ Running it will show that ``MyPlugin`` was added and its hook was invoked:: $ python myinvoke.py - . [100%]*** test run reporting finishing + . [100%]*** test run reporting finishing .. note:: diff --git a/doc/en/warnings.rst b/doc/en/warnings.rst index 3e69d3480..8de555d3c 100644 --- a/doc/en/warnings.rst +++ b/doc/en/warnings.rst @@ -23,20 +23,22 @@ Running pytest now produces this output: .. code-block:: pytest $ pytest test_show_warnings.py - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: + plugins: hypothesis-3.x.y collected 1 item - test_show_warnings.py . [100%] + test_show_warnings.py . 
[100%] - ============================= warnings summary ============================= + ================================== warnings summary ================================== test_show_warnings.py::test_one $REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2 warnings.warn(UserWarning("api v1, should use functions from v2")) -- Docs: https://docs.pytest.org/en/latest/warnings.html - =================== 1 passed, 1 warnings in 0.12 seconds =================== + ======================== 1 passed, 1 warnings in 0.12 seconds ======================== The ``-W`` flag can be passed to control which warnings will be displayed or even turn them into errors: @@ -44,15 +46,15 @@ them into errors: .. code-block:: pytest $ pytest -q test_show_warnings.py -W error::UserWarning - F [100%] - ================================= FAILURES ================================= - _________________________________ test_one _________________________________ + F [100%] + ====================================== FAILURES ====================================== + ______________________________________ test_one ______________________________________ def test_one(): > assert api_v1() == 1 test_show_warnings.py:8: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def api_v1(): > warnings.warn(UserWarning("api v1, should use functions from v2")) @@ -355,7 +357,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta $ pytest test_pytest_warnings.py -q - ============================= warnings summary ============================= + ================================== warnings summary ================================== test_pytest_warnings.py:1 $REGENDOC_TMPDIR/test_pytest_warnings.py:1: PytestWarning: cannot collect test class 'Test' because it has a __init__ constructor class Test: diff --git a/doc/en/writing_plugins.rst 
b/doc/en/writing_plugins.rst index 70bf315aa..f627fec05 100644 --- a/doc/en/writing_plugins.rst +++ b/doc/en/writing_plugins.rst @@ -411,20 +411,22 @@ additionally it is possible to copy examples for an example folder before runnin .. code-block:: pytest $ pytest - =========================== test session starts ============================ + ================================ test session starts ================================= platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y + hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('$REGENDOC_TMPDIR/.hypothesis/examples') rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + plugins: hypothesis-3.x.y collected 2 items - test_example.py .. [100%] + test_example.py .. [100%] - ============================= warnings summary ============================= + ================================== warnings summary ================================== test_example.py::test_plugin $REGENDOC_TMPDIR/test_example.py:4: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time testdir.copy_example("test_example.py") -- Docs: https://docs.pytest.org/en/latest/warnings.html - =================== 2 passed, 1 warnings in 0.12 seconds =================== + ======================== 2 passed, 1 warnings in 0.12 seconds ======================== For more information about the result object that ``runpytest()`` returns, and the methods that it provides please check out the :py:class:`RunResult