From 378a75ddf602278fb2049a743b2ca3237722603d Mon Sep 17 00:00:00 2001
From: Ronny Pfannschmidt
Date: Wed, 11 Mar 2020 15:23:25 +0100
Subject: [PATCH] run and fix tox -e regen to prepare 5.4

* no longer trigger deprecation warnings when looking up fixtures
* fix missed imports in a test example
---
 doc/en/assert.rst                |  6 ++++
 doc/en/builtin.rst               |  2 ++
 doc/en/cache.rst                 | 17 +++++++++--
 doc/en/capture.rst               |  2 ++
 doc/en/example/markers.rst       |  7 +++++
 doc/en/example/nonpython.rst     |  4 +++
 doc/en/example/parametrize.rst   | 17 ++++++-----
 doc/en/example/reportingdemo.rst | 49 ++++++++++++++++++++++++++++++--
 doc/en/example/simple.rst        | 23 ++++++++++++++-
 doc/en/fixture.rst               | 18 ++++++++++++
 doc/en/getting-started.rst       |  8 +++++-
 doc/en/index.rst                 |  2 ++
 doc/en/parametrize.rst           |  4 +++
 doc/en/tmpdir.rst                |  4 +++
 doc/en/unittest.rst              |  3 ++
 doc/en/usage.rst                 |  3 ++
 doc/en/warnings.rst              |  2 ++
 17 files changed, 158 insertions(+), 13 deletions(-)

diff --git a/doc/en/assert.rst b/doc/en/assert.rst
index 995edfaa8..d7c380c60 100644
--- a/doc/en/assert.rst
+++ b/doc/en/assert.rst
@@ -47,6 +47,8 @@ you will see the return value of the function call:
     E        +  where 3 = f()

     test_assert1.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_assert1.py::test_function - assert 3 == 4
     ============================ 1 failed in 0.12s =============================

 ``pytest`` has support for showing the values of the most common subexpressions
@@ -208,6 +210,8 @@ if you run this module:
     E         Use -v to get the full diff

     test_assert2.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
     ============================ 1 failed in 0.12s =============================

 Special comparisons are done for a number of cases:
@@ -279,6 +283,8 @@ the conftest file:
     E         vals: 1 != 2

     test_foocompare.py:12: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
     1 failed in 0.12s

 .. _assert-details:
diff --git a/doc/en/builtin.rst b/doc/en/builtin.rst
index 7b8fd4a5a..7864233fc 100644
--- a/doc/en/builtin.rst
+++ b/doc/en/builtin.rst
@@ -137,9 +137,11 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
     tmpdir_factory [session scope]
         Return a :class:`_pytest.tmpdir.TempdirFactory` instance for the test session.
+
     tmp_path_factory [session scope]
         Return a :class:`_pytest.tmpdir.TempPathFactory` instance for the test session.
+
     tmpdir
         Return a temporary directory path object
         which is unique to each test function invocation,
diff --git a/doc/en/cache.rst b/doc/en/cache.rst
index 6baccd440..b01182d98 100644
--- a/doc/en/cache.rst
+++ b/doc/en/cache.rst
@@ -75,6 +75,9 @@ If you run this for the first time you will see two failures:
     E       Failed: bad luck

     test_50.py:7: Failed
+    ========================= short test summary info ==========================
+    FAILED test_50.py::test_num[17] - Failed: bad luck
+    FAILED test_50.py::test_num[25] - Failed: bad luck
     2 failed, 48 passed in 0.12s

 If you then run it with ``--lf``:
@@ -86,7 +89,7 @@ If you then run it with ``--lf``:
     platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
     cachedir: $PYTHON_PREFIX/.pytest_cache
     rootdir: $REGENDOC_TMPDIR
-    collected 50 items / 48 deselected / 2 selected
+    collected 2 items
     run-last-failure: rerun previous 2 failures

     test_50.py FF                                                        [100%]
@@ -114,7 +117,10 @@ If you then run it with ``--lf``:
     E       Failed: bad luck

     test_50.py:7: Failed
-    ===================== 2 failed, 48 deselected in 0.12s =====================
+    ========================= short test summary info ==========================
+    FAILED test_50.py::test_num[17] - Failed: bad luck
+    FAILED test_50.py::test_num[25] - Failed: bad luck
+    ============================ 2 failed in 0.12s =============================

 You have run only the two failing tests from the last run, while the 48 passing
 tests have not been run ("deselected").
@@ -158,6 +164,9 @@ of ``FF`` and dots):
     E       Failed: bad luck

     test_50.py:7: Failed
+    ========================= short test summary info ==========================
+    FAILED test_50.py::test_num[17] - Failed: bad luck
+    FAILED test_50.py::test_num[25] - Failed: bad luck
     ======================= 2 failed, 48 passed in 0.12s =======================

 .. _`config.cache`:
@@ -230,6 +239,8 @@ If you run this command for the first time, you can see the print statement:
     test_caching.py:20: AssertionError
     -------------------------- Captured stdout setup ---------------------------
     running expensive computation...
+    ========================= short test summary info ==========================
+    FAILED test_caching.py::test_function - assert 42 == 23
     1 failed in 0.12s

 If you run it a second time, the value will be retrieved from
@@ -249,6 +260,8 @@ the cache and nothing will be printed:
     E       assert 42 == 23

     test_caching.py:20: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_caching.py::test_function - assert 42 == 23
     1 failed in 0.12s

 See the :fixture:`config.cache fixture <cache>` for more details.
diff --git a/doc/en/capture.rst b/doc/en/capture.rst
index 45892a98e..7c8c25cc5 100644
--- a/doc/en/capture.rst
+++ b/doc/en/capture.rst
@@ -100,6 +100,8 @@ of the failing function and hide the other one:
     test_module.py:12: AssertionError
     -------------------------- Captured stdout setup ---------------------------
     setting up
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_func2 - assert False
     ======================= 1 failed, 1 passed in 0.12s ========================

 Accessing captured output from a test function
diff --git a/doc/en/example/markers.rst b/doc/en/example/markers.rst
index e83beedd0..467c2a2fa 100644
--- a/doc/en/example/markers.rst
+++ b/doc/en/example/markers.rst
@@ -715,6 +715,9 @@ We can now use the ``-m option`` to select one set:
     test_module.py:8: in test_interface_complex
         assert 0
     E   assert 0
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_interface_simple - assert 0
+    FAILED test_module.py::test_interface_complex - assert 0
     ===================== 2 failed, 2 deselected in 0.12s ======================

 or to select both "event" and "interface" tests:
@@ -743,4 +746,8 @@ or to select both "event" and "interface" tests:
     test_module.py:12: in test_event_simple
         assert 0
     E   assert 0
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_interface_simple - assert 0
+    FAILED test_module.py::test_interface_complex - assert 0
+    FAILED test_module.py::test_event_simple - assert 0
     ===================== 3 failed, 1 deselected in 0.12s ======================
diff --git a/doc/en/example/nonpython.rst b/doc/en/example/nonpython.rst
index 28b20800e..083f6b439 100644
--- a/doc/en/example/nonpython.rst
+++ b/doc/en/example/nonpython.rst
@@ -41,6 +41,8 @@ now execute the test specification:
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
+    ========================= short test summary info ==========================
+    FAILED test_simple.yaml::hello
     ======================= 1 failed, 1 passed in 0.12s ========================

 .. regendoc:wipe
@@ -77,6 +79,8 @@ consulted when reporting in ``verbose`` mode:
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
+    ========================= short test summary info ==========================
+    FAILED test_simple.yaml::hello
     ======================= 1 failed, 1 passed in 0.12s ========================

 .. regendoc:wipe
diff --git a/doc/en/example/parametrize.rst b/doc/en/example/parametrize.rst
index f1425342b..8165060d5 100644
--- a/doc/en/example/parametrize.rst
+++ b/doc/en/example/parametrize.rst
@@ -73,6 +73,8 @@ let's run the full monty:
     E       assert 4 < 4

     test_compute.py:4: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_compute.py::test_compute[4] - assert 4 < 4
     1 failed, 4 passed in 0.12s

 As expected when running the full range of ``param1`` values
@@ -343,6 +345,8 @@ And then when we run the test:
     E       Failed: deliberately failing for demo purposes

     test_backends.py:8: Failed
+    ========================= short test summary info ==========================
+    FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f...
     1 failed, 1 passed in 0.12s

 The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
@@ -457,6 +461,8 @@ argument sets to use for each test function. Let's run it:
     E       assert 1 == 2

     test_parametrize.py:21: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_parametrize.py::TestClass::test_equals[1-2] - assert 1 == 2
     1 failed, 2 passed in 0.12s

 Indirect parametrization with multiple fixtures
@@ -478,11 +484,8 @@ Running it results in some skips if we don't have all the python interpreters in
 .. code-block:: pytest

     $ pytest -rs -q multipython.py
-    ssssssssssss...ssssssssssss                                          [100%]
-    ========================= short test summary info ==========================
-    SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.5' not found
-    SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.7' not found
-    3 passed, 24 skipped in 0.12s
+    ...........................                                          [100%]
+    27 passed in 0.12s

 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -607,13 +610,13 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:

     platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
     cachedir: $PYTHON_PREFIX/.pytest_cache
     rootdir: $REGENDOC_TMPDIR
-    collecting ... collected 17 items / 14 deselected / 3 selected
+    collecting ... collected 14 items / 11 deselected / 3 selected

     test_pytest_param_example.py::test_eval[1+7-8] PASSED                [ 33%]
     test_pytest_param_example.py::test_eval[basic_2+4] PASSED            [ 66%]
     test_pytest_param_example.py::test_eval[basic_6*9] XFAIL             [100%]

-    =============== 2 passed, 14 deselected, 1 xfailed in 0.12s ================
+    =============== 2 passed, 11 deselected, 1 xfailed in 0.12s ================

 As the result:
diff --git a/doc/en/example/reportingdemo.rst b/doc/en/example/reportingdemo.rst
index 1ab0f9c82..2a1b2ed65 100644
--- a/doc/en/example/reportingdemo.rst
+++ b/doc/en/example/reportingdemo.rst
@@ -436,7 +436,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
         items = [1, 2, 3]
         print("items is {!r}".format(items))
     >       a, b = items.pop()
-    E       TypeError: 'int' object is not iterable
+    E       TypeError: cannot unpack non-iterable int object

     failure_demo.py:181: TypeError
     --------------------------- Captured stdout call ---------------------------
@@ -516,7 +516,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
         def test_z2_type_error(self):
             items = 3
     >       a, b = items
-    E       TypeError: 'int' object is not iterable
+    E       TypeError: cannot unpack non-iterable int object

     failure_demo.py:222: TypeError
     ______________________ TestMoreErrors.test_startswith ______________________
@@ -650,4 +650,49 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E        +  where 1 = This is JSON\n{\n  'foo': 'bar'\n}.a

     failure_demo.py:282: AssertionError
+    ========================= short test summary info ==========================
+    FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6
+    FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43
+    FAILED failure_demo.py::TestFailing::test_simple_multiline - assert 42 == 54
+    FAILED failure_demo.py::TestFailing::test_not - assert not 42
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - Asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - ...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - ...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - Asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - Assert...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - asser...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - ...
+    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - Asse...
+    FAILED failure_demo.py::test_attribute - assert 1 == 2
+    FAILED failure_demo.py::test_attribute_instance - AssertionError: assert ...
+    FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get...
+    FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert ...
+    FAILED failure_demo.py::TestRaises::test_raises - ValueError: invalid lit...
+    FAILED failure_demo.py::TestRaises::test_raises_doesnt - Failed: DID NOT ...
+    FAILED failure_demo.py::TestRaises::test_raise - ValueError: demo error
+    FAILED failure_demo.py::TestRaises::test_tupleerror - ValueError: not eno...
+    FAILED failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it
+    FAILED failure_demo.py::TestRaises::test_some_error - NameError: name 'na...
+    FAILED failure_demo.py::test_dynamic_compile_shows_nicely - AssertionError
+    FAILED failure_demo.py::TestMoreErrors::test_complex_error - assert 44 == 43
+    FAILED failure_demo.py::TestMoreErrors::test_z1_unpack_error - ValueError...
+    FAILED failure_demo.py::TestMoreErrors::test_z2_type_error - TypeError: c...
+    FAILED failure_demo.py::TestMoreErrors::test_startswith - AssertionError:...
+    FAILED failure_demo.py::TestMoreErrors::test_startswith_nested - Assertio...
+    FAILED failure_demo.py::TestMoreErrors::test_global_func - assert False
+    FAILED failure_demo.py::TestMoreErrors::test_instance - assert 42 != 42
+    FAILED failure_demo.py::TestMoreErrors::test_compare - assert 11 < 5
+    FAILED failure_demo.py::TestMoreErrors::test_try_finally - assert 1 == 0
+    FAILED failure_demo.py::TestCustomAssertMsg::test_single_line - Assertion...
+    FAILED failure_demo.py::TestCustomAssertMsg::test_multiline - AssertionEr...
+    FAILED failure_demo.py::TestCustomAssertMsg::test_custom_repr - Assertion...
     ============================ 44 failed in 0.12s ============================
diff --git a/doc/en/example/simple.rst b/doc/en/example/simple.rst
index 1a5c5b444..d149553c7 100644
--- a/doc/en/example/simple.rst
+++ b/doc/en/example/simple.rst
@@ -65,6 +65,8 @@ Let's run this without supplying our new option:
     test_sample.py:6: AssertionError
     --------------------------- Captured stdout call ---------------------------
     first
+    ========================= short test summary info ==========================
+    FAILED test_sample.py::test_answer - assert 0
     1 failed in 0.12s

 And now with supplying a command line option:
@@ -89,6 +91,8 @@ And now with supplying a command line option:
     test_sample.py:6: AssertionError
     --------------------------- Captured stdout call ---------------------------
     second
+    ========================= short test summary info ==========================
+    FAILED test_sample.py::test_answer - assert 0
     1 failed in 0.12s

 You can see that the command line option arrived in our test. This
@@ -261,6 +265,8 @@ Let's run our little function:
     E       Failed: not configured: 42

     test_checkconfig.py:11: Failed
+    ========================= short test summary info ==========================
+    FAILED test_checkconfig.py::test_something - Failed: not configured: 42
     1 failed in 0.12s

 If you only want to hide certain exceptions, you can set ``__tracebackhide__``
@@ -443,7 +449,7 @@ Now we can profile which test functions execute the slowest:
     ========================= slowest 3 test durations =========================
     0.30s call     test_some_are_slow.py::test_funcslow2
     0.20s call     test_some_are_slow.py::test_funcslow1
-    0.11s call     test_some_are_slow.py::test_funcfast
+    0.10s call     test_some_are_slow.py::test_funcfast
     ============================ 3 passed in 0.12s =============================

 incremental testing - test steps
@@ -461,6 +467,9 @@ an ``incremental`` marker which is to be used on classes:

     # content of conftest.py

+    from typing import Dict, Tuple
+    import pytest
+
     # store history of failures per test class name and per index in parametrize (if parametrize used)
     _test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}
@@ -669,6 +678,11 @@ We can run this:
     E       assert 0

     a/test_db2.py:2: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_step.py::TestUserHandling::test_modification - assert 0
+    FAILED a/test_db.py::test_a1 - AssertionError: (smtp.gmail.com)
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_ehlo - assert 0
+    FAILED test_module.py::test_noop - assert 0
     2 failed in 0.12s

 Let's quickly create another test module that actually sets the
@@ -648,6 +659,8 @@ Running it:
     E       assert 0
     ------------------------- Captured stdout teardown -------------------------
     finalizing  (mail.python.org)
+    ========================= short test summary info ==========================
+    FAILED test_anothersmtp.py::test_showhelo - AssertionError: (250, b'mail....

 voila! The ``smtp_connection`` fixture function picked up our mail server name
 from the module namespace.
@@ -796,6 +809,11 @@ So let's just do another run:
     test_module.py:13: AssertionError
     ------------------------- Captured stdout teardown -------------------------
     finalizing
+    ========================= short test summary info ==========================
+    FAILED test_module.py::test_ehlo[smtp.gmail.com] - assert 0
+    FAILED test_module.py::test_noop[smtp.gmail.com] - assert 0
+    FAILED test_module.py::test_ehlo[mail.python.org] - AssertionError: asser...
+    FAILED test_module.py::test_noop[mail.python.org] - assert 0
     4 failed in 0.12s

 We see that our two test functions each ran twice, against the different
diff --git a/doc/en/getting-started.rst b/doc/en/getting-started.rst
index a1bdf16fe..e1890115e 100644
--- a/doc/en/getting-started.rst
+++ b/doc/en/getting-started.rst
@@ -28,7 +28,7 @@ Install ``pytest``
 .. code-block:: bash

     $ pytest --version
-    This is pytest version 5.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest/__init__.py
+    This is pytest version 5.x.y, imported from $PYTHON_PREFIX/lib/python3.7/site-packages/pytest/__init__.py

 .. _`simpletest`:
@@ -69,6 +69,8 @@ That’s it. You can now execute the test function:
     E        +  where 4 = func(3)

     test_sample.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_sample.py::test_answer - assert 4 == 5
     ============================ 1 failed in 0.12s =============================

 This test returns a failure report because ``func(3)`` does not return ``5``.
@@ -145,6 +147,8 @@ Once you develop multiple tests, you may want to group them into a class. pytest
     E        +  where False = hasattr('hello', 'check')

     test_class.py:8: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_class.py::TestClass::test_two - AssertionError: assert False
     1 failed, 1 passed in 0.12s

 The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you understand the reason for the failure.
@@ -180,6 +184,8 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
     test_tmpdir.py:3: AssertionError
     --------------------------- Captured stdout call ---------------------------
     PYTEST_TMPDIR/test_needsfiles0
+    ========================= short test summary info ==========================
+    FAILED test_tmpdir.py::test_needsfiles - assert 0
     1 failed in 0.12s

 More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.
diff --git a/doc/en/index.rst b/doc/en/index.rst
index 806c498c7..04c2a36e0 100644
--- a/doc/en/index.rst
+++ b/doc/en/index.rst
@@ -44,6 +44,8 @@ To execute it:
     E        +  where 4 = inc(3)

     test_sample.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_sample.py::test_answer - assert 4 == 5
     ============================ 1 failed in 0.12s =============================

 Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
diff --git a/doc/en/parametrize.rst b/doc/en/parametrize.rst
index 072065c83..29223e28e 100644
--- a/doc/en/parametrize.rst
+++ b/doc/en/parametrize.rst
@@ -75,6 +75,8 @@ them in turn:
     E        +  where 54 = eval('6*9')

     test_expectation.py:6: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_expectation.py::test_eval[6*9-42] - AssertionError: assert 54...
     ======================= 1 failed, 2 passed in 0.12s ========================

 .. note::
@@ -225,6 +227,8 @@ Let's also run with a stringinput that will lead to a failing test:
     E        +  where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha

     test_strings.py:4: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_strings.py::test_valid_string[!] - AssertionError: assert False
     1 failed in 0.12s

 As expected our test function fails.
diff --git a/doc/en/tmpdir.rst b/doc/en/tmpdir.rst
index b9faef4dc..a4f7326fd 100644
--- a/doc/en/tmpdir.rst
+++ b/doc/en/tmpdir.rst
@@ -64,6 +64,8 @@ Running this would result in a passed test except for the last
     E       assert 0

     test_tmp_path.py:13: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_tmp_path.py::test_create_file - assert 0
     ============================ 1 failed in 0.12s =============================

 .. _`tmp_path_factory example`:
@@ -133,6 +135,8 @@ Running this would result in a passed test except for the last
     E       assert 0

     test_tmpdir.py:9: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_tmpdir.py::test_create_file - assert 0
     ============================ 1 failed in 0.12s =============================

 .. _`tmpdir factory example`:
diff --git a/doc/en/unittest.rst b/doc/en/unittest.rst
index cd7858190..c30b17110 100644
--- a/doc/en/unittest.rst
+++ b/doc/en/unittest.rst
@@ -166,6 +166,9 @@ the ``self.db`` values in the traceback:
     E       assert 0

     test_unittest_db.py:13: AssertionError
+    ========================= short test summary info ==========================
+    FAILED test_unittest_db.py::MyTest::test_method1 - AssertionError: