run and fix tox -e regen to prepare 5.4
* no longer trigger deprecation warnings when looking up fixtures
* fix missed imports in a test example
parent e1b3a68462
commit 378a75ddf6
@@ -47,6 +47,8 @@ you will see the return value of the function call:

    E + where 3 = f()

    test_assert1.py:6: AssertionError
    ========================= short test summary info ==========================
    FAILED test_assert1.py::test_function - assert 3 == 4
    ============================ 1 failed in 0.12s =============================

``pytest`` has support for showing the values of the most common subexpressions
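For reference, the module behind this output is essentially the classic introspection example (a minimal sketch consistent with the output shown, not part of the diff):

.. code-block:: python

    # sketch of test_assert1.py: pytest reports the value of the f() subexpression
    def f():
        return 3


    def test_function():
        assert f() == 4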
@@ -208,6 +210,8 @@ if you run this module:

    E Use -v to get the full diff

    test_assert2.py:6: AssertionError
    ========================= short test summary info ==========================
    FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
    ============================ 1 failed in 0.12s =============================
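The failing set comparison above comes from a module along these lines (a sketch consistent with the truncated ``{'0'...`` output):

.. code-block:: python

    # sketch of test_assert2.py: pytest prints only the differing set items
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        assert set1 == set2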
Special comparisons are done for a number of cases:
@@ -279,6 +283,8 @@ the conftest file:

    E vals: 1 != 2

    test_foocompare.py:12: AssertionError
    ========================= short test summary info ==========================
    FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
    1 failed in 0.12s
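The ``Comparing Foo instances:`` lines are produced by a ``pytest_assertrepr_compare`` hook roughly like the following (a sketch; ``Foo`` and the message format are reconstructed from the output above):

.. code-block:: python

    # content of test_foocompare.py (sketch)
    class Foo:
        def __init__(self, val):
            self.val = val

        def __eq__(self, other):
            return self.val == other.val


    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
        assert f1 == f2


    # content of conftest.py (sketch)
    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return [
                "Comparing Foo instances:",
                "   vals: {} != {}".format(left.val, right.val),
            ]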
.. _assert-details:
@@ -137,9 +137,11 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a

    tmpdir_factory [session scope]
        Return a :class:`_pytest.tmpdir.TempdirFactory` instance for the test session.

    tmp_path_factory [session scope]
        Return a :class:`_pytest.tmpdir.TempPathFactory` instance for the test session.

    tmpdir
        Return a temporary directory path object
        which is unique to each test function invocation,
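As a usage sketch (hypothetical fixture and file names, not from the diff), a session-scoped fixture can build shared files through ``tmpdir_factory``:

.. code-block:: python

    import pytest


    @pytest.fixture(scope="session")
    def image_file(tmpdir_factory):
        # mktemp() returns a fresh, uniquely named py.path.local directory
        fn = tmpdir_factory.mktemp("data").join("img.png")
        fn.write("fake image data")
        return fn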
@@ -75,6 +75,9 @@ If you run this for the first time you will see two failures:

    E Failed: bad luck

    test_50.py:7: Failed
    ========================= short test summary info ==========================
    FAILED test_50.py::test_num[17] - Failed: bad luck
    FAILED test_50.py::test_num[25] - Failed: bad luck
    2 failed, 48 passed in 0.12s
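The ``test_50.py`` module that produces these two failures is, roughly:

.. code-block:: python

    # content of test_50.py (sketch reconstructed from the output)
    import pytest


    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")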
If you then run it with ``--lf``:
@@ -86,7 +89,7 @@ If you then run it with ``--lf``:

    platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collected 50 items / 48 deselected / 2 selected
    collected 2 items
    run-last-failure: rerun previous 2 failures

    test_50.py FF [100%]
@@ -114,7 +117,10 @@ If you then run it with ``--lf``:

    E Failed: bad luck

    test_50.py:7: Failed
    ===================== 2 failed, 48 deselected in 0.12s =====================
    ========================= short test summary info ==========================
    FAILED test_50.py::test_num[17] - Failed: bad luck
    FAILED test_50.py::test_num[25] - Failed: bad luck
    ============================ 2 failed in 0.12s =============================

You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").
@@ -158,6 +164,9 @@ of ``FF`` and dots):

    E Failed: bad luck

    test_50.py:7: Failed
    ========================= short test summary info ==========================
    FAILED test_50.py::test_num[17] - Failed: bad luck
    FAILED test_50.py::test_num[25] - Failed: bad luck
    ======================= 2 failed, 48 passed in 0.12s =======================

.. _`config.cache`:
@@ -230,6 +239,8 @@ If you run this command for the first time, you can see the print statement:

    test_caching.py:20: AssertionError
    -------------------------- Captured stdout setup ---------------------------
    running expensive computation...
    ========================= short test summary info ==========================
    FAILED test_caching.py::test_function - assert 42 == 23
    1 failed in 0.12s

If you run it a second time, the value will be retrieved from
@@ -249,6 +260,8 @@ the cache and nothing will be printed:

    E assert 42 == 23

    test_caching.py:20: AssertionError
    ========================= short test summary info ==========================
    FAILED test_caching.py::test_function - assert 42 == 23
    1 failed in 0.12s

See the :fixture:`config.cache fixture <config.cache>` for more details.
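The caching test behind both runs looks roughly like this (a sketch of the ``config.cache`` get/set pattern, reconstructed from the output above):

.. code-block:: python

    # content of test_caching.py (sketch)
    import pytest


    def expensive_computation():
        print("running expensive computation...")


    @pytest.fixture
    def mydata(request):
        # cache.get returns the stored value, or the default on a cache miss
        val = request.config.cache.get("example/value", None)
        if val is None:
            expensive_computation()
            val = 42
            request.config.cache.set("example/value", val)
        return val


    def test_function(mydata):
        assert mydata == 23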
@@ -100,6 +100,8 @@ of the failing function and hide the other one:

    test_module.py:12: AssertionError
    -------------------------- Captured stdout setup ---------------------------
    setting up <function test_func2 at 0xdeadbeef>
    ========================= short test summary info ==========================
    FAILED test_module.py::test_func2 - assert False
    ======================= 1 failed, 1 passed in 0.12s ========================

Accessing captured output from a test function
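That section covers the ``capsys``/``capfd`` fixtures; a minimal sketch of reading captured output inside a test:

.. code-block:: python

    def test_output(capsys):
        print("hello")
        captured = capsys.readouterr()  # snapshot of stdout/stderr captured so far
        assert captured.out == "hello\n"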
@@ -715,6 +715,9 @@ We can now use the ``-m`` option to select one set:

    test_module.py:8: in test_interface_complex
    assert 0
    E assert 0
    ========================= short test summary info ==========================
    FAILED test_module.py::test_interface_simple - assert 0
    FAILED test_module.py::test_interface_complex - assert 0
    ===================== 2 failed, 2 deselected in 0.12s ======================

or to select both "event" and "interface" tests:
@@ -743,4 +746,8 @@ or to select both "event" and "interface" tests:

    test_module.py:12: in test_event_simple
    assert 0
    E assert 0
    ========================= short test summary info ==========================
    FAILED test_module.py::test_interface_simple - assert 0
    FAILED test_module.py::test_interface_complex - assert 0
    FAILED test_module.py::test_event_simple - assert 0
    ===================== 3 failed, 1 deselected in 0.12s ======================
@@ -41,6 +41,8 @@ now execute the test specification:

    usecase execution failed
       spec failed: 'some': 'other'
       no further details known at this point.
    ========================= short test summary info ==========================
    FAILED test_simple.yaml::hello
    ======================= 1 failed, 1 passed in 0.12s ========================

.. regendoc:wipe
@@ -77,6 +79,8 @@ consulted when reporting in ``verbose`` mode:

    usecase execution failed
       spec failed: 'some': 'other'
       no further details known at this point.
    ========================= short test summary info ==========================
    FAILED test_simple.yaml::hello
    ======================= 1 failed, 1 passed in 0.12s ========================

.. regendoc:wipe
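For context, ``test_simple.yaml::hello`` is collected by a ``conftest.py`` plugin roughly like the following (a sketch against the pytest 5.x collection API; the constructor style changed around 5.4 with ``from_parent``, so treat the exact signatures as assumptions):

.. code-block:: python

    import pytest


    def pytest_collect_file(parent, path):
        # collect test_*.yaml files alongside normal python tests
        if path.ext == ".yaml" and path.basename.startswith("test"):
            return YamlFile(path, parent)


    class YamlFile(pytest.File):
        def collect(self):
            import yaml  # requires the PyYAML package

            raw = yaml.safe_load(self.fspath.open())
            for name, spec in sorted(raw.items()):
                yield YamlItem(name, self, spec)


    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super().__init__(name, parent)
            self.spec = spec

        def runtest(self):
            for name, value in sorted(self.spec.items()):
                # a trivial "specification" check: each key must equal its value
                if name != value:
                    raise YamlException(self, name, value)

        def repr_failure(self, excinfo):
            """Called when self.runtest() raises an exception."""
            if isinstance(excinfo.value, YamlException):
                return "\n".join(
                    [
                        "usecase execution failed",
                        "   spec failed: {1!r}: {2!r}".format(*excinfo.value.args),
                        "   no further details known at this point.",
                    ]
                )

        def reportinfo(self):
            return self.fspath, 0, "usecase: {}".format(self.name)


    class YamlException(Exception):
        """Custom exception for error reporting."""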
@@ -73,6 +73,8 @@ let's run the full monty:

    E assert 4 < 4

    test_compute.py:4: AssertionError
    ========================= short test summary info ==========================
    FAILED test_compute.py::test_compute[4] - assert 4 < 4
    1 failed, 4 passed in 0.12s
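The run above is driven by an ``--all`` option wired into parametrization, roughly (a sketch consistent with the five ``param1`` values):

.. code-block:: python

    # content of test_compute.py (sketch)
    def test_compute(param1):
        assert param1 < 4


    # content of conftest.py (sketch)
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true", help="run all combinations")


    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            # 0..4 with --all, otherwise just 0..1
            end = 5 if metafunc.config.getoption("all") else 2
            metafunc.parametrize("param1", range(end))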
As expected when running the full range of ``param1`` values
@@ -343,6 +345,8 @@ And then when we run the test:

    E Failed: deliberately failing for demo purposes

    test_backends.py:8: Failed
    ========================= short test summary info ==========================
    FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f...
    1 failed, 1 passed in 0.12s

The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"``
failed. Our ``db`` fixture function instantiated each of the DB values during the setup
phase, while ``pytest_generate_tests`` generated two corresponding calls to
``test_db_initialized`` during the collection phase.
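A sketch of the fixture/hook pair described here (class and parameter names reconstructed from the ``d1``/``d2`` test ids):

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest


    class DB1:
        "one database object"


    class DB2:
        "alternative database object"


    @pytest.fixture
    def db(request):
        # request.param is set by the indirect parametrization below
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        raise ValueError("invalid internal test config")


    def pytest_generate_tests(metafunc):
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)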
@@ -457,6 +461,8 @@ argument sets to use for each test function. Let's run it:

    E assert 1 == 2

    test_parametrize.py:21: AssertionError
    ========================= short test summary info ==========================
    FAILED test_parametrize.py::TestClass::test_equals[1-2] - assert 1 == 2
    1 failed, 2 passed in 0.12s

Indirect parametrization with multiple fixtures
@@ -478,11 +484,8 @@ Running it results in some skips if we don't have all the python interpreters in

.. code-block:: pytest

    $ pytest -rs -q multipython.py
    ssssssssssss...ssssssssssss [100%]
    ========================= short test summary info ==========================
    SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.5' not found
    SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.7' not found
    3 passed, 24 skipped in 0.12s
    ........................... [100%]
    27 passed in 0.12s

Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -607,13 +610,13 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:

    platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collecting ... collected 17 items / 14 deselected / 3 selected
    collecting ... collected 14 items / 11 deselected / 3 selected

    test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
    test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
    test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]

    =============== 2 passed, 14 deselected, 1 xfailed in 0.12s ================
    =============== 2 passed, 11 deselected, 1 xfailed in 0.12s ================
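The ``basic_2+4``/``basic_6*9`` ids suggest ``pytest.param`` entries carrying a ``basic`` mark, roughly (ids and marks reconstructed from the output above):

.. code-block:: python

    # content of test_pytest_param_example.py (sketch)
    import pytest


    @pytest.mark.parametrize(
        "test_input,expected",
        [
            ("1+7", 8),
            pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
            pytest.param(
                "6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
            ),
        ],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected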
As the result:
@@ -436,7 +436,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:

    items = [1, 2, 3]
    print("items is {!r}".format(items))
    > a, b = items.pop()
    E TypeError: 'int' object is not iterable
    E TypeError: cannot unpack non-iterable int object

    failure_demo.py:181: TypeError
    --------------------------- Captured stdout call ---------------------------
@@ -516,7 +516,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:

    def test_z2_type_error(self):
        items = 3
    > a, b = items
    E TypeError: 'int' object is not iterable
    E TypeError: cannot unpack non-iterable int object

    failure_demo.py:222: TypeError
    ______________________ TestMoreErrors.test_startswith ______________________
@@ -650,4 +650,49 @@ Here is a nice run of several failures and how ``pytest`` presents things:

    E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a

    failure_demo.py:282: AssertionError
    ========================= short test summary info ==========================
    FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6
    FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43
    FAILED failure_demo.py::TestFailing::test_simple_multiline - assert 42 == 54
    FAILED failure_demo.py::TestFailing::test_not - assert not 42
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - Asser...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - ...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - asser...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - ...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - Asser...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - Assert...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list
    FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - asser...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline
    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single
    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long
    FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - ...
    FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - Asse...
    FAILED failure_demo.py::test_attribute - assert 1 == 2
    FAILED failure_demo.py::test_attribute_instance - AssertionError: assert ...
    FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get...
    FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert ...
    FAILED failure_demo.py::TestRaises::test_raises - ValueError: invalid lit...
    FAILED failure_demo.py::TestRaises::test_raises_doesnt - Failed: DID NOT ...
    FAILED failure_demo.py::TestRaises::test_raise - ValueError: demo error
    FAILED failure_demo.py::TestRaises::test_tupleerror - ValueError: not eno...
    FAILED failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it
    FAILED failure_demo.py::TestRaises::test_some_error - NameError: name 'na...
    FAILED failure_demo.py::test_dynamic_compile_shows_nicely - AssertionError
    FAILED failure_demo.py::TestMoreErrors::test_complex_error - assert 44 == 43
    FAILED failure_demo.py::TestMoreErrors::test_z1_unpack_error - ValueError...
    FAILED failure_demo.py::TestMoreErrors::test_z2_type_error - TypeError: c...
    FAILED failure_demo.py::TestMoreErrors::test_startswith - AssertionError:...
    FAILED failure_demo.py::TestMoreErrors::test_startswith_nested - Assertio...
    FAILED failure_demo.py::TestMoreErrors::test_global_func - assert False
    FAILED failure_demo.py::TestMoreErrors::test_instance - assert 42 != 42
    FAILED failure_demo.py::TestMoreErrors::test_compare - assert 11 < 5
    FAILED failure_demo.py::TestMoreErrors::test_try_finally - assert 1 == 0
    FAILED failure_demo.py::TestCustomAssertMsg::test_single_line - Assertion...
    FAILED failure_demo.py::TestCustomAssertMsg::test_multiline - AssertionEr...
    FAILED failure_demo.py::TestCustomAssertMsg::test_custom_repr - Assertion...
    ============================ 44 failed in 0.12s ============================
@@ -65,6 +65,8 @@ Let's run this without supplying our new option:

    test_sample.py:6: AssertionError
    --------------------------- Captured stdout call ---------------------------
    first
    ========================= short test summary info ==========================
    FAILED test_sample.py::test_answer - assert 0
    1 failed in 0.12s

And now with supplying a command line option:
@@ -89,6 +91,8 @@ And now with supplying a command line option:

    test_sample.py:6: AssertionError
    --------------------------- Captured stdout call ---------------------------
    second
    ========================= short test summary info ==========================
    FAILED test_sample.py::test_answer - assert 0
    1 failed in 0.12s
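The option plumbing behind ``first``/``second`` looks roughly like this (a sketch; the option name and defaults are reconstructed, not quoted from the diff):

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
        )


    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")


    # content of test_sample.py (sketch)
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
        assert 0  # to see what was printed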
You can see that the command line option arrived in our test. This
@@ -261,6 +265,8 @@ Let's run our little function:

    E Failed: not configured: 42

    test_checkconfig.py:11: Failed
    ========================= short test summary info ==========================
    FAILED test_checkconfig.py::test_something - Failed: not configured: 42
    1 failed in 0.12s
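The helper whose frame is hidden from the traceback above is, roughly:

.. code-block:: python

    # content of test_checkconfig.py (sketch)
    import pytest


    def checkconfig(x):
        __tracebackhide__ = True  # hide this helper frame from failure tracebacks
        if not hasattr(x, "config"):
            pytest.fail("not configured: {}".format(x))


    def test_something():
        checkconfig(42)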
If you only want to hide certain exceptions, you can set ``__tracebackhide__``
@@ -443,7 +449,7 @@ Now we can profile which test functions execute the slowest:

    ========================= slowest 3 test durations =========================
    0.30s call test_some_are_slow.py::test_funcslow2
    0.20s call test_some_are_slow.py::test_funcslow1
    0.11s call test_some_are_slow.py::test_funcfast
    0.10s call test_some_are_slow.py::test_funcfast
    ============================ 3 passed in 0.12s =============================

incremental testing - test steps
@@ -461,6 +467,9 @@ an ``incremental`` marker which is to be used on classes:

    # content of conftest.py

    from typing import Dict, Tuple

    import pytest

    # store history of failures per test class name and per index in parametrize (if parametrize used)
    _test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}
@@ -669,6 +678,11 @@ We can run this:

    E assert 0

    a/test_db2.py:2: AssertionError
    ========================= short test summary info ==========================
    FAILED test_step.py::TestUserHandling::test_modification - assert 0
    FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
    FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
    ERROR b/test_error.py::test_root
    ============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============

The two test modules in the ``a`` directory see the same ``db`` fixture instance
@@ -758,6 +772,9 @@ and run them:

    E assert 0

    test_module.py:6: AssertionError
    ========================= short test summary info ==========================
    FAILED test_module.py::test_fail1 - assert 0
    FAILED test_module.py::test_fail2 - assert 0
    ============================ 2 failed in 0.12s =============================
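The "failures" file is written from a ``pytest_runtest_makereport`` hook wrapper, roughly (a sketch of the post-processing recipe):

.. code-block:: python

    # content of conftest.py (sketch)
    import os.path

    import pytest


    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        # execute all other hooks to obtain the report object
        outcome = yield
        rep = outcome.get_result()

        if rep.when == "call" and rep.failed:
            mode = "a" if os.path.exists("failures") else "w"
            with open("failures", mode) as f:
                f.write(rep.nodeid + "\n")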
you will have a "failures" file which contains the failing test ids:
@@ -873,6 +890,10 @@ and run it:

    E assert 0

    test_module.py:19: AssertionError
    ========================= short test summary info ==========================
    FAILED test_module.py::test_call_fails - assert 0
    FAILED test_module.py::test_fail2 - assert 0
    ERROR test_module.py::test_setup_fails - assert 0
    ======================== 2 failed, 1 error in 0.12s ========================

You'll see that the fixture finalizers could use the precise reporting
@@ -170,6 +170,8 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:

    E assert 0

    test_smtpsimple.py:14: AssertionError
    ========================= short test summary info ==========================
    FAILED test_smtpsimple.py::test_ehlo - assert 0
    ============================ 1 failed in 0.12s =============================
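The fixture and test behind this traceback are, roughly:

.. code-block:: python

    # content of test_smtpsimple.py (sketch)
    import smtplib

    import pytest


    @pytest.fixture
    def smtp_connection():
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)


    def test_ehlo(smtp_connection):
        response, msg = smtp_connection.ehlo()
        assert response == 250
        assert 0  # for demo purposes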
In the failure traceback we see that the test function was called with a
@@ -332,6 +334,9 @@ inspect what is going on and can now run the tests:

    E assert 0

    test_module.py:13: AssertionError
    ========================= short test summary info ==========================
    FAILED test_module.py::test_ehlo - assert 0
    FAILED test_module.py::test_noop - assert 0
    ============================ 2 failed in 0.12s =============================

You see the two ``assert 0`` failing and more importantly you can also see
@@ -465,6 +470,9 @@ Let's execute it:

    $ pytest -s -q --tb=no
    FFteardown smtp

    ========================= short test summary info ==========================
    FAILED test_module.py::test_ehlo - assert 0
    FAILED test_module.py::test_noop - assert 0
    2 failed in 0.12s
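The ``teardown smtp`` print comes from code placed after the ``yield`` in a module-scoped fixture, e.g. (a sketch):

.. code-block:: python

    import smtplib

    import pytest


    @pytest.fixture(scope="module")
    def smtp_connection():
        smtp_connection = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
        yield smtp_connection  # provide the fixture value
        print("teardown smtp")
        smtp_connection.close()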
We see that the ``smtp_connection`` instance is finalized after the two
@@ -619,6 +627,9 @@ again, nothing much has changed:

    $ pytest -s -q --tb=no
    FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)

    ========================= short test summary info ==========================
    FAILED test_module.py::test_ehlo - assert 0
    FAILED test_module.py::test_noop - assert 0
    2 failed in 0.12s

Let's quickly create another test module that actually sets the
@@ -648,6 +659,8 @@ Running it:

    E assert 0
    ------------------------- Captured stdout teardown -------------------------
    finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org)
    ========================= short test summary info ==========================
    FAILED test_anothersmtp.py::test_showhelo - AssertionError: (250, b'mail....

voila! The ``smtp_connection`` fixture function picked up our mail server name
from the module namespace.
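A sketch of that module (assuming the fixture reads the server name via ``getattr(request.module, "smtpserver", ...)``):

.. code-block:: python

    # content of test_anothersmtp.py (sketch)
    smtpserver = "mail.python.org"  # will be read by the smtp_connection fixture


    def test_showhelo(smtp_connection):
        assert 0, smtp_connection.helo()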
@@ -796,6 +809,11 @@ So let's just do another run:

    test_module.py:13: AssertionError
    ------------------------- Captured stdout teardown -------------------------
    finalizing <smtplib.SMTP object at 0xdeadbeef>
    ========================= short test summary info ==========================
    FAILED test_module.py::test_ehlo[smtp.gmail.com] - assert 0
    FAILED test_module.py::test_noop[smtp.gmail.com] - assert 0
    FAILED test_module.py::test_ehlo[mail.python.org] - AssertionError: asser...
    FAILED test_module.py::test_noop[mail.python.org] - assert 0
    4 failed in 0.12s
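Parametrizing the fixture itself produces the four test ids above, e.g. (a sketch):

.. code-block:: python

    import smtplib

    import pytest


    @pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
    def smtp_connection(request):
        # each dependent test runs once per param value
        smtp_connection = smtplib.SMTP(request.param, 587, timeout=5)
        yield smtp_connection
        print("finalizing {}".format(smtp_connection))
        smtp_connection.close()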
We see that our two test functions each ran twice, against the different
@@ -28,7 +28,7 @@ Install ``pytest``

.. code-block:: bash

    $ pytest --version
    This is pytest version 5.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest/__init__.py
    This is pytest version 5.x.y, imported from $PYTHON_PREFIX/lib/python3.7/site-packages/pytest/__init__.py

.. _`simpletest`:
@@ -69,6 +69,8 @@ That’s it. You can now execute the test function:

    E + where 4 = func(3)

    test_sample.py:6: AssertionError
    ========================= short test summary info ==========================
    FAILED test_sample.py::test_answer - assert 4 == 5
    ============================ 1 failed in 0.12s =============================

This test returns a failure report because ``func(3)`` does not return ``5``.
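For reference, the canonical first example behind this output:

.. code-block:: python

    # content of test_sample.py
    def func(x):
        return x + 1


    def test_answer():
        assert func(3) == 5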
@@ -145,6 +147,8 @@ Once you develop multiple tests, you may want to group them into a class. pytest

    E + where False = hasattr('hello', 'check')

    test_class.py:8: AssertionError
    ========================= short test summary info ==========================
    FAILED test_class.py::TestClass::test_two - AssertionError: assert False
    1 failed, 1 passed in 0.12s

The first test passed and the second failed. You can easily see the intermediate
values in the assertion to help you understand the reason for the failure.
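The class grouping example behind this run, roughly:

.. code-block:: python

    # content of test_class.py (sketch)
    class TestClass:
        def test_one(self):
            x = "this"
            assert "h" in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, "check")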
@@ -180,6 +184,8 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look

    test_tmpdir.py:3: AssertionError
    --------------------------- Captured stdout call ---------------------------
    PYTEST_TMPDIR/test_needsfiles0
    ========================= short test summary info ==========================
    FAILED test_tmpdir.py::test_needsfiles - assert 0
    1 failed in 0.12s

More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.
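The minimal test producing ``PYTEST_TMPDIR/test_needsfiles0`` above:

.. code-block:: python

    # content of test_tmpdir.py (sketch)
    def test_needsfiles(tmpdir):
        print(tmpdir)
        assert 0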
@@ -44,6 +44,8 @@ To execute it:

    E + where 4 = inc(3)

    test_sample.py:6: AssertionError
    ========================= short test summary info ==========================
    FAILED test_sample.py::test_answer - assert 4 == 5
    ============================ 1 failed in 0.12s =============================

Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
@@ -75,6 +75,8 @@ them in turn:

    E + where 54 = eval('6*9')

    test_expectation.py:6: AssertionError
    ========================= short test summary info ==========================
    FAILED test_expectation.py::test_eval[6*9-42] - AssertionError: assert 54...
    ======================= 1 failed, 2 passed in 0.12s ========================
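The parametrized module behind the ``6*9-42`` id, roughly:

.. code-block:: python

    # content of test_expectation.py (sketch)
    import pytest


    @pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected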
.. note::
@@ -225,6 +227,8 @@ Let's also run with a stringinput that will lead to a failing test:

    E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha

    test_strings.py:4: AssertionError
    ========================= short test summary info ==========================
    FAILED test_strings.py::test_valid_string[!] - AssertionError: assert False
    1 failed in 0.12s

As expected our test function fails.
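A sketch of the ``stringinput`` wiring (option name reconstructed from the output above):

.. code-block:: python

    # content of conftest.py (sketch)
    def pytest_addoption(parser):
        parser.addoption(
            "--stringinput",
            action="append",
            default=[],
            help="list of stringinputs to pass to test functions",
        )


    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput", metafunc.config.getoption("stringinput"))


    # content of test_strings.py (sketch)
    def test_valid_string(stringinput):
        assert stringinput.isalpha()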
@@ -64,6 +64,8 @@ Running this would result in a passed test except for the last

    E assert 0

    test_tmp_path.py:13: AssertionError
    ========================= short test summary info ==========================
    FAILED test_tmp_path.py::test_create_file - assert 0
    ============================ 1 failed in 0.12s =============================
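The ``tmp_path`` test behind this output, roughly:

.. code-block:: python

    # content of test_tmp_path.py (sketch)
    CONTENT = "content"


    def test_create_file(tmp_path):
        d = tmp_path / "sub"
        d.mkdir()
        p = d / "hello.txt"
        p.write_text(CONTENT)
        assert p.read_text() == CONTENT
        assert len(list(tmp_path.iterdir())) == 1
        assert 0  # fail for demo purposes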
.. _`tmp_path_factory example`:
@@ -133,6 +135,8 @@ Running this would result in a passed test except for the last

    E assert 0

    test_tmpdir.py:9: AssertionError
    ========================= short test summary info ==========================
    FAILED test_tmpdir.py::test_create_file - assert 0
    ============================ 1 failed in 0.12s =============================

.. _`tmpdir factory example`:
@@ -166,6 +166,9 @@ the ``self.db`` values in the traceback:

    E assert 0

    test_unittest_db.py:13: AssertionError
    ========================= short test summary info ==========================
    FAILED test_unittest_db.py::MyTest::test_method1 - AssertionError: <conft...
    FAILED test_unittest_db.py::MyTest::test_method2 - AssertionError: <conft...
    ============================ 2 failed in 0.12s =============================
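A sketch of mixing a class-scoped fixture into a ``unittest.TestCase`` as described (names reconstructed from the ``<conft...`` output):

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest


    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB:
            pass

        # set a class attribute on the invoking test class
        request.cls.db = DummyDB()


    # content of test_unittest_db.py (sketch)
    import unittest

    import pytest


    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")
            assert 0, self.db  # fail for demo purposes

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes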
This default pytest traceback shows that the two test methods
@@ -821,6 +821,9 @@ hook was invoked:

    E assert 0

    test_example.py:14: AssertionError
    ========================= short test summary info ==========================
    FAILED test_example.py::test_fail - assert 0
    ERROR test_example.py::test_error - assert 0

.. note::
@@ -64,6 +64,8 @@ them into errors:

    E UserWarning: api v1, should use functions from v2

    test_show_warnings.py:5: UserWarning
    ========================= short test summary info ==========================
    FAILED test_show_warnings.py::test_one - UserWarning: api v1, should use ...
    1 failed in 0.12s

The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option.
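In ini form that looks roughly like this (a sketch; the filter entries are illustrative):

.. code-block:: ini

    # content of pytest.ini (sketch)
    [pytest]
    filterwarnings =
        error
        ignore::UserWarning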