Preparing release version 5.1.2

Bruno Oliveira 2019-08-30 12:43:47 -03:00
parent f9cc704b1a
commit e56544cb58
27 changed files with 127 additions and 86 deletions


@@ -18,6 +18,32 @@ with advance notice in the **Deprecations** section of releases.
.. towncrier release notes start

pytest 5.1.2 (2019-08-30)
=========================

Bug Fixes
---------

- `#2270 <https://github.com/pytest-dev/pytest/issues/2270>`_: Fixed ``self`` reference in function-scoped fixtures defined in plugin classes: previously ``self``
  would be a reference to a *test* class, not the *plugin* class.

- `#570 <https://github.com/pytest-dev/pytest/issues/570>`_: Fixed long-standing issue where fixture scope was not respected when indirect fixtures were used during
  parametrization.

- `#5782 <https://github.com/pytest-dev/pytest/issues/5782>`_: Fix decoding error when printing an error response from ``--pastebin``.

- `#5786 <https://github.com/pytest-dev/pytest/issues/5786>`_: Chained exceptions in test and collection reports are now correctly serialized, allowing plugins like
  ``pytest-xdist`` to display them properly.

- `#5792 <https://github.com/pytest-dev/pytest/issues/5792>`_: Windows: Fix error that occurs in certain circumstances when loading
  ``conftest.py`` from a working directory that has casing other than the one stored
  in the filesystem (e.g., ``c:\test`` instead of ``C:\test``).

pytest 5.1.1 (2019-08-20)
=========================


@@ -1,2 +0,0 @@
Fixed ``self`` reference in function-scoped fixtures defined plugin classes: previously ``self``
would be a reference to a *test* class, not the *plugin* class.
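
For context, a minimal sketch, with hypothetical names, of the plugin-class fixture pattern that this ``#2270`` entry is about; after the fix, ``self`` inside such a fixture refers to the plugin instance rather than the test class that happens to use the fixture:

.. code-block:: python

    # hypothetical sketch of a plugin class defining a function-scoped fixture
    import pytest


    class MyPlugin:
        def __init__(self):
            self.value = 42

        @pytest.fixture
        def my_fixture(self):
            # with #2270 fixed, ``self`` is this MyPlugin instance,
            # not the test class requesting the fixture
            return self.value


    # registered, for example, from a conftest.py
    def pytest_configure(config):
        config.pluginmanager.register(MyPlugin())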


@@ -1,2 +0,0 @@
Fixed long standing issue where fixture scope was not respected when indirect fixtures were used during
parametrization.
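
The fragment above became the ``#570`` changelog entry; a hedged sketch of the indirect-parametrization shape it concerns, where a scoped fixture is now set up once per distinct parameter instead of once per test (names are illustrative):

.. code-block:: python

    # hypothetical sketch of indirect parametrization with a scoped fixture
    import pytest


    @pytest.fixture(scope="module")
    def resource(request):
        # with #570 fixed, this runs once per distinct request.param per module
        return {"name": request.param}


    @pytest.mark.parametrize("resource", ["a", "b"], indirect=True)
    def test_first(resource):
        assert resource["name"] in ("a", "b")


    @pytest.mark.parametrize("resource", ["a", "b"], indirect=True)
    def test_second(resource):
        assert resource["name"] in ("a", "b")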


@@ -1 +0,0 @@
Fix decoding error when printing an error response from ``--pastebin``.


@@ -1,2 +0,0 @@
Chained exceptions in test and collection reports are now correctly serialized, allowing plugins like
``pytest-xdist`` to display them properly.
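
This fragment corresponds to ``#5786``; a small sketch of the kind of chained exception that test and collection reports can now serialize for plugins such as ``pytest-xdist``:

.. code-block:: python

    # hypothetical sketch of a test failing with a chained exception
    def test_chained():
        try:
            raise ValueError("original cause")
        except ValueError as exc:
            # the __cause__ chain is now preserved when the report is serialized
            raise RuntimeError("wrapping error") from exc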


@@ -1,3 +0,0 @@
Windows: Fix error that occurs in certain circumstances when loading
``conftest.py`` from a working directory that has casing other than the one stored
in the filesystem (e.g., ``c:\test`` instead of ``C:\test``).


@@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2
release-5.1.2
release-5.1.1
release-5.1.0
release-5.0.1


@@ -0,0 +1,23 @@
pytest-5.1.2
=======================================

pytest 5.1.2 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Andrzej Klajnert
* Anthony Sottile
* Bruno Oliveira
* Christian Neumüller
* Robert Holt
* linchiwei123

Happy testing,
The pytest Development Team


@@ -47,7 +47,7 @@ you will see the return value of the function call:
E + where 3 = f()
test_assert1.py:6: AssertionError
-============================ 1 failed in 0.02s =============================
+============================ 1 failed in 0.12s =============================
``pytest`` has support for showing the values of the most common subexpressions
including calls, attributes, comparisons, and binary and unary
@@ -208,7 +208,7 @@ if you run this module:
E Use -v to get the full diff
test_assert2.py:6: AssertionError
-============================ 1 failed in 0.02s =============================
+============================ 1 failed in 0.12s =============================
Special comparisons are done for a number of cases:
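
The excerpt above comes from the assertion-introspection docs; a minimal sketch, assuming the ``test_assert1.py`` layout implied by the traceback, of the kind of test whose subexpression values pytest reports (the ``where 3 = f()`` line):

.. code-block:: python

    # content of test_assert1.py -- hypothetical sketch matching the excerpt
    def f():
        return 3


    def test_function():
        # pytest rewrites this assert and reports "where 3 = f()" on failure
        assert f() == 4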


@@ -75,7 +75,7 @@ If you run this for the first time you will see two failures:
E Failed: bad luck
test_50.py:7: Failed
-2 failed, 48 passed in 0.08s
+2 failed, 48 passed in 0.07s
If you then run it with ``--lf``:
@@ -114,7 +114,7 @@ If you then run it with ``--lf``:
E Failed: bad luck
test_50.py:7: Failed
-===================== 2 failed, 48 deselected in 0.02s =====================
+===================== 2 failed, 48 deselected in 0.12s =====================
You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").
@@ -158,7 +158,7 @@ of ``FF`` and dots):
E Failed: bad luck
test_50.py:7: Failed
-======================= 2 failed, 48 passed in 0.07s =======================
+======================= 2 failed, 48 passed in 0.12s =======================
.. _`config.cache`:
@@ -283,7 +283,7 @@ You can always peek at the content of the cache using the
example/value contains:
42
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
``--cache-show`` takes an optional argument to specify a glob pattern for
filtering:
@@ -300,7 +300,7 @@ filtering:
example/value contains:
42
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
Clearing Cache content
----------------------
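
The ``--cache-show`` output above lists ``example/value contains: 42``; a minimal sketch, with illustrative file and key names, of writing and reading such a value through ``request.config.cache``:

.. code-block:: python

    # content of test_caching.py -- hypothetical sketch
    import pytest


    @pytest.fixture
    def mydata(request):
        # read a previously cached value, computing and storing it on a miss
        val = request.config.cache.get("example/value", None)
        if val is None:
            val = 42  # an expensive computation would go here
            request.config.cache.set("example/value", val)
        return val


    def test_function(mydata):
        assert mydata == 42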


@@ -91,7 +91,7 @@ of the failing function and hide the other one:
test_module.py:12: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
-======================= 1 failed, 1 passed in 0.02s ========================
+======================= 1 failed, 1 passed in 0.12s ========================
Accessing captured output from a test function
---------------------------------------------------
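
The heading above introduces the ``capsys`` fixture; a short sketch, with illustrative names, of how captured output is typically read back inside a test:

.. code-block:: python

    # hypothetical sketch of accessing captured stdout/stderr via capsys
    def test_output(capsys):
        print("hello")
        captured = capsys.readouterr()
        assert captured.out == "hello\n"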


@@ -36,7 +36,7 @@ then you can just invoke ``pytest`` directly:
test_example.txt . [100%]
-============================ 1 passed in 0.01s =============================
+============================ 1 passed in 0.12s =============================
By default, pytest will collect ``test*.txt`` files looking for doctest directives, but you
can pass additional globs using the ``--doctest-glob`` option (multi-allowed).
@@ -66,7 +66,7 @@ and functions, including from test modules:
mymodule.py . [ 50%]
test_example.txt . [100%]
-============================ 2 passed in 0.01s =============================
+============================ 2 passed in 0.12s =============================
You can make these changes permanent in your project by
putting them into a pytest.ini file like this:
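
The ``mymodule.py`` item in the run above is a module doctest; a minimal sketch, with a hypothetical function, of the kind of docstring that ``pytest --doctest-modules`` (or ``addopts = --doctest-modules`` in ``pytest.ini``) would collect:

.. code-block:: python

    # content of mymodule.py -- hypothetical sketch
    def something():
        """a docstring containing a doctest

        >>> something()
        42
        """
        return 42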


@@ -52,7 +52,7 @@ You can then restrict a test run to only run tests marked with ``webtest``:
test_server.py::test_send_http PASSED [100%]
-===================== 1 passed, 3 deselected in 0.01s ======================
+===================== 1 passed, 3 deselected in 0.12s ======================
Or the inverse, running all tests except the webtest ones:
@@ -69,7 +69,7 @@ Or the inverse, running all tests except the webtest ones:
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
-===================== 3 passed, 1 deselected in 0.01s ======================
+===================== 3 passed, 1 deselected in 0.12s ======================
Selecting tests based on their node ID
--------------------------------------
@@ -89,7 +89,7 @@ tests based on their module, class, method, or function name:
test_server.py::TestClass::test_method PASSED [100%]
-============================ 1 passed in 0.01s =============================
+============================ 1 passed in 0.12s =============================
You can also select on the class:
@@ -104,7 +104,7 @@ You can also select on the class:
test_server.py::TestClass::test_method PASSED [100%]
-============================ 1 passed in 0.01s =============================
+============================ 1 passed in 0.12s =============================
Or select multiple nodes:
@@ -120,7 +120,7 @@ Or select multiple nodes:
test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]
-============================ 2 passed in 0.01s =============================
+============================ 2 passed in 0.12s =============================
.. _node-id:
@@ -159,7 +159,7 @@ select tests based on their names:
test_server.py::test_send_http PASSED [100%]
-===================== 1 passed, 3 deselected in 0.01s ======================
+===================== 1 passed, 3 deselected in 0.12s ======================
And you can also run all tests except the ones that match the keyword:
@@ -176,7 +176,7 @@ And you can also run all tests except the ones that match the keyword:
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
-===================== 3 passed, 1 deselected in 0.01s ======================
+===================== 3 passed, 1 deselected in 0.12s ======================
Or to select "http" and "quick" tests:
@@ -192,7 +192,7 @@ Or to select "http" and "quick" tests:
test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]
-===================== 2 passed, 2 deselected in 0.01s ======================
+===================== 2 passed, 2 deselected in 0.12s ======================
.. note::
@@ -413,7 +413,7 @@ the test needs:
test_someenv.py s [100%]
-============================ 1 skipped in 0.00s ============================
+============================ 1 skipped in 0.12s ============================
and here is one that specifies exactly the environment needed:
@@ -428,7 +428,7 @@ and here is one that specifies exactly the environment needed:
test_someenv.py . [100%]
-============================ 1 passed in 0.01s =============================
+============================ 1 passed in 0.12s =============================
The ``--markers`` option always gives you a list of available markers:
@@ -499,7 +499,7 @@ The output is as follows:
$ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
.
-1 passed in 0.00s
+1 passed in 0.01s
We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
@@ -551,7 +551,7 @@ Let's run this without capturing output and see what we get:
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
.
-1 passed in 0.01s
+1 passed in 0.02s
marking platform specific tests with pytest
--------------------------------------------------------------
@@ -623,7 +623,7 @@ then you will see two tests skipped and two executed tests as expected:
========================= short test summary info ==========================
SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
-======================= 2 passed, 2 skipped in 0.01s =======================
+======================= 2 passed, 2 skipped in 0.12s =======================
Note that if you specify a platform via the marker-command line option like this:
@@ -638,7 +638,7 @@ Note that if you specify a platform via the marker-command line option like this
test_plat.py . [100%]
-===================== 1 passed, 3 deselected in 0.01s ======================
+===================== 1 passed, 3 deselected in 0.12s ======================
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
@@ -711,7 +711,7 @@ We can now use the ``-m option`` to select one set:
test_module.py:8: in test_interface_complex
assert 0
E assert 0
-===================== 2 failed, 2 deselected in 0.02s ======================
+===================== 2 failed, 2 deselected in 0.12s ======================
or to select both "event" and "interface" tests:
@@ -739,4 +739,4 @@ or to select both "event" and "interface" tests:
test_module.py:12: in test_event_simple
assert 0
E assert 0
-===================== 3 failed, 1 deselected in 0.03s ======================
+===================== 3 failed, 1 deselected in 0.12s ======================
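
A compact, hypothetical sketch of the marking-and-selection workflow the ``test_server.py`` excerpts above exercise (test bodies are illustrative placeholders):

.. code-block:: python

    # content of test_server.py -- hypothetical sketch
    import pytest


    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app


    def test_something_quick():
        pass


    def test_another():
        pass


    class TestClass:
        def test_method(self):
            pass

With this layout, ``pytest -v -m webtest`` runs only ``test_send_http``, and ``pytest -v -m "not webtest"`` the remaining three, matching the deselected counts shown above; ``-k "http or quick"`` selects by name instead of by marker.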


@@ -41,7 +41,7 @@ now execute the test specification:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
-======================= 1 failed, 1 passed in 0.02s ========================
+======================= 1 failed, 1 passed in 0.12s ========================
.. regendoc:wipe
@@ -77,7 +77,7 @@ consulted when reporting in ``verbose`` mode:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
-======================= 1 failed, 1 passed in 0.02s ========================
+======================= 1 failed, 1 passed in 0.12s ========================
.. regendoc:wipe
@@ -97,4 +97,4 @@ interesting to just look at the collection tree:
<YamlItem hello>
<YamlItem ok>
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.12s ===========================


@@ -172,7 +172,7 @@ objects, they are still using the default pytest representation:
<Function test_timedistance_v3[forward]>
<Function test_timedistance_v3[backward]>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.12s ===========================
In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
together with the actual data, instead of listing them separately.
@@ -229,7 +229,7 @@ this is a fully self-contained example which you can run with:
test_scenarios.py .... [100%]
-============================ 4 passed in 0.01s =============================
+============================ 4 passed in 0.12s =============================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
@@ -248,7 +248,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.12s ===========================
Note that we told ``metafunc.parametrize()`` that your scenario values
should be considered class-scoped. With pytest-2.3 this leads to a
@@ -323,7 +323,7 @@ Let's first see how it looks like at collection time:
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
And then when we run the test:
@@ -394,7 +394,7 @@ The result of this test will be successful:
<Module test_indirect_list.py>
<Function test_indirect[a-b]>
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
.. regendoc:wipe
@@ -475,10 +475,11 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest
. $ pytest -rs -q multipython.py
-ssssssssssss......sss...... [100%]
+ssssssssssss...ssssssssssss [100%]
========================= short test summary info ==========================
-SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
-12 passed, 15 skipped in 0.62s
+SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
+SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.7' not found
+3 passed, 24 skipped in 0.24s
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -547,7 +548,7 @@ If you run this with reporting for skips enabled:
========================= short test summary info ==========================
SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:13: could not import 'opt2': No module named 'opt2'
-======================= 1 passed, 1 skipped in 0.01s =======================
+======================= 1 passed, 1 skipped in 0.12s =======================
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
@@ -609,7 +610,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
-=============== 2 passed, 15 deselected, 1 xfailed in 0.08s ================
+=============== 2 passed, 15 deselected, 1 xfailed in 0.12s ================
As the result:
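
The excerpt mentions using ``pytest.param`` to attach IDs and marks to individual parameter sets; a small sketch of that pattern, with parameter values chosen to mirror the ``basic_6*9`` XFAIL shown above:

.. code-block:: python

    # hypothetical sketch of pytest.param with explicit ids and marks
    import pytest


    @pytest.mark.parametrize(
        "test_input,expected",
        [
            pytest.param("3+5", 8, id="basic_3+5"),
            pytest.param("2+4", 6, id="basic_2+4"),
            pytest.param("6*9", 42, marks=pytest.mark.xfail, id="basic_6*9"),
        ],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected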


@@ -158,7 +158,7 @@ The test collection would look like this:
<Function simple_check>
<Function complex_check>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.12s ===========================
You can check for multiple glob patterns by adding a space between the patterns:
@@ -221,7 +221,7 @@ You can always peek at the collection tree without running tests like this:
<Function test_method>
<Function test_anothermethod>
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
.. _customizing-test-collection:
@@ -297,7 +297,7 @@ file will be left out:
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.12s ===========================
It's also possible to ignore files based on Unix shell-style wildcards by adding
patterns to ``collect_ignore_glob``.
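
A hedged sketch of the ``conftest.py`` knob mentioned above (the file patterns are illustrative):

.. code-block:: python

    # content of conftest.py -- hypothetical sketch
    # skip one specific file, plus anything matching a shell-style wildcard
    collect_ignore = ["setup.py"]
    collect_ignore_glob = ["*_py2.py"]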


@@ -650,4 +650,4 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:282: AssertionError
-============================ 44 failed in 0.26s ============================
+============================ 44 failed in 0.12s ============================


@@ -132,7 +132,7 @@ directory with the above conftest.py:
rootdir: $REGENDOC_TMPDIR
collected 0 items
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
.. _`excontrolskip`:
@@ -201,7 +201,7 @@ and when running it will see a skipped "slow" test:
========================= short test summary info ==========================
SKIPPED [1] test_module.py:8: need --runslow option to run
-======================= 1 passed, 1 skipped in 0.01s =======================
+======================= 1 passed, 1 skipped in 0.12s =======================
Or run it including the ``slow`` marked test:
@@ -216,7 +216,7 @@ Or run it including the ``slow`` marked test:
test_module.py .. [100%]
-============================ 2 passed in 0.01s =============================
+============================ 2 passed in 0.12s =============================
Writing well integrated assertion helpers
--------------------------------------------------
@@ -358,7 +358,7 @@ which will add the string to the test header accordingly:
rootdir: $REGENDOC_TMPDIR
collected 0 items
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
.. regendoc:wipe
@@ -388,7 +388,7 @@ which will add info only when run with "--v":
rootdir: $REGENDOC_TMPDIR
collecting ... collected 0 items
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
and nothing when run plainly:
@@ -401,7 +401,7 @@ and nothing when run plainly:
rootdir: $REGENDOC_TMPDIR
collected 0 items
-========================== no tests ran in 0.00s ===========================
+========================== no tests ran in 0.12s ===========================
profiling test duration
--------------------------
@@ -447,7 +447,7 @@ Now we can profile which test functions execute the slowest:
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
-============================ 3 passed in 0.61s =============================
+============================ 3 passed in 0.12s =============================
incremental testing - test steps
---------------------------------------------------
@@ -531,7 +531,7 @@ If we run this:
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion
reason: previous test failed (test_modification)
-================== 1 failed, 2 passed, 1 xfailed in 0.03s ==================
+================== 1 failed, 2 passed, 1 xfailed in 0.12s ==================
We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
@@ -644,7 +644,7 @@ We can run this:
E assert 0
a/test_db2.py:2: AssertionError
-============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.05s ==============
+============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============
The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -733,7 +733,7 @@ and run them:
E assert 0
test_module.py:6: AssertionError
-============================ 2 failed in 0.02s =============================
+============================ 2 failed in 0.12s =============================
you will have a "failures" file which contains the failing test ids:
@@ -848,7 +848,7 @@ and run it:
E assert 0
test_module.py:19: AssertionError
-======================== 2 failed, 1 error in 0.02s ========================
+======================== 2 failed, 1 error in 0.12s ========================
You'll see that the fixture finalizers could use the precise reporting
information.
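
The ``--runslow`` run above refers to the usual option-plus-skip-marker pattern; a hedged sketch of a ``conftest.py`` implementing it, matching the ``need --runslow option to run`` reason shown in the excerpt:

.. code-block:: python

    # content of conftest.py -- hypothetical sketch of the --runslow pattern
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--runslow", action="store_true", default=False, help="run slow tests"
        )


    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            # --runslow given on the command line: do not skip slow tests
            return
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

A test decorated with ``@pytest.mark.slow`` is then skipped unless ``pytest --runslow`` is used, which is why the second run above shows both tests passing.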


@@ -96,7 +96,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
E assert 0
test_smtpsimple.py:14: AssertionError
-============================ 1 failed in 0.18s =============================
+============================ 1 failed in 0.12s =============================
In the failure traceback we see that the test function was called with a
``smtp_connection`` argument, the ``smtplib.SMTP()`` instance created by the fixture
@@ -258,7 +258,7 @@ inspect what is going on and can now run the tests:
E assert 0
test_module.py:13: AssertionError
-============================ 2 failed in 0.20s =============================
+============================ 2 failed in 0.12s =============================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp_connection`` object was passed into the
@@ -361,7 +361,7 @@ Let's execute it:
$ pytest -s -q --tb=no
FFteardown smtp
-2 failed in 0.20s
+2 failed in 0.79s
We see that the ``smtp_connection`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
@@ -515,7 +515,7 @@ again, nothing much has changed:
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
-2 failed in 0.21s
+2 failed in 0.77s
Let's quickly create another test module that actually sets the
server URL in its module namespace:
@@ -692,7 +692,7 @@ So let's just do another run:
test_module.py:13: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
-4 failed in 0.89s
+4 failed in 1.69s
We see that our two test functions each ran twice, against the different
``smtp_connection`` instances. Note also, that with the ``mail.python.org``
@@ -771,7 +771,7 @@ Running the above tests results in the following test IDs being used:
<Function test_ehlo[mail.python.org]>
<Function test_noop[mail.python.org]>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.12s ===========================
.. _`fixture-parametrize-marks`:
@@ -812,7 +812,7 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``:
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]
-======================= 2 passed, 1 skipped in 0.01s =======================
+======================= 2 passed, 1 skipped in 0.12s =======================
.. _`interdependent fixtures`:
@@ -861,7 +861,7 @@ Here we declare an ``app`` fixture which receives the previously defined
test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%]
-============================ 2 passed in 0.44s =============================
+============================ 2 passed in 0.12s =============================
Due to the parametrization of ``smtp_connection``, the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@@ -971,7 +971,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
TEARDOWN modarg mod2
-============================ 8 passed in 0.01s =============================
+============================ 8 passed in 0.12s =============================
You can see that the parametrized module-scoped ``modarg`` resource caused an
ordering of test execution that lead to the fewest possible "active" resources.
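
A hedged sketch of the module-scoped, parametrized ``smtp_connection`` fixture these excerpts revolve around (the hosts come from the output above; port, timeout, and finalization details are illustrative):

.. code-block:: python

    # content of conftest.py -- hypothetical sketch
    import smtplib

    import pytest


    @pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
    def smtp_connection(request):
        # one connection per parametrized host, shared by all tests in a module
        connection = smtplib.SMTP(request.param, 587, timeout=5)
        yield connection
        print("finalizing {}".format(connection))
        connection.close()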


@@ -69,7 +69,7 @@ That's it. You can now execute the test function:
E + where 4 = func(3)
test_sample.py:6: AssertionError
-============================ 1 failed in 0.02s =============================
+============================ 1 failed in 0.12s =============================
This test returns a failure report because ``func(3)`` does not return ``5``.
@@ -108,7 +108,7 @@ Execute the test function with “quiet” reporting mode:
$ pytest -q test_sysexit.py
. [100%]
-1 passed in 0.00s
+1 passed in 0.01s
Group multiple tests in a class
--------------------------------------------------------------
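
The ``test_sysexit.py`` run above checks that a function exits; a minimal, hypothetical sketch of such a test:

.. code-block:: python

    # content of test_sysexit.py -- hypothetical sketch
    import pytest


    def f():
        raise SystemExit(1)


    def test_mytest():
        with pytest.raises(SystemExit):
            f()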


@@ -44,7 +44,7 @@ To execute it:
E + where 4 = inc(3)
test_sample.py:6: AssertionError
-============================ 1 failed in 0.02s =============================
+============================ 1 failed in 0.12s =============================
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
See :ref:`Getting Started <getstarted>` for more examples.
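
A small sketch of the kind of test the ``where 4 = inc(3)`` line above comes from, with the function and file name inferred from the traceback:

.. code-block:: python

    # content of test_sample.py -- sketch inferred from the excerpt
    def inc(x):
        return x + 1


    def test_answer():
        # plain assert; pytest's introspection reports "where 4 = inc(3)"
        assert inc(3) == 5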


@@ -75,7 +75,7 @@ them in turn:
E + where 54 = eval('6*9')
test_expectation.py:6: AssertionError
-======================= 1 failed, 2 passed in 0.02s ========================
+======================= 1 failed, 2 passed in 0.12s ========================
.. note::
@@ -128,7 +128,7 @@ Let's run this:
test_expectation.py ..x [100%]
-======================= 2 passed, 1 xfailed in 0.02s =======================
+======================= 2 passed, 1 xfailed in 0.12s =======================
The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test.


@@ -371,7 +371,7 @@ Running it with the report-on-xfail option gives this output:
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7
-============================ 7 xfailed in 0.05s ============================
+============================ 7 xfailed in 0.12s ============================
.. _`skip/xfail with parametrize`:


@@ -64,7 +64,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmp_path.py:13: AssertionError
-============================ 1 failed in 0.02s =============================
+============================ 1 failed in 0.12s =============================
.. _`tmp_path_factory example`:
@@ -133,7 +133,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:9: AssertionError
-============================ 1 failed in 0.02s =============================
+============================ 1 failed in 0.12s =============================
.. _`tmpdir factory example`:
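
A brief, hedged sketch of a ``tmp_path`` test like the ``test_tmp_path.py`` run referenced above (the file contents are illustrative):

.. code-block:: python

    # content of test_tmp_path.py -- hypothetical sketch
    CONTENT = "content"


    def test_create_file(tmp_path):
        # tmp_path is a pathlib.Path pointing at a per-test temporary directory
        d = tmp_path / "sub"
        d.mkdir()
        p = d / "hello.txt"
        p.write_text(CONTENT)
        assert p.read_text() == CONTENT
        assert len(list(tmp_path.iterdir())) == 1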


@@ -166,7 +166,7 @@ the ``self.db`` values in the traceback:
E assert 0
test_unittest_db.py:13: AssertionError
-============================ 2 failed in 0.02s =============================
+============================ 2 failed in 0.12s =============================
This default pytest traceback shows that the two test methods
share the same ``self.db`` instance which was our intention


@@ -247,7 +247,7 @@ Example:
XPASS test_example.py::test_xpass always xfail
ERROR test_example.py::test_error - assert 0
FAILED test_example.py::test_fail - assert 0
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
The ``-r`` options accepts a number of characters after it, with ``a`` used
above meaning "all except passes".
@@ -297,7 +297,7 @@ More than one character can be used, so for example to only see failed and skipp
========================= short test summary info ==========================
FAILED test_example.py::test_fail - assert 0
SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
captured output:
@@ -336,7 +336,7 @@ captured output:
ok
========================= short test summary info ==========================
PASSED test_example.py::test_ok
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
.. _pdb-option:


@@ -41,7 +41,7 @@ Running pytest now produces this output:
warnings.warn(UserWarning("api v1, should use functions from v2"))
-- Docs: https://docs.pytest.org/en/latest/warnings.html
-====================== 1 passed, 1 warnings in 0.00s =======================
+====================== 1 passed, 1 warnings in 0.12s =======================
The ``-W`` flag can be passed to control which warnings will be displayed or even turn
them into errors:
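
The warning in the output above comes from code along these lines; a hedged sketch (module and function names are illustrative) that pytest reports as shown, and that ``pytest -W error::UserWarning`` would turn into a failure:

.. code-block:: python

    # content of test_show_warnings.py -- hypothetical sketch
    import warnings


    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1


    def test_one():
        assert api_v1() == 1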