Merge remote-tracking branch 'upstream/master' into robholt/fixture-class-instance

Bruno Oliveira 2019-08-30 11:21:33 -03:00
commit 955dc6d18a
48 changed files with 624 additions and 418 deletions

View File

@@ -26,7 +26,7 @@ repos:
   hooks:
   - id: flake8
     language_version: python3
-    additional_dependencies: [flake8-typing-imports]
+    additional_dependencies: [flake8-typing-imports==1.3.0]
 - repo: https://github.com/asottile/reorder_python_imports
   rev: v1.4.0
   hooks:

View File

@@ -43,7 +43,8 @@ jobs:
     python: 'pypy3'
   - env: TOXENV=py35-xdist
-    python: '3.5'
+    dist: trusty
+    python: '3.5.0'
 # Coverage for:
 # - pytester's LsofFdLeakChecker

View File

@@ -55,6 +55,7 @@ Charnjit SiNGH (CCSJ)
Chris Lamb
Christian Boelsen
Christian Fetzer
+Christian Neumüller
Christian Theunert
Christian Tismer
Christopher Gilling

View File

@@ -18,6 +18,15 @@ with advance notice in the **Deprecations** section of releases.

.. towncrier release notes start

+pytest 5.1.1 (2019-08-20)
+=========================
+
+Bug Fixes
+---------
+
+- `#5751 <https://github.com/pytest-dev/pytest/issues/5751>`_: Fixed ``TypeError`` when importing pytest on Python 3.5.0 and 3.5.1.
+
pytest 5.1.0 (2019-08-15)
=========================

View File

@@ -0,0 +1 @@
+Fix decoding error when printing an error response from ``--pastebin``.

View File

@@ -0,0 +1,2 @@
+Chained exceptions in test and collection reports are now correctly serialized, allowing plugins like
+``pytest-xdist`` to display them properly.

View File

@@ -0,0 +1,3 @@
+Windows: Fix error that occurs in certain circumstances when loading
+``conftest.py`` from a working directory that has casing other than the one stored
+in the filesystem (e.g., ``c:\test`` instead of ``C:\test``).

View File

@@ -6,6 +6,7 @@ Release announcements
   :maxdepth: 2

+  release-5.1.1
  release-5.1.0
  release-5.0.1
  release-5.0.0

View File

@@ -0,0 +1,24 @@
pytest-5.1.1
=======================================

pytest 5.1.1 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Florian Bruhin
* Hugo van Kemenade
* Ran Benita
* Ronny Pfannschmidt

Happy testing,
The pytest Development Team

View File

@@ -47,7 +47,7 @@ you will see the return value of the function call:
E + where 3 = f()
test_assert1.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
``pytest`` has support for showing the values of the most common subexpressions
including calls, attributes, comparisons, and binary and unary
@@ -208,7 +208,7 @@ if you run this module:
E Use -v to get the full diff
test_assert2.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
Special comparisons are done for a number of cases:
@@ -279,7 +279,7 @@ the conftest file:
E vals: 1 != 2
test_foocompare.py:12: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
.. _assert-details:
.. _`assert introspection`:

View File

@@ -160,7 +160,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
in python < 3.6 this is a pathlib2.Path
-no tests ran in 0.01s
+no tests ran in 0.00s
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:

View File

@@ -75,7 +75,7 @@ If you run this for the first time you will see two failures:
E Failed: bad luck
test_50.py:7: Failed
-2 failed, 48 passed in 0.16s
+2 failed, 48 passed in 0.08s
If you then run it with ``--lf``:
@@ -114,7 +114,7 @@ If you then run it with ``--lf``:
E Failed: bad luck
test_50.py:7: Failed
-===================== 2 failed, 48 deselected in 0.07s =====================
+===================== 2 failed, 48 deselected in 0.02s =====================
You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").
@@ -158,7 +158,7 @@ of ``FF`` and dots):
E Failed: bad luck
test_50.py:7: Failed
-======================= 2 failed, 48 passed in 0.15s =======================
+======================= 2 failed, 48 passed in 0.07s =======================
.. _`config.cache`:
@@ -230,7 +230,7 @@ If you run this command for the first time, you can see the print statement:
test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
-1 failed in 0.05s
+1 failed in 0.02s
If you run it a second time, the value will be retrieved from
the cache and nothing will be printed:
@@ -249,7 +249,7 @@ the cache and nothing will be printed:
E assert 42 == 23
test_caching.py:20: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
See the :ref:`cache-api` for more details.
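For reference, a minimal sketch of the ``config.cache`` pattern this passage describes (a close variant of the doc's own ``test_caching.py`` example; the fixture name and cache key are illustrative):

import pytest

def expensive_computation():
    print("running expensive computation...")
    return 42

@pytest.fixture
def mydata(request):
    # Look up the cached value; compute and store it only on a miss, so a
    # second run retrieves it from the cache and prints nothing.
    val = request.config.cache.get("example/value", None)
    if val is None:
        val = expensive_computation()
        request.config.cache.set("example/value", val)
    return val

def test_function(mydata):
    assert mydata == 42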
@@ -300,7 +300,7 @@ filtering:
example/value contains:
42
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
Clearing Cache content
----------------------

View File

@@ -91,7 +91,7 @@ of the failing function and hide the other one:
test_module.py:12: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
-======================= 1 failed, 1 passed in 0.05s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
Accessing captured output from a test function
---------------------------------------------------

View File

@@ -36,7 +36,7 @@ then you can just invoke ``pytest`` directly:
test_example.txt . [100%]
-============================ 1 passed in 0.02s =============================
+============================ 1 passed in 0.01s =============================
By default, pytest will collect ``test*.txt`` files looking for doctest directives, but you
can pass additional globs using the ``--doctest-glob`` option (multi-allowed).
@@ -66,7 +66,7 @@ and functions, including from test modules:
mymodule.py . [ 50%]
test_example.txt . [100%]
-============================ 2 passed in 0.03s =============================
+============================ 2 passed in 0.01s =============================
You can make these changes permanent in your project by
putting them into a pytest.ini file like this:

View File

@@ -69,7 +69,7 @@ Or the inverse, running all tests except the webtest ones:
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
-===================== 3 passed, 1 deselected in 0.02s ======================
+===================== 3 passed, 1 deselected in 0.01s ======================
Selecting tests based on their node ID
--------------------------------------
@@ -120,7 +120,7 @@ Or select multiple nodes:
test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]
-============================ 2 passed in 0.02s =============================
+============================ 2 passed in 0.01s =============================
.. _node-id:
@@ -176,7 +176,7 @@ And you can also run all tests except the ones that match the keyword:
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
-===================== 3 passed, 1 deselected in 0.02s ======================
+===================== 3 passed, 1 deselected in 0.01s ======================
Or to select "http" and "quick" tests:
@@ -192,7 +192,7 @@ Or to select "http" and "quick" tests:
test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]
-===================== 2 passed, 2 deselected in 0.02s ======================
+===================== 2 passed, 2 deselected in 0.01s ======================
.. note::
@@ -413,7 +413,7 @@ the test needs:
test_someenv.py s [100%]
-============================ 1 skipped in 0.01s ============================
+============================ 1 skipped in 0.00s ============================
and here is one that specifies exactly the environment needed:
@@ -499,7 +499,7 @@ The output is as follows:
$ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
.
-1 passed in 0.01s
+1 passed in 0.00s
We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
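A hedged illustration of that distinction (``my_marker`` is an arbitrary example name, not a registered marker; the behaviour shown is the standard ``MarkDecorator`` semantics rather than anything introduced by this commit):

import pytest

def hello_world(*args, **kwargs):
    pass

# ``with_args`` is needed when the single argument is itself a callable;
# calling the marker with a callable would instead apply it as a decorator.
m1 = pytest.mark.my_marker.with_args(hello_world)
m2 = pytest.mark.my_marker(42)   # a non-callable argument extends the args directly

print(m1.args)   # (<function hello_world at 0x...>,)
print(m2.args)   # (42,)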
@@ -623,7 +623,7 @@ then you will see two tests skipped and two executed tests as expected:
========================= short test summary info ==========================
SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
-======================= 2 passed, 2 skipped in 0.02s =======================
+======================= 2 passed, 2 skipped in 0.01s =======================
Note that if you specify a platform via the marker-command line option like this:
@@ -711,7 +711,7 @@ We can now use the ``-m option`` to select one set:
test_module.py:8: in test_interface_complex
assert 0
E assert 0
-===================== 2 failed, 2 deselected in 0.07s ======================
+===================== 2 failed, 2 deselected in 0.02s ======================
or to select both "event" and "interface" tests:
@@ -739,4 +739,4 @@ or to select both "event" and "interface" tests:
test_module.py:12: in test_event_simple
assert 0
E assert 0
-===================== 3 failed, 1 deselected in 0.07s ======================
+===================== 3 failed, 1 deselected in 0.03s ======================

View File

@@ -41,7 +41,7 @@ now execute the test specification:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
-======================= 1 failed, 1 passed in 0.06s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
.. regendoc:wipe
@@ -77,7 +77,7 @@ consulted when reporting in ``verbose`` mode:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
-======================= 1 failed, 1 passed in 0.07s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
.. regendoc:wipe
@@ -97,4 +97,4 @@ interesting to just look at the collection tree:
<YamlItem hello>
<YamlItem ok>
-========================== no tests ran in 0.05s ===========================
+========================== no tests ran in 0.02s ===========================

View File

@@ -73,7 +73,7 @@ let's run the full monty:
E assert 4 < 4
test_compute.py:4: AssertionError
-1 failed, 4 passed in 0.06s
+1 failed, 4 passed in 0.02s
As expected when running the full range of ``param1`` values
we'll get an error on the last one.
@@ -172,7 +172,7 @@ objects, they are still using the default pytest representation:
<Function test_timedistance_v3[forward]>
<Function test_timedistance_v3[backward]>
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.01s ===========================
In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
together with the actual data, instead of listing them separately.
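For reference, a condensed sketch of the ``pytest.param`` idiom that paragraph refers to (taken from the surrounding doc example; not part of the diff itself):

import pytest
from datetime import datetime, timedelta

# pytest.param bundles the argument values together with an explicit id,
# instead of listing the ids in a separate ``ids=`` argument.
@pytest.mark.parametrize(
    "a, b, expected",
    [
        pytest.param(datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1), id="forward"),
        pytest.param(datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1), id="backward"),
    ],
)
def test_timedistance_v3(a, b, expected):
    assert a - b == expected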
@@ -229,7 +229,7 @@ this is a fully self-contained example which you can run with:
test_scenarios.py .... [100%]
-============================ 4 passed in 0.02s =============================
+============================ 4 passed in 0.01s =============================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
@@ -248,7 +248,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.01s ===========================
Note that we told ``metafunc.parametrize()`` that your scenario values
should be considered class-scoped. With pytest-2.3 this leads to a
@@ -323,7 +323,7 @@ Let's first see how it looks like at collection time:
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
And then when we run the test:
@@ -343,7 +343,7 @@ And then when we run the test:
E Failed: deliberately failing for demo purposes
test_backends.py:8: Failed
-1 failed, 1 passed in 0.05s
+1 failed, 1 passed in 0.02s
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
@@ -394,7 +394,7 @@ The result of this test will be successful:
<Module test_indirect_list.py>
<Function test_indirect[a-b]>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
.. regendoc:wipe
@@ -454,7 +454,7 @@ argument sets to use for each test function. Let's run it:
E assert 1 == 2
test_parametrize.py:21: AssertionError
-1 failed, 2 passed in 0.07s
+1 failed, 2 passed in 0.03s
Indirect parametrization with multiple fixtures
--------------------------------------------------------------
@@ -475,11 +475,10 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest
. $ pytest -rs -q multipython.py
-ssssssssssss...ssssssssssss [100%]
+ssssssssssss......sss...... [100%]
========================= short test summary info ==========================
-SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
-SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.7' not found
-3 passed, 24 skipped in 0.43s
+SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
+12 passed, 15 skipped in 0.62s
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -548,7 +547,7 @@ If you run this with reporting for skips enabled:
========================= short test summary info ==========================
SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:13: could not import 'opt2': No module named 'opt2'
-======================= 1 passed, 1 skipped in 0.02s =======================
+======================= 1 passed, 1 skipped in 0.01s =======================
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
@@ -610,7 +609,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
-=============== 2 passed, 15 deselected, 1 xfailed in 0.23s ================
+=============== 2 passed, 15 deselected, 1 xfailed in 0.08s ================
As the result:

View File

@@ -221,7 +221,7 @@ You can always peek at the collection tree without running tests like this:
<Function test_method>
<Function test_anothermethod>
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
.. _customizing-test-collection:
@@ -297,7 +297,7 @@ file will be left out:
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items
-========================== no tests ran in 0.04s ===========================
+========================== no tests ran in 0.01s ===========================
It's also possible to ignore files based on Unix shell-style wildcards by adding
patterns to ``collect_ignore_glob``.

View File

@@ -650,4 +650,4 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:282: AssertionError
-============================ 44 failed in 0.82s ============================
+============================ 44 failed in 0.26s ============================

View File

@@ -65,7 +65,7 @@ Let's run this without supplying our new option:
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
-1 failed in 0.06s
+1 failed in 0.02s
And now with supplying a command line option:
@@ -89,7 +89,7 @@ And now with supplying a command line option:
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
-1 failed in 0.06s
+1 failed in 0.02s
You can see that the command line option arrived in our test. This
completes the basic pattern. However, one often rather wants to process
@@ -132,7 +132,7 @@ directory with the above conftest.py:
rootdir: $REGENDOC_TMPDIR
collected 0 items
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
.. _`excontrolskip`:
@@ -261,7 +261,7 @@ Let's run our little function:
E Failed: not configured: 42
test_checkconfig.py:11: Failed
-1 failed in 0.05s
+1 failed in 0.02s
If you only want to hide certain exceptions, you can set ``__tracebackhide__``
to a callable which gets the ``ExceptionInfo`` object. You can for example use
@@ -445,9 +445,9 @@ Now we can profile which test functions execute the slowest:
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
-0.25s call test_some_are_slow.py::test_funcslow1
+0.20s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
-============================ 3 passed in 0.68s =============================
+============================ 3 passed in 0.61s =============================
incremental testing - test steps
---------------------------------------------------
@@ -531,7 +531,7 @@ If we run this:
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion
reason: previous test failed (test_modification)
-================== 1 failed, 2 passed, 1 xfailed in 0.07s ==================
+================== 1 failed, 2 passed, 1 xfailed in 0.03s ==================
We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
@@ -644,7 +644,7 @@ We can run this:
E assert 0
a/test_db2.py:2: AssertionError
-============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.10s ==============
+============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.05s ==============
The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -733,7 +733,7 @@ and run them:
E assert 0
test_module.py:6: AssertionError
-============================ 2 failed in 0.07s =============================
+============================ 2 failed in 0.02s =============================
you will have a "failures" file which contains the failing test ids:
@@ -848,7 +848,7 @@ and run it:
E assert 0
test_module.py:19: AssertionError
-======================== 2 failed, 1 error in 0.07s ========================
+======================== 2 failed, 1 error in 0.02s ========================
You'll see that the fixture finalizers could use the precise reporting
information.

View File

@@ -81,4 +81,4 @@ If you run this without output capturing:
.test other
.test_unit1 method called
.
-4 passed in 0.02s
+4 passed in 0.01s

View File

@@ -96,7 +96,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
E assert 0
test_smtpsimple.py:14: AssertionError
-============================ 1 failed in 0.57s =============================
+============================ 1 failed in 0.18s =============================
In the failure traceback we see that the test function was called with a
``smtp_connection`` argument, the ``smtplib.SMTP()`` instance created by the fixture
@@ -258,7 +258,7 @@ inspect what is going on and can now run the tests:
E assert 0
test_module.py:13: AssertionError
-============================ 2 failed in 0.76s =============================
+============================ 2 failed in 0.20s =============================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp_connection`` object was passed into the
@@ -315,15 +315,15 @@ Consider the code below:
.. literalinclude:: example/fixtures/test_fixtures_order.py
-The fixtures requested by ``test_foo`` will be instantiated in the following order:
+The fixtures requested by ``test_order`` will be instantiated in the following order:
1. ``s1``: is the highest-scoped fixture (``session``).
2. ``m1``: is the second highest-scoped fixture (``module``).
3. ``a1``: is a ``function``-scoped ``autouse`` fixture: it will be instantiated before other fixtures
   within the same scope.
4. ``f3``: is a ``function``-scoped fixture, required by ``f1``: it needs to be instantiated at this point
-5. ``f1``: is the first ``function``-scoped fixture in ``test_foo`` parameter list.
+5. ``f1``: is the first ``function``-scoped fixture in ``test_order`` parameter list.
-6. ``f2``: is the last ``function``-scoped fixture in ``test_foo`` parameter list.
+6. ``f2``: is the last ``function``-scoped fixture in ``test_order`` parameter list.
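The included example file is elided from this diff; the following is a hedged reconstruction of ``example/fixtures/test_fixtures_order.py``, consistent with the ordering listed above (fixture and test names match the renamed ``test_order``):

import pytest

order = []

@pytest.fixture(scope="session")
def s1():
    order.append("s1")

@pytest.fixture(scope="module")
def m1():
    order.append("m1")

@pytest.fixture
def f1(f3):
    order.append("f1")

@pytest.fixture
def f3():
    order.append("f3")

@pytest.fixture(autouse=True)
def a1():
    order.append("a1")

@pytest.fixture
def f2():
    order.append("f2")

def test_order(f1, m1, f2, s1):
    # Matches the instantiation order described in items 1-6 above.
    assert order == ["s1", "m1", "a1", "f3", "f1", "f2"]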
.. _`finalization`:
@@ -361,7 +361,7 @@ Let's execute it:
$ pytest -s -q --tb=no
FFteardown smtp
-2 failed in 0.76s
+2 failed in 0.20s
We see that the ``smtp_connection`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
@@ -515,7 +515,7 @@ again, nothing much has changed:
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
-2 failed in 0.76s
+2 failed in 0.21s
Let's quickly create another test module that actually sets the
server URL in its module namespace:
@@ -692,7 +692,7 @@ So let's just do another run:
test_module.py:13: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
-4 failed in 1.77s
+4 failed in 0.89s
We see that our two test functions each ran twice, against the different
``smtp_connection`` instances. Note also, that with the ``mail.python.org``
@@ -771,7 +771,7 @@ Running the above tests results in the following test IDs being used:
<Function test_ehlo[mail.python.org]>
<Function test_noop[mail.python.org]>
-========================== no tests ran in 0.04s ===========================
+========================== no tests ran in 0.01s ===========================
.. _`fixture-parametrize-marks`:
@@ -861,7 +861,7 @@ Here we declare an ``app`` fixture which receives the previously defined
test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%]
-============================ 2 passed in 0.79s =============================
+============================ 2 passed in 0.44s =============================
Due to the parametrization of ``smtp_connection``, the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@@ -971,7 +971,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
TEARDOWN modarg mod2
-============================ 8 passed in 0.02s =============================
+============================ 8 passed in 0.01s =============================
You can see that the parametrized module-scoped ``modarg`` resource caused an
ordering of test execution that lead to the fewest possible "active" resources.
@@ -1043,7 +1043,7 @@ to verify our fixture is activated and the tests pass:
$ pytest -q
.. [100%]
-2 passed in 0.02s
+2 passed in 0.01s
You can specify multiple fixtures like this:
@@ -1151,7 +1151,7 @@ If we run it, we get two passing tests:
$ pytest -q
.. [100%]
-2 passed in 0.02s
+2 passed in 0.01s
Here is how autouse fixtures work in other scopes:

View File

@@ -69,7 +69,7 @@ Thats it. You can now execute the test function:
E + where 4 = func(3)
test_sample.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
This test returns a failure report because ``func(3)`` does not return ``5``.
@@ -108,7 +108,7 @@ Execute the test function with “quiet” reporting mode:
$ pytest -q test_sysexit.py
. [100%]
-1 passed in 0.01s
+1 passed in 0.00s
Group multiple tests in a class
--------------------------------------------------------------
@@ -145,7 +145,7 @@ Once you develop multiple tests, you may want to group them into a class. pytest
E + where False = hasattr('hello', 'check')
test_class.py:8: AssertionError
-1 failed, 1 passed in 0.05s
+1 failed, 1 passed in 0.02s
The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you understand the reason for the failure.
@@ -180,7 +180,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
-1 failed in 0.05s
+1 failed in 0.02s
More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.

View File

@@ -44,7 +44,7 @@ To execute it:
E + where 4 = inc(3)
test_sample.py:6: AssertionError
-============================ 1 failed in 0.06s =============================
+============================ 1 failed in 0.02s =============================
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
See :ref:`Getting Started <getstarted>` for more examples.

View File

@@ -50,7 +50,7 @@ these patches.
:py:meth:`monkeypatch.chdir` to change the context of the current working directory
during a test.
-5. Use py:meth:`monkeypatch.syspath_prepend` to modify ``sys.path`` which will also
+5. Use :py:meth:`monkeypatch.syspath_prepend` to modify ``sys.path`` which will also
call :py:meth:`pkg_resources.fixup_namespace_packages` and :py:meth:`importlib.invalidate_caches`.
See the `monkeypatch blog post`_ for some introduction material
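A minimal sketch of the ``syspath_prepend`` use case from item 5 (the test name is illustrative; the change is undone automatically when the test ends):

import sys

def test_syspath_prepend(monkeypatch, tmp_path):
    # Prepend a directory to sys.path for the duration of this test only.
    monkeypatch.syspath_prepend(str(tmp_path))
    assert sys.path[0] == str(tmp_path)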

View File

@@ -75,7 +75,7 @@ them in turn:
E + where 54 = eval('6*9')
test_expectation.py:6: AssertionError
-======================= 1 failed, 2 passed in 0.05s ========================
+======================= 1 failed, 2 passed in 0.02s ========================
.. note::
@@ -128,7 +128,7 @@ Let's run this:
test_expectation.py ..x [100%]
-======================= 2 passed, 1 xfailed in 0.06s =======================
+======================= 2 passed, 1 xfailed in 0.02s =======================
The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test.
@@ -225,7 +225,7 @@ Let's also run with a stringinput that will lead to a failing test:
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
test_strings.py:4: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
As expected our test function fails.
@@ -239,7 +239,7 @@ list:
s [100%]
========================= short test summary info ==========================
SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:2
-1 skipped in 0.01s
+1 skipped in 0.00s
Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
those sets cannot be duplicated, otherwise an error will be raised.

View File

@@ -371,7 +371,7 @@ Running it with the report-on-xfail option gives this output:
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7
-============================ 7 xfailed in 0.17s ============================
+============================ 7 xfailed in 0.05s ============================
.. _`skip/xfail with parametrize`:

View File

@@ -4,7 +4,6 @@ Talks and Tutorials
.. sidebar:: Next Open Trainings
-- `Training at Workshoptage 2019 <https://workshoptage.ch/workshops/2019/test-driven-development-fuer-python-mit-pytest/>`_ (German), 10th September 2019, Rapperswil, Switzerland.
- `3 day hands-on workshop covering pytest, tox and devpi: "Professional Testing with Python" <https://python-academy.com/courses/specialtopics/python_course_testing.html>`_ (English), October 21 - 23, 2019, Leipzig, Germany.
.. _`funcargs`: funcargs.html

View File

@@ -64,7 +64,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmp_path.py:13: AssertionError
-============================ 1 failed in 0.06s =============================
+============================ 1 failed in 0.02s =============================
.. _`tmp_path_factory example`:
@@ -133,7 +133,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:9: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
.. _`tmpdir factory example`:

View File

@@ -166,7 +166,7 @@ the ``self.db`` values in the traceback:
E assert 0
test_unittest_db.py:13: AssertionError
-============================ 2 failed in 0.07s =============================
+============================ 2 failed in 0.02s =============================
This default pytest traceback shows that the two test methods
share the same ``self.db`` instance which was our intention
@@ -219,7 +219,7 @@ Running this test module ...:
$ pytest -q test_unittest_cleandir.py
. [100%]
-1 passed in 0.02s
+1 passed in 0.01s
... gives us one passed test because the ``initdir`` fixture function
was executed ahead of the ``test_method``.

View File

@@ -247,7 +247,7 @@ Example:
XPASS test_example.py::test_xpass always xfail
ERROR test_example.py::test_error - assert 0
FAILED test_example.py::test_fail - assert 0
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
The ``-r`` options accepts a number of characters after it, with ``a`` used
above meaning "all except passes".
@@ -297,7 +297,7 @@ More than one character can be used, so for example to only see failed and skipp
========================= short test summary info ==========================
FAILED test_example.py::test_fail - assert 0
SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
captured output:
@@ -336,7 +336,7 @@ captured output:
ok
========================= short test summary info ==========================
PASSED test_example.py::test_ok
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
.. _pdb-option:

View File

@@ -41,7 +41,7 @@ Running pytest now produces this output:
warnings.warn(UserWarning("api v1, should use functions from v2"))
-- Docs: https://docs.pytest.org/en/latest/warnings.html
-====================== 1 passed, 1 warnings in 0.01s =======================
+====================== 1 passed, 1 warnings in 0.00s =======================
The ``-W`` flag can be passed to control which warnings will be displayed or even turn
them into errors:
@@ -64,7 +64,7 @@ them into errors:
E UserWarning: api v1, should use functions from v2
test_show_warnings.py:5: UserWarning
-1 failed in 0.05s
+1 failed in 0.02s
The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option.
For example, the configuration below will ignore all user warnings, but will transform
@@ -407,7 +407,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
class Test:
-- Docs: https://docs.pytest.org/en/latest/warnings.html
-1 warnings in 0.01s
+1 warnings in 0.00s
These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.

View File

@@ -442,7 +442,7 @@ additionally it is possible to copy examples for an example folder before runnin
testdir.copy_example("test_example.py")
-- Docs: https://docs.pytest.org/en/latest/warnings.html
-====================== 2 passed, 1 warnings in 0.28s =======================
+====================== 2 passed, 1 warnings in 0.12s =======================
For more information about the result object that ``runpytest()`` returns, and
the methods that it provides please check out the :py:class:`RunResult
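A hedged sketch of the ``copy_example`` pattern shown in this hunk (it assumes a ``test_example.py`` with two passing tests exists under the directory configured via the ``pytester_example_dir`` ini option, and that the ``pytester`` plugin is enabled, e.g. ``pytest_plugins = "pytester"`` in the suite's conftest):

def test_plugin(testdir):
    # Copy the prepared example into the temporary test directory, run
    # pytest on it, and check the reported outcomes.
    testdir.copy_example("test_example.py")
    result = testdir.runpytest()
    result.assert_outcomes(passed=2)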

View File

@@ -591,7 +591,7 @@ class ExceptionInfo(Generic[_E]):
         )
         return fmt.repr_excinfo(self)

-    def match(self, regexp: Union[str, Pattern]) -> bool:
+    def match(self, regexp: "Union[str, Pattern]") -> bool:
         """
         Check whether the regular expression 'regexp' is found in the string
         representation of the exception using ``re.search``. If it matches
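Quoting the annotation is what makes this safe on Python 3.5.0/3.5.1 (issue #5751 in the changelog above): a string annotation is stored verbatim and never evaluated at definition time. A generic, hedged illustration (not pytest code):

def match(regexp: "Union[str, Pattern]") -> bool:
    # The annotation is only meaningful to static type checkers; it cannot
    # raise at import time even if Union/Pattern are unavailable at runtime.
    return True

print(match.__annotations__["regexp"])  # 'Union[str, Pattern]' -- still just a string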

View File

@@ -9,6 +9,7 @@ import sys
 from contextlib import contextmanager
 from inspect import Parameter
 from inspect import signature
+from typing import overload

 import attr
 import py
@@ -347,3 +348,9 @@ class FuncargnamesCompatAttr:
         warnings.warn(FUNCARGNAMES, stacklevel=2)
         return self.fixturenames
+
+
+if sys.version_info < (3, 5, 2):  # pragma: no cover
+
+    def overload(f):  # noqa: F811
+        return f
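The intent, as far as the diff shows, is that other modules import ``overload`` from ``_pytest.compat`` (see the ``python_api.py`` and ``recwarn.py`` hunks below), and on interpreters older than 3.5.2 the decorator degrades to a no-op. A standalone sketch of the same pattern, with an illustrative ``double`` function that is not part of pytest:

import sys
from typing import overload

if sys.version_info < (3, 5, 2):  # pragma: no cover
    # typing.overload is unusable on very old 3.5 releases, so fall back to
    # an identity decorator; only static type checkers need the real one.
    def overload(f):  # noqa: F811
        return f

@overload
def double(x: int) -> int: ...
@overload
def double(x: str) -> str: ...
def double(x):  # noqa: F811
    return x + x

print(double(2), double("ab"))  # 4 abab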

View File

@@ -30,6 +30,7 @@ from _pytest._code import filter_traceback
 from _pytest.compat import importlib_metadata
 from _pytest.outcomes import fail
 from _pytest.outcomes import Skipped
+from _pytest.pathlib import unique_path
 from _pytest.warning_types import PytestConfigWarning

 hookimpl = HookimplMarker("pytest")
@@ -366,7 +367,7 @@ class PytestPluginManager(PluginManager):
         """
         current = py.path.local()
         self._confcutdir = (
-            current.join(namespace.confcutdir, abs=True)
+            unique_path(current.join(namespace.confcutdir, abs=True))
             if namespace.confcutdir
             else None
         )
@@ -405,19 +406,18 @@ class PytestPluginManager(PluginManager):
         else:
             directory = path
+        directory = unique_path(directory)

         # XXX these days we may rather want to use config.rootdir
         # and allow users to opt into looking into the rootdir parent
         # directories instead of requiring to specify confcutdir
         clist = []
-        for parent in directory.realpath().parts():
+        for parent in directory.parts():
             if self._confcutdir and self._confcutdir.relto(parent):
                 continue
             conftestpath = parent.join("conftest.py")
             if conftestpath.isfile():
-                # Use realpath to avoid loading the same conftest twice
-                # with build systems that create build directories containing
-                # symlinks to actual files.
-                mod = self._importconftest(conftestpath.realpath())
+                mod = self._importconftest(conftestpath)
                 clist.append(mod)
         self._dirpath2confmods[directory] = clist
         return clist
@@ -432,6 +432,10 @@ class PytestPluginManager(PluginManager):
         raise KeyError(name)

     def _importconftest(self, conftestpath):
+        # Use realpath to avoid loading the same conftest twice
+        # with build systems that create build directories containing
+        # symlinks to actual files.
+        conftestpath = unique_path(conftestpath)
         try:
             return self._conftestpath2mod[conftestpath]
         except KeyError:

View File

@@ -72,7 +72,7 @@ def create_new_paste(contents):
     if m:
         return "{}/show/{}".format(url, m.group(1))
     else:
-        return "bad response: " + response
+        return "bad response: " + response.decode("utf-8")


 def pytest_terminal_summary(terminalreporter):
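A minimal illustration of the bug being fixed here: the HTTP body read from the paste service is ``bytes``, and ``str + bytes`` raises ``TypeError``, so the error response has to be decoded before being concatenated (the literal below stands in for the real response):

response = b"<html>Bad Request</html>"   # stand-in for urlopen(...).read()

try:
    "bad response: " + response          # what the old code effectively did
except TypeError as exc:
    print("old behaviour:", exc)

print("new behaviour:", "bad response: " + response.decode("utf-8"))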

View File

@@ -11,6 +11,7 @@ from functools import partial
 from os.path import expanduser
 from os.path import expandvars
 from os.path import isabs
+from os.path import normcase
 from os.path import sep
 from posixpath import sep as posix_sep
@@ -334,3 +335,12 @@ def fnmatch_ex(pattern, path):
 def parts(s):
     parts = s.split(sep)
     return {sep.join(parts[: i + 1]) or sep for i in range(len(parts))}
+
+
+def unique_path(path):
+    """Returns a unique path in case-insensitive (but case-preserving) file
+    systems such as Windows.
+
+    This is needed only for ``py.path.local``; ``pathlib.Path`` handles this
+    natively with ``resolve()``."""
+    return type(path)(normcase(str(path.realpath())))
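A hedged sketch of what the ``normcase`` part of ``unique_path`` buys on a case-insensitive filesystem (this demo is not pytest code; it uses ``ntpath.normcase`` so the Windows behaviour can be reproduced on any OS, and it omits the symlink resolution that ``realpath`` adds):

from ntpath import normcase  # Windows-style normalization, importable anywhere

seen = {}

def import_once(path):
    # Mimics the conftest cache: differently-cased spellings of the same
    # Windows path collapse to a single cache key after normcase().
    key = normcase(path)
    if key not in seen:
        seen[key] = object()   # stand-in for the imported conftest module
    return seen[key]

assert import_once(r"C:\Test\conftest.py") is import_once(r"c:\test\conftest.py")
print(sorted(seen))  # ['c:\\test\\conftest.py']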

View File

@@ -13,7 +13,6 @@ from typing import Callable
 from typing import cast
 from typing import Generic
 from typing import Optional
-from typing import overload
 from typing import Pattern
 from typing import Tuple
 from typing import TypeVar
@@ -22,12 +21,14 @@ from typing import Union
 from more_itertools.more import always_iterable

 import _pytest._code
+from _pytest.compat import overload
 from _pytest.compat import STRING_TYPES
 from _pytest.outcomes import fail

 if False:  # TYPE_CHECKING
     from typing import Type  # noqa: F401 (used in type string)

 BASE_TYPE = (type, STRING_TYPES)
@@ -547,12 +548,12 @@ _E = TypeVar("_E", bound=BaseException)
 def raises(
     expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
     *,
-    match: Optional[Union[str, Pattern]] = ...
+    match: "Optional[Union[str, Pattern]]" = ...
 ) -> "RaisesContext[_E]":
     ...  # pragma: no cover


-@overload
+@overload  # noqa: F811
 def raises(
     expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
     func: Callable,
@@ -563,10 +564,10 @@ def raises(
     ...  # pragma: no cover


-def raises(
+def raises(  # noqa: F811
     expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
     *args: Any,
-    match: Optional[Union[str, Pattern]] = None,
+    match: Optional[Union[str, "Pattern"]] = None,
     **kwargs: Any
 ) -> Union["RaisesContext[_E]", Optional[_pytest._code.ExceptionInfo[_E]]]:
     r"""
@@ -724,7 +725,7 @@ class RaisesContext(Generic[_E]):
         self,
         expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
         message: str,
-        match_expr: Optional[Union[str, Pattern]] = None,
+        match_expr: Optional[Union[str, "Pattern"]] = None,
     ) -> None:
         self.expected_exception = expected_exception
         self.message = message

View File

@ -7,11 +7,11 @@ from typing import Callable
from typing import Iterator from typing import Iterator
from typing import List from typing import List
from typing import Optional from typing import Optional
from typing import overload
from typing import Pattern from typing import Pattern
from typing import Tuple from typing import Tuple
from typing import Union from typing import Union
from _pytest.compat import overload
from _pytest.fixtures import yield_fixture from _pytest.fixtures import yield_fixture
from _pytest.outcomes import fail from _pytest.outcomes import fail
@ -58,26 +58,26 @@ def deprecated_call(func=None, *args, **kwargs):
def warns( def warns(
expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]], expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]],
*, *,
match: Optional[Union[str, Pattern]] = ... match: "Optional[Union[str, Pattern]]" = ...
) -> "WarningsChecker": ) -> "WarningsChecker":
... # pragma: no cover ... # pragma: no cover
@overload @overload # noqa: F811
def warns( def warns(
expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]], expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]],
func: Callable, func: Callable,
*args: Any, *args: Any,
match: Optional[Union[str, Pattern]] = ..., match: Optional[Union[str, "Pattern"]] = ...,
**kwargs: Any **kwargs: Any
) -> Union[Any]: ) -> Union[Any]:
... # pragma: no cover ... # pragma: no cover
def warns( def warns( # noqa: F811
expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]], expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]],
*args: Any, *args: Any,
match: Optional[Union[str, Pattern]] = None, match: Optional[Union[str, "Pattern"]] = None,
**kwargs: Any **kwargs: Any
) -> Union["WarningsChecker", Any]: ) -> Union["WarningsChecker", Any]:
r"""Assert that code raises a particular class of warning. r"""Assert that code raises a particular class of warning.
@ -207,7 +207,7 @@ class WarningsChecker(WarningsRecorder):
expected_warning: Optional[ expected_warning: Optional[
Union["Type[Warning]", Tuple["Type[Warning]", ...]] Union["Type[Warning]", Tuple["Type[Warning]", ...]]
] = None, ] = None,
match_expr: Optional[Union[str, Pattern]] = None, match_expr: Optional[Union[str, "Pattern"]] = None,
) -> None: ) -> None:
super().__init__() super().__init__()
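The warns() signatures get the same treatment: overload now comes from _pytest.compat and the Pattern annotations are string-quoted so they are only evaluated by type checkers. Presumably the compat shim falls back to a no-op decorator on interpreters where typing.overload is unusable; a rough sketch of that idea (not the actual _pytest.compat code):

# Rough sketch of a compat-style overload shim; the real _pytest.compat
# implementation may differ.
import sys

if sys.version_info >= (3, 5, 2):
    from typing import overload
else:  # pragma: no cover

    def overload(func):
        # Leave the decorated function untouched at runtime; static type
        # checkers still see the stacked @overload signatures.
        return func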

View File

@ -3,6 +3,7 @@ from typing import Optional
import py import py
from _pytest._code.code import ExceptionChainRepr
from _pytest._code.code import ExceptionInfo from _pytest._code.code import ExceptionInfo
from _pytest._code.code import ReprEntry from _pytest._code.code import ReprEntry
from _pytest._code.code import ReprEntryNative from _pytest._code.code import ReprEntryNative
@ -160,46 +161,7 @@ class BaseReport:
Experimental method. Experimental method.
""" """
return _report_to_json(self)
def disassembled_report(rep):
reprtraceback = rep.longrepr.reprtraceback.__dict__.copy()
reprcrash = rep.longrepr.reprcrash.__dict__.copy()
new_entries = []
for entry in reprtraceback["reprentries"]:
entry_data = {
"type": type(entry).__name__,
"data": entry.__dict__.copy(),
}
for key, value in entry_data["data"].items():
if hasattr(value, "__dict__"):
entry_data["data"][key] = value.__dict__.copy()
new_entries.append(entry_data)
reprtraceback["reprentries"] = new_entries
return {
"reprcrash": reprcrash,
"reprtraceback": reprtraceback,
"sections": rep.longrepr.sections,
}
d = self.__dict__.copy()
if hasattr(self.longrepr, "toterminal"):
if hasattr(self.longrepr, "reprtraceback") and hasattr(
self.longrepr, "reprcrash"
):
d["longrepr"] = disassembled_report(self)
else:
d["longrepr"] = str(self.longrepr)
else:
d["longrepr"] = self.longrepr
for name in d:
if isinstance(d[name], (py.path.local, Path)):
d[name] = str(d[name])
elif name == "result":
d[name] = None # for now
return d
@classmethod @classmethod
def _from_json(cls, reportdict): def _from_json(cls, reportdict):
@ -211,55 +173,8 @@ class BaseReport:
Experimental method. Experimental method.
""" """
if reportdict["longrepr"]: kwargs = _report_kwargs_from_json(reportdict)
if ( return cls(**kwargs)
"reprcrash" in reportdict["longrepr"]
and "reprtraceback" in reportdict["longrepr"]
):
reprtraceback = reportdict["longrepr"]["reprtraceback"]
reprcrash = reportdict["longrepr"]["reprcrash"]
unserialized_entries = []
reprentry = None
for entry_data in reprtraceback["reprentries"]:
data = entry_data["data"]
entry_type = entry_data["type"]
if entry_type == "ReprEntry":
reprfuncargs = None
reprfileloc = None
reprlocals = None
if data["reprfuncargs"]:
reprfuncargs = ReprFuncArgs(**data["reprfuncargs"])
if data["reprfileloc"]:
reprfileloc = ReprFileLocation(**data["reprfileloc"])
if data["reprlocals"]:
reprlocals = ReprLocals(data["reprlocals"]["lines"])
reprentry = ReprEntry(
lines=data["lines"],
reprfuncargs=reprfuncargs,
reprlocals=reprlocals,
filelocrepr=reprfileloc,
style=data["style"],
)
elif entry_type == "ReprEntryNative":
reprentry = ReprEntryNative(data["lines"])
else:
_report_unserialization_failure(entry_type, cls, reportdict)
unserialized_entries.append(reprentry)
reprtraceback["reprentries"] = unserialized_entries
exception_info = ReprExceptionInfo(
reprtraceback=ReprTraceback(**reprtraceback),
reprcrash=ReprFileLocation(**reprcrash),
)
for section in reportdict["longrepr"]["sections"]:
exception_info.addsection(*section)
reportdict["longrepr"] = exception_info
return cls(**reportdict)
def _report_unserialization_failure(type_name, report_class, reportdict): def _report_unserialization_failure(type_name, report_class, reportdict):
@ -424,3 +339,142 @@ def pytest_report_from_serializable(data):
assert False, "Unknown report_type unserialize data: {}".format( assert False, "Unknown report_type unserialize data: {}".format(
data["_report_type"] data["_report_type"]
) )
def _report_to_json(report):
"""
This was originally the serialize_report() function from xdist (ca03269).
Returns the contents of this report as a dict of builtin entries, suitable for
serialization.
"""
def serialize_repr_entry(entry):
entry_data = {"type": type(entry).__name__, "data": entry.__dict__.copy()}
for key, value in entry_data["data"].items():
if hasattr(value, "__dict__"):
entry_data["data"][key] = value.__dict__.copy()
return entry_data
def serialize_repr_traceback(reprtraceback):
result = reprtraceback.__dict__.copy()
result["reprentries"] = [
serialize_repr_entry(x) for x in reprtraceback.reprentries
]
return result
def serialize_repr_crash(reprcrash):
return reprcrash.__dict__.copy()
def serialize_longrepr(rep):
result = {
"reprcrash": serialize_repr_crash(rep.longrepr.reprcrash),
"reprtraceback": serialize_repr_traceback(rep.longrepr.reprtraceback),
"sections": rep.longrepr.sections,
}
if isinstance(rep.longrepr, ExceptionChainRepr):
result["chain"] = []
for repr_traceback, repr_crash, description in rep.longrepr.chain:
result["chain"].append(
(
serialize_repr_traceback(repr_traceback),
serialize_repr_crash(repr_crash),
description,
)
)
else:
result["chain"] = None
return result
d = report.__dict__.copy()
if hasattr(report.longrepr, "toterminal"):
if hasattr(report.longrepr, "reprtraceback") and hasattr(
report.longrepr, "reprcrash"
):
d["longrepr"] = serialize_longrepr(report)
else:
d["longrepr"] = str(report.longrepr)
else:
d["longrepr"] = report.longrepr
for name in d:
if isinstance(d[name], (py.path.local, Path)):
d[name] = str(d[name])
elif name == "result":
d[name] = None # for now
return d
def _report_kwargs_from_json(reportdict):
"""
    This is the deserialization counterpart of the serialize_report() function from xdist (ca03269).
Returns **kwargs that can be used to construct a TestReport or CollectReport instance.
"""
def deserialize_repr_entry(entry_data):
data = entry_data["data"]
entry_type = entry_data["type"]
if entry_type == "ReprEntry":
reprfuncargs = None
reprfileloc = None
reprlocals = None
if data["reprfuncargs"]:
reprfuncargs = ReprFuncArgs(**data["reprfuncargs"])
if data["reprfileloc"]:
reprfileloc = ReprFileLocation(**data["reprfileloc"])
if data["reprlocals"]:
reprlocals = ReprLocals(data["reprlocals"]["lines"])
reprentry = ReprEntry(
lines=data["lines"],
reprfuncargs=reprfuncargs,
reprlocals=reprlocals,
filelocrepr=reprfileloc,
style=data["style"],
)
elif entry_type == "ReprEntryNative":
reprentry = ReprEntryNative(data["lines"])
else:
_report_unserialization_failure(entry_type, TestReport, reportdict)
return reprentry
def deserialize_repr_traceback(repr_traceback_dict):
repr_traceback_dict["reprentries"] = [
deserialize_repr_entry(x) for x in repr_traceback_dict["reprentries"]
]
return ReprTraceback(**repr_traceback_dict)
def deserialize_repr_crash(repr_crash_dict):
return ReprFileLocation(**repr_crash_dict)
if (
reportdict["longrepr"]
and "reprcrash" in reportdict["longrepr"]
and "reprtraceback" in reportdict["longrepr"]
):
reprtraceback = deserialize_repr_traceback(
reportdict["longrepr"]["reprtraceback"]
)
reprcrash = deserialize_repr_crash(reportdict["longrepr"]["reprcrash"])
if reportdict["longrepr"]["chain"]:
chain = []
for repr_traceback_data, repr_crash_data, description in reportdict[
"longrepr"
]["chain"]:
chain.append(
(
deserialize_repr_traceback(repr_traceback_data),
deserialize_repr_crash(repr_crash_data),
description,
)
)
exception_info = ExceptionChainRepr(chain)
else:
exception_info = ReprExceptionInfo(reprtraceback, reprcrash)
for section in reportdict["longrepr"]["sections"]:
exception_info.addsection(*section)
reportdict["longrepr"] = exception_info
return reportdict
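For reference, the dict produced by _report_to_json() for a chained exception has roughly the shape below (field names taken from the helpers above, values invented); each "chain" entry is a (reprtraceback, reprcrash, description) triple mirroring ExceptionChainRepr.chain, which is what lets pytest-xdist ship chained exceptions across processes.

# Illustrative shape of the serialized longrepr for a two-link exception
# chain; every leaf is a builtin type, so it survives JSON-style transport.
serialized_longrepr = {
    "reprcrash": {
        "path": "test_mod.py",
        "lineno": 8,
        "message": "RuntimeError: runtime error",
    },
    "reprtraceback": {"reprentries": [], "extraline": None, "style": "long"},
    "sections": [],
    "chain": [
        (
            {"reprentries": [], "extraline": None, "style": "long"},
            {"path": "test_mod.py", "lineno": 3, "message": "ValueError: value error"},
            "The above exception was the direct cause of the following exception:",
        ),
        (
            {"reprentries": [], "extraline": None, "style": "long"},
            {"path": "test_mod.py", "lineno": 8, "message": "RuntimeError: runtime error"},
            None,
        ),
    ],
}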

View File

@ -1,8 +1,6 @@
import sys import sys
from unittest import mock from unittest import mock
from test_excinfo import TWMock
import _pytest._code import _pytest._code
import pytest import pytest
@ -168,17 +166,15 @@ class TestTracebackEntry:
class TestReprFuncArgs: class TestReprFuncArgs:
def test_not_raise_exception_with_mixed_encoding(self): def test_not_raise_exception_with_mixed_encoding(self, tw_mock):
from _pytest._code.code import ReprFuncArgs from _pytest._code.code import ReprFuncArgs
tw = TWMock()
args = [("unicode_string", "São Paulo"), ("utf8_string", b"S\xc3\xa3o Paulo")] args = [("unicode_string", "São Paulo"), ("utf8_string", b"S\xc3\xa3o Paulo")]
r = ReprFuncArgs(args) r = ReprFuncArgs(args)
r.toterminal(tw) r.toterminal(tw_mock)
assert ( assert (
tw.lines[0] tw_mock.lines[0]
== r"unicode_string = São Paulo, utf8_string = b'S\xc3\xa3o Paulo'" == r"unicode_string = São Paulo, utf8_string = b'S\xc3\xa3o Paulo'"
) )

View File

@ -31,33 +31,6 @@ def limited_recursion_depth():
sys.setrecursionlimit(before) sys.setrecursionlimit(before)
class TWMock:
WRITE = object()
def __init__(self):
self.lines = []
self.is_writing = False
def sep(self, sep, line=None):
self.lines.append((sep, line))
def write(self, msg, **kw):
self.lines.append((TWMock.WRITE, msg))
def line(self, line, **kw):
self.lines.append(line)
def markup(self, text, **kw):
return text
def get_write_msg(self, idx):
flag, msg = self.lines[idx]
assert flag == TWMock.WRITE
return msg
fullwidth = 80
def test_excinfo_simple() -> None: def test_excinfo_simple() -> None:
try: try:
raise ValueError raise ValueError
@ -658,7 +631,7 @@ raise ValueError()
assert loc.lineno == 3 assert loc.lineno == 3
# assert loc.message == "ValueError: hello" # assert loc.message == "ValueError: hello"
def test_repr_tracebackentry_lines2(self, importasmod): def test_repr_tracebackentry_lines2(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def func1(m, x, y, z): def func1(m, x, y, z):
@ -678,13 +651,12 @@ raise ValueError()
p = FormattedExcinfo(funcargs=True) p = FormattedExcinfo(funcargs=True)
repr_entry = p.repr_traceback_entry(entry) repr_entry = p.repr_traceback_entry(entry)
assert repr_entry.reprfuncargs.args == reprfuncargs.args assert repr_entry.reprfuncargs.args == reprfuncargs.args
tw = TWMock() repr_entry.toterminal(tw_mock)
repr_entry.toterminal(tw) assert tw_mock.lines[0] == "m = " + repr("m" * 90)
assert tw.lines[0] == "m = " + repr("m" * 90) assert tw_mock.lines[1] == "x = 5, y = 13"
assert tw.lines[1] == "x = 5, y = 13" assert tw_mock.lines[2] == "z = " + repr("z" * 120)
assert tw.lines[2] == "z = " + repr("z" * 120)
def test_repr_tracebackentry_lines_var_kw_args(self, importasmod): def test_repr_tracebackentry_lines_var_kw_args(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def func1(x, *y, **z): def func1(x, *y, **z):
@ -703,9 +675,8 @@ raise ValueError()
p = FormattedExcinfo(funcargs=True) p = FormattedExcinfo(funcargs=True)
repr_entry = p.repr_traceback_entry(entry) repr_entry = p.repr_traceback_entry(entry)
assert repr_entry.reprfuncargs.args == reprfuncargs.args assert repr_entry.reprfuncargs.args == reprfuncargs.args
tw = TWMock() repr_entry.toterminal(tw_mock)
repr_entry.toterminal(tw) assert tw_mock.lines[0] == "x = 'a', y = ('b',), z = {'c': 'd'}"
assert tw.lines[0] == "x = 'a', y = ('b',), z = {'c': 'd'}"
def test_repr_tracebackentry_short(self, importasmod): def test_repr_tracebackentry_short(self, importasmod):
mod = importasmod( mod = importasmod(
@ -842,7 +813,7 @@ raise ValueError()
assert p._makepath(__file__) == __file__ assert p._makepath(__file__) == __file__
p.repr_traceback(excinfo) p.repr_traceback(excinfo)
def test_repr_excinfo_addouterr(self, importasmod): def test_repr_excinfo_addouterr(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def entry(): def entry():
@ -852,10 +823,9 @@ raise ValueError()
excinfo = pytest.raises(ValueError, mod.entry) excinfo = pytest.raises(ValueError, mod.entry)
repr = excinfo.getrepr() repr = excinfo.getrepr()
repr.addsection("title", "content") repr.addsection("title", "content")
twmock = TWMock() repr.toterminal(tw_mock)
repr.toterminal(twmock) assert tw_mock.lines[-1] == "content"
assert twmock.lines[-1] == "content" assert tw_mock.lines[-2] == ("-", "title")
assert twmock.lines[-2] == ("-", "title")
def test_repr_excinfo_reprcrash(self, importasmod): def test_repr_excinfo_reprcrash(self, importasmod):
mod = importasmod( mod = importasmod(
@ -920,7 +890,7 @@ raise ValueError()
x = str(MyRepr()) x = str(MyRepr())
assert x == "я" assert x == "я"
def test_toterminal_long(self, importasmod): def test_toterminal_long(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def g(x): def g(x):
@ -932,27 +902,26 @@ raise ValueError()
excinfo = pytest.raises(ValueError, mod.f) excinfo = pytest.raises(ValueError, mod.f)
excinfo.traceback = excinfo.traceback.filter() excinfo.traceback = excinfo.traceback.filter()
repr = excinfo.getrepr() repr = excinfo.getrepr()
tw = TWMock() repr.toterminal(tw_mock)
repr.toterminal(tw) assert tw_mock.lines[0] == ""
assert tw.lines[0] == "" tw_mock.lines.pop(0)
tw.lines.pop(0) assert tw_mock.lines[0] == " def f():"
assert tw.lines[0] == " def f():" assert tw_mock.lines[1] == "> g(3)"
assert tw.lines[1] == "> g(3)" assert tw_mock.lines[2] == ""
assert tw.lines[2] == "" line = tw_mock.get_write_msg(3)
line = tw.get_write_msg(3)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[4] == (":5: ") assert tw_mock.lines[4] == (":5: ")
assert tw.lines[5] == ("_ ", None) assert tw_mock.lines[5] == ("_ ", None)
assert tw.lines[6] == "" assert tw_mock.lines[6] == ""
assert tw.lines[7] == " def g(x):" assert tw_mock.lines[7] == " def g(x):"
assert tw.lines[8] == "> raise ValueError(x)" assert tw_mock.lines[8] == "> raise ValueError(x)"
assert tw.lines[9] == "E ValueError: 3" assert tw_mock.lines[9] == "E ValueError: 3"
assert tw.lines[10] == "" assert tw_mock.lines[10] == ""
line = tw.get_write_msg(11) line = tw_mock.get_write_msg(11)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[12] == ":3: ValueError" assert tw_mock.lines[12] == ":3: ValueError"
def test_toterminal_long_missing_source(self, importasmod, tmpdir): def test_toterminal_long_missing_source(self, importasmod, tmpdir, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def g(x): def g(x):
@ -965,25 +934,24 @@ raise ValueError()
tmpdir.join("mod.py").remove() tmpdir.join("mod.py").remove()
excinfo.traceback = excinfo.traceback.filter() excinfo.traceback = excinfo.traceback.filter()
repr = excinfo.getrepr() repr = excinfo.getrepr()
tw = TWMock() repr.toterminal(tw_mock)
repr.toterminal(tw) assert tw_mock.lines[0] == ""
assert tw.lines[0] == "" tw_mock.lines.pop(0)
tw.lines.pop(0) assert tw_mock.lines[0] == "> ???"
assert tw.lines[0] == "> ???" assert tw_mock.lines[1] == ""
assert tw.lines[1] == "" line = tw_mock.get_write_msg(2)
line = tw.get_write_msg(2)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[3] == ":5: " assert tw_mock.lines[3] == ":5: "
assert tw.lines[4] == ("_ ", None) assert tw_mock.lines[4] == ("_ ", None)
assert tw.lines[5] == "" assert tw_mock.lines[5] == ""
assert tw.lines[6] == "> ???" assert tw_mock.lines[6] == "> ???"
assert tw.lines[7] == "E ValueError: 3" assert tw_mock.lines[7] == "E ValueError: 3"
assert tw.lines[8] == "" assert tw_mock.lines[8] == ""
line = tw.get_write_msg(9) line = tw_mock.get_write_msg(9)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[10] == ":3: ValueError" assert tw_mock.lines[10] == ":3: ValueError"
def test_toterminal_long_incomplete_source(self, importasmod, tmpdir): def test_toterminal_long_incomplete_source(self, importasmod, tmpdir, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def g(x): def g(x):
@ -996,25 +964,24 @@ raise ValueError()
tmpdir.join("mod.py").write("asdf") tmpdir.join("mod.py").write("asdf")
excinfo.traceback = excinfo.traceback.filter() excinfo.traceback = excinfo.traceback.filter()
repr = excinfo.getrepr() repr = excinfo.getrepr()
tw = TWMock() repr.toterminal(tw_mock)
repr.toterminal(tw) assert tw_mock.lines[0] == ""
assert tw.lines[0] == "" tw_mock.lines.pop(0)
tw.lines.pop(0) assert tw_mock.lines[0] == "> ???"
assert tw.lines[0] == "> ???" assert tw_mock.lines[1] == ""
assert tw.lines[1] == "" line = tw_mock.get_write_msg(2)
line = tw.get_write_msg(2)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[3] == ":5: " assert tw_mock.lines[3] == ":5: "
assert tw.lines[4] == ("_ ", None) assert tw_mock.lines[4] == ("_ ", None)
assert tw.lines[5] == "" assert tw_mock.lines[5] == ""
assert tw.lines[6] == "> ???" assert tw_mock.lines[6] == "> ???"
assert tw.lines[7] == "E ValueError: 3" assert tw_mock.lines[7] == "E ValueError: 3"
assert tw.lines[8] == "" assert tw_mock.lines[8] == ""
line = tw.get_write_msg(9) line = tw_mock.get_write_msg(9)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[10] == ":3: ValueError" assert tw_mock.lines[10] == ":3: ValueError"
def test_toterminal_long_filenames(self, importasmod): def test_toterminal_long_filenames(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def f(): def f():
@ -1022,23 +989,22 @@ raise ValueError()
""" """
) )
excinfo = pytest.raises(ValueError, mod.f) excinfo = pytest.raises(ValueError, mod.f)
tw = TWMock()
path = py.path.local(mod.__file__) path = py.path.local(mod.__file__)
old = path.dirpath().chdir() old = path.dirpath().chdir()
try: try:
repr = excinfo.getrepr(abspath=False) repr = excinfo.getrepr(abspath=False)
repr.toterminal(tw) repr.toterminal(tw_mock)
x = py.path.local().bestrelpath(path) x = py.path.local().bestrelpath(path)
if len(x) < len(str(path)): if len(x) < len(str(path)):
msg = tw.get_write_msg(-2) msg = tw_mock.get_write_msg(-2)
assert msg == "mod.py" assert msg == "mod.py"
assert tw.lines[-1] == ":3: ValueError" assert tw_mock.lines[-1] == ":3: ValueError"
repr = excinfo.getrepr(abspath=True) repr = excinfo.getrepr(abspath=True)
repr.toterminal(tw) repr.toterminal(tw_mock)
msg = tw.get_write_msg(-2) msg = tw_mock.get_write_msg(-2)
assert msg == path assert msg == path
line = tw.lines[-1] line = tw_mock.lines[-1]
assert line == ":3: ValueError" assert line == ":3: ValueError"
finally: finally:
old.chdir() old.chdir()
@ -1073,7 +1039,7 @@ raise ValueError()
repr.toterminal(tw) repr.toterminal(tw)
assert tw.stringio.getvalue() assert tw.stringio.getvalue()
def test_traceback_repr_style(self, importasmod): def test_traceback_repr_style(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
def f(): def f():
@ -1091,35 +1057,34 @@ raise ValueError()
excinfo.traceback[1].set_repr_style("short") excinfo.traceback[1].set_repr_style("short")
excinfo.traceback[2].set_repr_style("short") excinfo.traceback[2].set_repr_style("short")
r = excinfo.getrepr(style="long") r = excinfo.getrepr(style="long")
tw = TWMock() r.toterminal(tw_mock)
r.toterminal(tw) for line in tw_mock.lines:
for line in tw.lines:
print(line) print(line)
assert tw.lines[0] == "" assert tw_mock.lines[0] == ""
assert tw.lines[1] == " def f():" assert tw_mock.lines[1] == " def f():"
assert tw.lines[2] == "> g()" assert tw_mock.lines[2] == "> g()"
assert tw.lines[3] == "" assert tw_mock.lines[3] == ""
msg = tw.get_write_msg(4) msg = tw_mock.get_write_msg(4)
assert msg.endswith("mod.py") assert msg.endswith("mod.py")
assert tw.lines[5] == ":3: " assert tw_mock.lines[5] == ":3: "
assert tw.lines[6] == ("_ ", None) assert tw_mock.lines[6] == ("_ ", None)
tw.get_write_msg(7) tw_mock.get_write_msg(7)
assert tw.lines[8].endswith("in g") assert tw_mock.lines[8].endswith("in g")
assert tw.lines[9] == " h()" assert tw_mock.lines[9] == " h()"
tw.get_write_msg(10) tw_mock.get_write_msg(10)
assert tw.lines[11].endswith("in h") assert tw_mock.lines[11].endswith("in h")
assert tw.lines[12] == " i()" assert tw_mock.lines[12] == " i()"
assert tw.lines[13] == ("_ ", None) assert tw_mock.lines[13] == ("_ ", None)
assert tw.lines[14] == "" assert tw_mock.lines[14] == ""
assert tw.lines[15] == " def i():" assert tw_mock.lines[15] == " def i():"
assert tw.lines[16] == "> raise ValueError()" assert tw_mock.lines[16] == "> raise ValueError()"
assert tw.lines[17] == "E ValueError" assert tw_mock.lines[17] == "E ValueError"
assert tw.lines[18] == "" assert tw_mock.lines[18] == ""
msg = tw.get_write_msg(19) msg = tw_mock.get_write_msg(19)
msg.endswith("mod.py") msg.endswith("mod.py")
assert tw.lines[20] == ":9: ValueError" assert tw_mock.lines[20] == ":9: ValueError"
def test_exc_chain_repr(self, importasmod): def test_exc_chain_repr(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
class Err(Exception): class Err(Exception):
@ -1140,72 +1105,71 @@ raise ValueError()
) )
excinfo = pytest.raises(AttributeError, mod.f) excinfo = pytest.raises(AttributeError, mod.f)
r = excinfo.getrepr(style="long") r = excinfo.getrepr(style="long")
tw = TWMock() r.toterminal(tw_mock)
r.toterminal(tw) for line in tw_mock.lines:
for line in tw.lines:
print(line) print(line)
assert tw.lines[0] == "" assert tw_mock.lines[0] == ""
assert tw.lines[1] == " def f():" assert tw_mock.lines[1] == " def f():"
assert tw.lines[2] == " try:" assert tw_mock.lines[2] == " try:"
assert tw.lines[3] == "> g()" assert tw_mock.lines[3] == "> g()"
assert tw.lines[4] == "" assert tw_mock.lines[4] == ""
line = tw.get_write_msg(5) line = tw_mock.get_write_msg(5)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[6] == ":6: " assert tw_mock.lines[6] == ":6: "
assert tw.lines[7] == ("_ ", None) assert tw_mock.lines[7] == ("_ ", None)
assert tw.lines[8] == "" assert tw_mock.lines[8] == ""
assert tw.lines[9] == " def g():" assert tw_mock.lines[9] == " def g():"
assert tw.lines[10] == "> raise ValueError()" assert tw_mock.lines[10] == "> raise ValueError()"
assert tw.lines[11] == "E ValueError" assert tw_mock.lines[11] == "E ValueError"
assert tw.lines[12] == "" assert tw_mock.lines[12] == ""
line = tw.get_write_msg(13) line = tw_mock.get_write_msg(13)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[14] == ":12: ValueError" assert tw_mock.lines[14] == ":12: ValueError"
assert tw.lines[15] == "" assert tw_mock.lines[15] == ""
assert ( assert (
tw.lines[16] tw_mock.lines[16]
== "The above exception was the direct cause of the following exception:" == "The above exception was the direct cause of the following exception:"
) )
assert tw.lines[17] == "" assert tw_mock.lines[17] == ""
assert tw.lines[18] == " def f():" assert tw_mock.lines[18] == " def f():"
assert tw.lines[19] == " try:" assert tw_mock.lines[19] == " try:"
assert tw.lines[20] == " g()" assert tw_mock.lines[20] == " g()"
assert tw.lines[21] == " except Exception as e:" assert tw_mock.lines[21] == " except Exception as e:"
assert tw.lines[22] == "> raise Err() from e" assert tw_mock.lines[22] == "> raise Err() from e"
assert tw.lines[23] == "E test_exc_chain_repr0.mod.Err" assert tw_mock.lines[23] == "E test_exc_chain_repr0.mod.Err"
assert tw.lines[24] == "" assert tw_mock.lines[24] == ""
line = tw.get_write_msg(25) line = tw_mock.get_write_msg(25)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[26] == ":8: Err" assert tw_mock.lines[26] == ":8: Err"
assert tw.lines[27] == "" assert tw_mock.lines[27] == ""
assert ( assert (
tw.lines[28] tw_mock.lines[28]
== "During handling of the above exception, another exception occurred:" == "During handling of the above exception, another exception occurred:"
) )
assert tw.lines[29] == "" assert tw_mock.lines[29] == ""
assert tw.lines[30] == " def f():" assert tw_mock.lines[30] == " def f():"
assert tw.lines[31] == " try:" assert tw_mock.lines[31] == " try:"
assert tw.lines[32] == " g()" assert tw_mock.lines[32] == " g()"
assert tw.lines[33] == " except Exception as e:" assert tw_mock.lines[33] == " except Exception as e:"
assert tw.lines[34] == " raise Err() from e" assert tw_mock.lines[34] == " raise Err() from e"
assert tw.lines[35] == " finally:" assert tw_mock.lines[35] == " finally:"
assert tw.lines[36] == "> h()" assert tw_mock.lines[36] == "> h()"
assert tw.lines[37] == "" assert tw_mock.lines[37] == ""
line = tw.get_write_msg(38) line = tw_mock.get_write_msg(38)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[39] == ":10: " assert tw_mock.lines[39] == ":10: "
assert tw.lines[40] == ("_ ", None) assert tw_mock.lines[40] == ("_ ", None)
assert tw.lines[41] == "" assert tw_mock.lines[41] == ""
assert tw.lines[42] == " def h():" assert tw_mock.lines[42] == " def h():"
assert tw.lines[43] == "> raise AttributeError()" assert tw_mock.lines[43] == "> raise AttributeError()"
assert tw.lines[44] == "E AttributeError" assert tw_mock.lines[44] == "E AttributeError"
assert tw.lines[45] == "" assert tw_mock.lines[45] == ""
line = tw.get_write_msg(46) line = tw_mock.get_write_msg(46)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[47] == ":15: AttributeError" assert tw_mock.lines[47] == ":15: AttributeError"
@pytest.mark.parametrize("mode", ["from_none", "explicit_suppress"]) @pytest.mark.parametrize("mode", ["from_none", "explicit_suppress"])
def test_exc_repr_chain_suppression(self, importasmod, mode): def test_exc_repr_chain_suppression(self, importasmod, mode, tw_mock):
"""Check that exc repr does not show chained exceptions in Python 3. """Check that exc repr does not show chained exceptions in Python 3.
- When the exception is raised with "from None" - When the exception is raised with "from None"
- Explicitly suppressed with "chain=False" to ExceptionInfo.getrepr(). - Explicitly suppressed with "chain=False" to ExceptionInfo.getrepr().
@ -1226,24 +1190,23 @@ raise ValueError()
) )
excinfo = pytest.raises(AttributeError, mod.f) excinfo = pytest.raises(AttributeError, mod.f)
r = excinfo.getrepr(style="long", chain=mode != "explicit_suppress") r = excinfo.getrepr(style="long", chain=mode != "explicit_suppress")
tw = TWMock() r.toterminal(tw_mock)
r.toterminal(tw) for line in tw_mock.lines:
for line in tw.lines:
print(line) print(line)
assert tw.lines[0] == "" assert tw_mock.lines[0] == ""
assert tw.lines[1] == " def f():" assert tw_mock.lines[1] == " def f():"
assert tw.lines[2] == " try:" assert tw_mock.lines[2] == " try:"
assert tw.lines[3] == " g()" assert tw_mock.lines[3] == " g()"
assert tw.lines[4] == " except Exception:" assert tw_mock.lines[4] == " except Exception:"
assert tw.lines[5] == "> raise AttributeError(){}".format( assert tw_mock.lines[5] == "> raise AttributeError(){}".format(
raise_suffix raise_suffix
) )
assert tw.lines[6] == "E AttributeError" assert tw_mock.lines[6] == "E AttributeError"
assert tw.lines[7] == "" assert tw_mock.lines[7] == ""
line = tw.get_write_msg(8) line = tw_mock.get_write_msg(8)
assert line.endswith("mod.py") assert line.endswith("mod.py")
assert tw.lines[9] == ":6: AttributeError" assert tw_mock.lines[9] == ":6: AttributeError"
assert len(tw.lines) == 10 assert len(tw_mock.lines) == 10
@pytest.mark.parametrize( @pytest.mark.parametrize(
"reason, description", "reason, description",
@ -1304,7 +1267,7 @@ raise ValueError()
] ]
) )
def test_exc_chain_repr_cycle(self, importasmod): def test_exc_chain_repr_cycle(self, importasmod, tw_mock):
mod = importasmod( mod = importasmod(
""" """
class Err(Exception): class Err(Exception):
@ -1325,9 +1288,8 @@ raise ValueError()
) )
excinfo = pytest.raises(ZeroDivisionError, mod.unreraise) excinfo = pytest.raises(ZeroDivisionError, mod.unreraise)
r = excinfo.getrepr(style="short") r = excinfo.getrepr(style="short")
tw = TWMock() r.toterminal(tw_mock)
r.toterminal(tw) out = "\n".join(line for line in tw_mock.lines if isinstance(line, str))
out = "\n".join(line for line in tw.lines if isinstance(line, str))
expected_out = textwrap.dedent( expected_out = textwrap.dedent(
"""\ """\
:13: in unreraise :13: in unreraise

View File

@ -55,3 +55,36 @@ def pytest_collection_modifyitems(config, items):
items[:] = fast_items + neutral_items + slow_items + slowest_items items[:] = fast_items + neutral_items + slow_items + slowest_items
yield yield
@pytest.fixture
def tw_mock():
"""Returns a mock terminal writer"""
class TWMock:
WRITE = object()
def __init__(self):
self.lines = []
self.is_writing = False
def sep(self, sep, line=None):
self.lines.append((sep, line))
def write(self, msg, **kw):
self.lines.append((TWMock.WRITE, msg))
def line(self, line, **kw):
self.lines.append(line)
def markup(self, text, **kw):
return text
def get_write_msg(self, idx):
flag, msg = self.lines[idx]
assert flag == TWMock.WRITE
return msg
fullwidth = 80
return TWMock()
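With TWMock promoted to a fixture, any test can request tw_mock, hand it to a toterminal() implementation, and assert on the recorded lines. A hypothetical example (not one of the tests in this commit):

# Hypothetical test using the tw_mock fixture above: render a simple
# TerminalRepr object against the mock writer and inspect what it recorded.
def test_reprfileloc_uses_writer(tw_mock):
    from _pytest._code.code import ReprFileLocation

    ReprFileLocation("mod.py", 3, "ValueError").toterminal(tw_mock)
    # toterminal() calls tw.write() for the path and tw.line() for ":3: ..."
    assert tw_mock.get_write_msg(0) == "mod.py"
    assert tw_mock.lines[1] == ":3: ValueError"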

View File

@ -163,9 +163,16 @@ class TestRaises:
class T: class T:
def __call__(self): def __call__(self):
# Early versions of Python 3.5 have some bug causing the
# __call__ frame to still refer to t even after everything
# is done. This makes the test pass for them.
if sys.version_info < (3, 5, 2): # pragma: no cover
del self
raise ValueError raise ValueError
t = T() t = T()
refcount = len(gc.get_referrers(t))
if method == "function": if method == "function":
pytest.raises(ValueError, t) pytest.raises(ValueError, t)
else: else:
@ -175,14 +182,7 @@ class TestRaises:
# ensure both forms of pytest.raises don't leave exceptions in sys.exc_info() # ensure both forms of pytest.raises don't leave exceptions in sys.exc_info()
assert sys.exc_info() == (None, None, None) assert sys.exc_info() == (None, None, None)
del t assert refcount == len(gc.get_referrers(t))
# Make sure this does get updated in locals dict
# otherwise it could keep a reference
locals()
# ensure the t instance is not stuck in a cyclic reference
for o in gc.get_objects():
assert type(o) is not T
def test_raises_match(self): def test_raises_match(self):
msg = r"with base \d+" msg = r"with base \d+"
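The rewritten leak check above no longer walks every GC-tracked object; it compares the number of referrers of t before and after the pytest.raises call. The same technique in isolation (standalone sketch, unrelated to pytest internals):

# Standalone sketch of the referrer-count technique: verify that calling a
# function with an object does not leave stray references to it behind.
import gc


class Boom:
    def __call__(self):
        raise ValueError


def call_and_swallow(obj):
    try:
        obj()
    except ValueError:
        pass


t = Boom()
refcount = len(gc.get_referrers(t))
call_and_swallow(t)
assert refcount == len(gc.get_referrers(t)), "a reference to t leaked"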

View File

@ -1,3 +1,4 @@
import os.path
import textwrap import textwrap
import py import py
@ -5,6 +6,7 @@ import py
import pytest import pytest
from _pytest.config import PytestPluginManager from _pytest.config import PytestPluginManager
from _pytest.main import ExitCode from _pytest.main import ExitCode
from _pytest.pathlib import unique_path
def ConftestWithSetinitial(path): def ConftestWithSetinitial(path):
@ -141,11 +143,11 @@ def test_conftestcutdir(testdir):
# but we can still import a conftest directly # but we can still import a conftest directly
conftest._importconftest(conf) conftest._importconftest(conf)
values = conftest._getconftestmodules(conf.dirpath()) values = conftest._getconftestmodules(conf.dirpath())
assert values[0].__file__.startswith(str(conf)) assert values[0].__file__.startswith(str(unique_path(conf)))
# and all sub paths get updated properly # and all sub paths get updated properly
values = conftest._getconftestmodules(p) values = conftest._getconftestmodules(p)
assert len(values) == 1 assert len(values) == 1
assert values[0].__file__.startswith(str(conf)) assert values[0].__file__.startswith(str(unique_path(conf)))
def test_conftestcutdir_inplace_considered(testdir): def test_conftestcutdir_inplace_considered(testdir):
@ -154,7 +156,7 @@ def test_conftestcutdir_inplace_considered(testdir):
conftest_setinitial(conftest, [conf.dirpath()], confcutdir=conf.dirpath()) conftest_setinitial(conftest, [conf.dirpath()], confcutdir=conf.dirpath())
values = conftest._getconftestmodules(conf.dirpath()) values = conftest._getconftestmodules(conf.dirpath())
assert len(values) == 1 assert len(values) == 1
assert values[0].__file__.startswith(str(conf)) assert values[0].__file__.startswith(str(unique_path(conf)))
@pytest.mark.parametrize("name", "test tests whatever .dotdir".split()) @pytest.mark.parametrize("name", "test tests whatever .dotdir".split())
@ -164,7 +166,7 @@ def test_setinitial_conftest_subdirs(testdir, name):
conftest = PytestPluginManager() conftest = PytestPluginManager()
conftest_setinitial(conftest, [sub.dirpath()], confcutdir=testdir.tmpdir) conftest_setinitial(conftest, [sub.dirpath()], confcutdir=testdir.tmpdir)
if name not in ("whatever", ".dotdir"): if name not in ("whatever", ".dotdir"):
assert subconftest in conftest._conftestpath2mod assert unique_path(subconftest) in conftest._conftestpath2mod
assert len(conftest._conftestpath2mod) == 1 assert len(conftest._conftestpath2mod) == 1
else: else:
assert subconftest not in conftest._conftestpath2mod assert subconftest not in conftest._conftestpath2mod
@ -275,6 +277,21 @@ def test_conftest_symlink_files(testdir):
assert result.ret == ExitCode.OK assert result.ret == ExitCode.OK
@pytest.mark.skipif(
os.path.normcase("x") != os.path.normcase("X"),
reason="only relevant for case insensitive file systems",
)
def test_conftest_badcase(testdir):
"""Check conftest.py loading when directory casing is wrong."""
testdir.tmpdir.mkdir("JenkinsRoot").mkdir("test")
source = {"setup.py": "", "test/__init__.py": "", "test/conftest.py": ""}
testdir.makepyfile(**{"JenkinsRoot/%s" % k: v for k, v in source.items()})
testdir.tmpdir.join("jenkinsroot/test").chdir()
result = testdir.runpytest()
assert result.ret == ExitCode.NO_TESTS_COLLECTED
def test_no_conftest(testdir): def test_no_conftest(testdir):
testdir.makeconftest("assert 0") testdir.makeconftest("assert 0")
result = testdir.runpytest("--noconftest") result = testdir.runpytest("--noconftest")
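These assertions now normalize through _pytest.pathlib.unique_path, and test_conftest_badcase reproduces the Windows issue from the changelog: conftest.py could be registered under two different spellings when the working directory's casing did not match the filesystem's. The essence of such a normalization helper, as a rough sketch (not the real unique_path implementation):

# Rough sketch of case normalization on case-insensitive filesystems; the
# actual _pytest.pathlib.unique_path may be implemented differently.
import os


def unique_path_sketch(path):
    # normcase() lowercases (and normalizes slashes) on Windows, so
    # differently cased spellings of the same directory compare equal
    # after realpath() resolves them.
    return os.path.normcase(os.path.realpath(str(path)))


# On Windows, unique_path_sketch("C:\\Test") and unique_path_sketch("c:\\test")
# then compare equal when they name the same directory on disk.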

View File

@ -116,3 +116,15 @@ class TestPaste:
assert "lexer=%s" % lexer in data.decode() assert "lexer=%s" % lexer in data.decode()
assert "code=full-paste-contents" in data.decode() assert "code=full-paste-contents" in data.decode()
assert "expiry=1week" in data.decode() assert "expiry=1week" in data.decode()
def test_create_new_paste_failure(self, pastebin, monkeypatch):
import io
import urllib.request
def response(url, data):
stream = io.BytesIO(b"something bad occurred")
return stream
monkeypatch.setattr(urllib.request, "urlopen", response)
result = pastebin.create_new_paste(b"full-paste-contents")
assert result == "bad response: something bad occurred"
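The new test stubs out urllib.request.urlopen with a BytesIO body that is not a valid paste-service response and checks that create_new_paste reports it as "bad response: ..." rather than raising while decoding (the --pastebin changelog entry). The error path presumably reduces to something like this sketch (not the actual _pytest.pastebin code):

# Sketch of the behaviour under test: decode the raw bytes defensively before
# building the message, so an unexpected body cannot crash the error report.
def describe_paste_response(body: bytes) -> str:
    text = body.decode("utf-8", errors="replace")
    if "https://bpaste.net" in text:  # assumed success marker, for illustration
        return text                   # a real implementation would extract the URL
    return "bad response: " + text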

View File

@ -1,4 +1,5 @@
import pytest import pytest
from _pytest._code.code import ExceptionChainRepr
from _pytest.pathlib import Path from _pytest.pathlib import Path
from _pytest.reports import CollectReport from _pytest.reports import CollectReport
from _pytest.reports import TestReport from _pytest.reports import TestReport
@ -220,8 +221,8 @@ class TestReportSerialization:
assert data["path1"] == str(testdir.tmpdir) assert data["path1"] == str(testdir.tmpdir)
assert data["path2"] == str(testdir.tmpdir) assert data["path2"] == str(testdir.tmpdir)
def test_unserialization_failure(self, testdir): def test_deserialization_failure(self, testdir):
"""Check handling of failure during unserialization of report types.""" """Check handling of failure during deserialization of report types."""
testdir.makepyfile( testdir.makepyfile(
""" """
def test_a(): def test_a():
@ -242,6 +243,75 @@ class TestReportSerialization:
): ):
TestReport._from_json(data) TestReport._from_json(data)
@pytest.mark.parametrize("report_class", [TestReport, CollectReport])
def test_chained_exceptions(self, testdir, tw_mock, report_class):
"""Check serialization/deserialization of report objects containing chained exceptions (#5786)"""
testdir.makepyfile(
"""
def foo():
raise ValueError('value error')
def test_a():
try:
foo()
except ValueError as e:
raise RuntimeError('runtime error') from e
if {error_during_import}:
test_a()
""".format(
error_during_import=report_class is CollectReport
)
)
reprec = testdir.inline_run()
if report_class is TestReport:
reports = reprec.getreports("pytest_runtest_logreport")
# we have 3 reports: setup/call/teardown
assert len(reports) == 3
# get the call report
report = reports[1]
else:
assert report_class is CollectReport
# two collection reports: session and test file
reports = reprec.getreports("pytest_collectreport")
assert len(reports) == 2
report = reports[1]
def check_longrepr(longrepr):
"""Check the attributes of the given longrepr object according to the test file.
We can get away with testing both CollectReport and TestReport with this function because
the longrepr objects are very similar.
"""
assert isinstance(longrepr, ExceptionChainRepr)
assert longrepr.sections == [("title", "contents", "=")]
assert len(longrepr.chain) == 2
entry1, entry2 = longrepr.chain
tb1, fileloc1, desc1 = entry1
tb2, fileloc2, desc2 = entry2
assert "ValueError('value error')" in str(tb1)
assert "RuntimeError('runtime error')" in str(tb2)
assert (
desc1
== "The above exception was the direct cause of the following exception:"
)
assert desc2 is None
assert report.failed
assert len(report.sections) == 0
report.longrepr.addsection("title", "contents", "=")
check_longrepr(report.longrepr)
data = report._to_json()
loaded_report = report_class._from_json(data)
check_longrepr(loaded_report.longrepr)
# make sure we don't blow up on ``toterminal`` call; we don't test the actual output because it is very
# brittle and hard to maintain, but we can assume it is correct because ``toterminal`` is already tested
# elsewhere and we do check the contents of the longrepr object after loading it.
loaded_report.longrepr.toterminal(tw_mock)
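The round trip exercised here is the same one pytest-xdist performs when shipping reports between processes. Condensed into a helper (a sketch built only from calls that appear in this test):

# Sketch of the serialize/rebuild/render cycle verified by the test above.
def roundtrip_report(report, tw):
    from _pytest.reports import TestReport

    data = report._to_json()              # plain dicts/lists/strings only
    loaded = TestReport._from_json(data)  # rebuild on the receiving side
    if loaded.longrepr is not None:
        loaded.longrepr.toterminal(tw)    # rendering must still work
    return loaded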
class TestHooks: class TestHooks:
"""Test that the hooks are working correctly for plugins""" """Test that the hooks are working correctly for plugins"""