Preparing release version 5.1.1
parent daff9066c0
commit b135f5af8d
@@ -18,6 +18,15 @@ with advance notice in the **Deprecations** section of releases.
 
 .. towncrier release notes start
 
+pytest 5.1.1 (2019-08-20)
+=========================
+
+Bug Fixes
+---------
+
+- `#5751 <https://github.com/pytest-dev/pytest/issues/5751>`_: Fixed ``TypeError`` when importing pytest on Python 3.5.0 and 3.5.1.
+
+
 pytest 5.1.0 (2019-08-15)
 =========================
 
@@ -1 +0,0 @@
-Fixed ``TypeError`` when importing pytest on Python 3.5.0 and 3.5.1.
@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2
 
 
+   release-5.1.1
    release-5.1.0
    release-5.0.1
    release-5.0.0
@@ -0,0 +1,24 @@
+pytest-5.1.1
+=======================================
+
+pytest 5.1.1 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Anthony Sottile
+* Bruno Oliveira
+* Daniel Hahler
+* Florian Bruhin
+* Hugo van Kemenade
+* Ran Benita
+* Ronny Pfannschmidt
+
+
+Happy testing,
+The pytest Development Team
@@ -47,7 +47,7 @@ you will see the return value of the function call:
 E        +  where 3 = f()
 
 test_assert1.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
 
 ``pytest`` has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary
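
For context, the failure above comes from the docs' assertion-introspection example; a minimal sketch of such a test (the name ``f`` is taken from the output above, the rest is assumed) would be:

.. code-block:: python

    # test_assert1.py -- sketch based on the failure output shown above
    def f():
        return 3


    def test_function():
        # pytest's assertion rewriting reports the intermediate value of f()
        assert f() == 4
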
@@ -208,7 +208,7 @@ if you run this module:
 E         Use -v to get the full diff
 
 test_assert2.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
 
 Special comparisons are done for a number of cases:
 
@@ -279,7 +279,7 @@ the conftest file:
 E            vals: 1 != 2
 
 test_foocompare.py:12: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
 
 .. _assert-details:
 .. _`assert introspection`:
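
The ``vals: 1 != 2`` line above comes from a custom comparison hook; a minimal sketch of such a ``conftest.py`` (the ``Foo`` class and ``val`` attribute are assumptions based on the surrounding example) could look like:

.. code-block:: python

    # conftest.py -- sketch of a pytest_assertrepr_compare hook
    class Foo:
        def __init__(self, val):
            self.val = val

        def __eq__(self, other):
            return self.val == other.val


    def pytest_assertrepr_compare(op, left, right):
        # return custom explanation lines for Foo == Foo comparisons
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return ["Comparing Foo instances:", "   vals: {} != {}".format(left.val, right.val)]
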
@@ -160,7 +160,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
 in python < 3.6 this is a pathlib2.Path
 
 
-no tests ran in 0.01s
+no tests ran in 0.00s
 
 You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
 
@@ -75,7 +75,7 @@ If you run this for the first time you will see two failures:
 E Failed: bad luck
 
 test_50.py:7: Failed
-2 failed, 48 passed in 0.16s
+2 failed, 48 passed in 0.08s
 
 If you then run it with ``--lf``:
 
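
For reference, the ``test_50.py`` exercised above is, in the docs' example, a parametrized test that deliberately fails for two parameter values; a minimal sketch:

.. code-block:: python

    # test_50.py -- sketch of the parametrized example behind the output above
    import pytest


    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")
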
@@ -114,7 +114,7 @@ If you then run it with ``--lf``:
 E Failed: bad luck
 
 test_50.py:7: Failed
-===================== 2 failed, 48 deselected in 0.07s =====================
+===================== 2 failed, 48 deselected in 0.02s =====================
 
 You have run only the two failing tests from the last run, while the 48 passing
 tests have not been run ("deselected").
@@ -158,7 +158,7 @@ of ``FF`` and dots):
 E Failed: bad luck
 
 test_50.py:7: Failed
-======================= 2 failed, 48 passed in 0.15s =======================
+======================= 2 failed, 48 passed in 0.07s =======================
 
 .. _`config.cache`:
 
@@ -230,7 +230,7 @@ If you run this command for the first time, you can see the print statement:
 test_caching.py:20: AssertionError
 -------------------------- Captured stdout setup ---------------------------
 running expensive computation...
-1 failed in 0.05s
+1 failed in 0.02s
 
 If you run it a second time, the value will be retrieved from
 the cache and nothing will be printed:
@@ -249,7 +249,7 @@ the cache and nothing will be printed:
 E assert 42 == 23
 
 test_caching.py:20: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
 
 See the :ref:`cache-api` for more details.
 
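
The ``running expensive computation...`` line and the cached ``42`` value above come from a fixture that stores its result in ``request.config.cache``; a minimal sketch (names assumed from the output above):

.. code-block:: python

    # test_caching.py -- sketch of a fixture using config.cache
    import pytest


    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            print("running expensive computation...")
            val = 42
            request.config.cache.set("example/value", val)
        return val


    def test_function(mydata):
        assert mydata == 23
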
@@ -300,7 +300,7 @@ filtering:
 example/value contains:
 42
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 Clearing Cache content
 ----------------------
@@ -91,7 +91,7 @@ of the failing function and hide the other one:
 test_module.py:12: AssertionError
 -------------------------- Captured stdout setup ---------------------------
 setting up <function test_func2 at 0xdeadbeef>
-======================= 1 failed, 1 passed in 0.05s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
 
 Accessing captured output from a test function
 ---------------------------------------------------
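
A short, commonly used way to do this is the ``capsys`` fixture; a minimal sketch:

.. code-block:: python

    def test_output(capsys):
        print("hello")
        captured = capsys.readouterr()
        # captured.out holds everything written to stdout during the test
        assert captured.out == "hello\n"
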
@@ -36,7 +36,7 @@ then you can just invoke ``pytest`` directly:
 
 test_example.txt . [100%]
 
-============================ 1 passed in 0.02s =============================
+============================ 1 passed in 0.01s =============================
 
 By default, pytest will collect ``test*.txt`` files looking for doctest directives, but you
 can pass additional globs using the ``--doctest-glob`` option (multi-allowed).
@@ -66,7 +66,7 @@ and functions, including from test modules:
 mymodule.py . [ 50%]
 test_example.txt . [100%]
 
-============================ 2 passed in 0.03s =============================
+============================ 2 passed in 0.01s =============================
 
 You can make these changes permanent in your project by
 putting them into a pytest.ini file like this:
@@ -69,7 +69,7 @@ Or the inverse, running all tests except the webtest ones:
 test_server.py::test_another PASSED [ 66%]
 test_server.py::TestClass::test_method PASSED [100%]
 
-===================== 3 passed, 1 deselected in 0.02s ======================
+===================== 3 passed, 1 deselected in 0.01s ======================
 
 Selecting tests based on their node ID
 --------------------------------------
@@ -120,7 +120,7 @@ Or select multiple nodes:
 test_server.py::TestClass::test_method PASSED [ 50%]
 test_server.py::test_send_http PASSED [100%]
 
-============================ 2 passed in 0.02s =============================
+============================ 2 passed in 0.01s =============================
 
 .. _node-id:
 
@@ -176,7 +176,7 @@ And you can also run all tests except the ones that match the keyword:
 test_server.py::test_another PASSED [ 66%]
 test_server.py::TestClass::test_method PASSED [100%]
 
-===================== 3 passed, 1 deselected in 0.02s ======================
+===================== 3 passed, 1 deselected in 0.01s ======================
 
 Or to select "http" and "quick" tests:
 
@@ -192,7 +192,7 @@ Or to select "http" and "quick" tests:
 test_server.py::test_send_http PASSED [ 50%]
 test_server.py::test_something_quick PASSED [100%]
 
-===================== 2 passed, 2 deselected in 0.02s ======================
+===================== 2 passed, 2 deselected in 0.01s ======================
 
 .. note::
 
@@ -413,7 +413,7 @@ the test needs:
 
 test_someenv.py s [100%]
 
-============================ 1 skipped in 0.01s ============================
+============================ 1 skipped in 0.00s ============================
 
 and here is one that specifies exactly the environment needed:
 
@@ -499,7 +499,7 @@ The output is as follows:
 $ pytest -q -s
 Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
 .
-1 passed in 0.01s
+1 passed in 0.00s
 
 We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
 
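
A minimal sketch of the two spellings being contrasted above (``hello_world`` is the function shown in the ``Mark`` repr; the rest is assumed):

.. code-block:: python

    import pytest


    def hello_world(*args, **kwargs):
        return "Hello World"


    # my_marker(hello_world) would treat hello_world as the decorated function;
    # with_args makes it a positional argument of the marker instead
    @pytest.mark.my_marker.with_args(hello_world)
    def test_with_args():
        pass
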
@@ -623,7 +623,7 @@ then you will see two tests skipped and two executed tests as expected:
 
 ========================= short test summary info ==========================
 SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
-======================= 2 passed, 2 skipped in 0.02s =======================
+======================= 2 passed, 2 skipped in 0.01s =======================
 
 Note that if you specify a platform via the marker-command line option like this:
 
@@ -711,7 +711,7 @@ We can now use the ``-m option`` to select one set:
 test_module.py:8: in test_interface_complex
 assert 0
 E assert 0
-===================== 2 failed, 2 deselected in 0.07s ======================
+===================== 2 failed, 2 deselected in 0.02s ======================
 
 or to select both "event" and "interface" tests:
 
@@ -739,4 +739,4 @@ or to select both "event" and "interface" tests:
 test_module.py:12: in test_event_simple
 assert 0
 E assert 0
-===================== 3 failed, 1 deselected in 0.07s ======================
+===================== 3 failed, 1 deselected in 0.03s ======================
@@ -41,7 +41,7 @@ now execute the test specification:
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-======================= 1 failed, 1 passed in 0.06s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
 
 .. regendoc:wipe
 
@@ -77,7 +77,7 @@ consulted when reporting in ``verbose`` mode:
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-======================= 1 failed, 1 passed in 0.07s ========================
+======================= 1 failed, 1 passed in 0.02s ========================
 
 .. regendoc:wipe
 
@@ -97,4 +97,4 @@ interesting to just look at the collection tree:
 <YamlItem hello>
 <YamlItem ok>
 
-========================== no tests ran in 0.05s ===========================
+========================== no tests ran in 0.02s ===========================
@@ -73,7 +73,7 @@ let's run the full monty:
 E assert 4 < 4
 
 test_compute.py:4: AssertionError
-1 failed, 4 passed in 0.06s
+1 failed, 4 passed in 0.02s
 
 As expected when running the full range of ``param1`` values
 we'll get an error on the last one.
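
The ``param1`` sweep above is driven, in the docs' example, by a ``conftest.py`` that adds a command line flag and parametrizes the test from it; a rough sketch (the option, fixture and limit values are assumptions based on the output above):

.. code-block:: python

    # conftest.py -- sketch of command line driven parametrization
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true", help="run all combinations")


    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            end = 5 if metafunc.config.getoption("all") else 2
            metafunc.parametrize("param1", range(end))


    # test_compute.py
    def test_compute(param1):
        assert param1 < 4
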
@@ -172,7 +172,7 @@ objects, they are still using the default pytest representation:
 <Function test_timedistance_v3[forward]>
 <Function test_timedistance_v3[backward]>
 
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.01s ===========================
 
 In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
 together with the actual data, instead of listing them separately.
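
A minimal sketch of the ``pytest.param`` style referred to above, with the ``forward`` and ``backward`` IDs taken from the collection output (the argument values are assumptions for illustration):

.. code-block:: python

    from datetime import datetime, timedelta

    import pytest


    @pytest.mark.parametrize(
        "a,b,expected",
        [
            pytest.param(datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1), id="forward"),
            pytest.param(datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1), id="backward"),
        ],
    )
    def test_timedistance_v3(a, b, expected):
        # the id= values become the [forward] / [backward] suffixes shown above
        assert a - b == expected
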
@@ -229,7 +229,7 @@ this is a fully self-contained example which you can run with:
 
 test_scenarios.py .... [100%]
 
-============================ 4 passed in 0.02s =============================
+============================ 4 passed in 0.01s =============================
 
 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
 
@@ -248,7 +248,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
 <Function test_demo1[advanced]>
 <Function test_demo2[advanced]>
 
-========================== no tests ran in 0.02s ===========================
+========================== no tests ran in 0.01s ===========================
 
 Note that we told ``metafunc.parametrize()`` that your scenario values
 should be considered class-scoped. With pytest-2.3 this leads to a
@@ -323,7 +323,7 @@ Let's first see how it looks like at collection time:
 <Function test_db_initialized[d1]>
 <Function test_db_initialized[d2]>
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 And then when we run the test:
 
@@ -343,7 +343,7 @@ And then when we run the test:
 E Failed: deliberately failing for demo purposes
 
 test_backends.py:8: Failed
-1 failed, 1 passed in 0.05s
+1 failed, 1 passed in 0.02s
 
 The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
 
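
A rough sketch of the pattern described above -- a ``db`` fixture whose parameter values are generated during collection and resolved during setup (class names assumed):

.. code-block:: python

    # conftest.py -- sketch of indirect parametrization of a fixture
    import pytest


    class DB1:
        """one database object"""


    class DB2:
        """alternative database object"""


    @pytest.fixture
    def db(request):
        # instantiated at setup time, once per generated parameter
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        raise ValueError("invalid internal test config")


    def pytest_generate_tests(metafunc):
        # generates test_db_initialized[d1] and test_db_initialized[d2] at collection time
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)
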
@@ -394,7 +394,7 @@ The result of this test will be successful:
 <Module test_indirect_list.py>
 <Function test_indirect[a-b]>
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 .. regendoc:wipe
 
@@ -454,7 +454,7 @@ argument sets to use for each test function. Let's run it:
 E assert 1 == 2
 
 test_parametrize.py:21: AssertionError
-1 failed, 2 passed in 0.07s
+1 failed, 2 passed in 0.03s
 
 Indirect parametrization with multiple fixtures
 --------------------------------------------------------------
@@ -475,11 +475,10 @@ Running it results in some skips if we don't have all the python interpreters in
 .. code-block:: pytest
 
 . $ pytest -rs -q multipython.py
-ssssssssssss...ssssssssssss [100%]
+ssssssssssss......sss...... [100%]
 ========================= short test summary info ==========================
-SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
-SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.7' not found
-3 passed, 24 skipped in 0.43s
+SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
+12 passed, 15 skipped in 0.62s
 
 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -548,7 +547,7 @@ If you run this with reporting for skips enabled:
 
 ========================= short test summary info ==========================
 SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:13: could not import 'opt2': No module named 'opt2'
-======================= 1 passed, 1 skipped in 0.02s =======================
+======================= 1 passed, 1 skipped in 0.01s =======================
 
 You'll see that we don't have an ``opt2`` module and thus the second test run
 of our ``test_func1`` was skipped. A few notes:
|
||||||
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
|
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
|
||||||
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
|
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
|
||||||
|
|
||||||
=============== 2 passed, 15 deselected, 1 xfailed in 0.23s ================
|
=============== 2 passed, 15 deselected, 1 xfailed in 0.08s ================
|
||||||
|
|
||||||
As the result:
|
As the result:
|
||||||
|
|
||||||
|
|
|
@@ -221,7 +221,7 @@ You can always peek at the collection tree without running tests like this:
 <Function test_method>
 <Function test_anothermethod>
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 .. _customizing-test-collection:
 
@@ -297,7 +297,7 @@ file will be left out:
 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 0 items
 
-========================== no tests ran in 0.04s ===========================
+========================== no tests ran in 0.01s ===========================
 
 It's also possible to ignore files based on Unix shell-style wildcards by adding
 patterns to ``collect_ignore_glob``.
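
A minimal sketch of the ``conftest.py`` mechanism mentioned above (the ignored file names are assumptions for illustration):

.. code-block:: python

    # conftest.py -- sketch: exclude files from collection
    collect_ignore = ["setup.py"]  # always skip this file
    collect_ignore_glob = ["*_broken.py"]  # Unix shell-style wildcard patterns
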
@@ -650,4 +650,4 @@ Here is a nice run of several failures and how ``pytest`` presents things:
 E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
 
 failure_demo.py:282: AssertionError
-============================ 44 failed in 0.82s ============================
+============================ 44 failed in 0.26s ============================
@@ -65,7 +65,7 @@ Let's run this without supplying our new option:
 test_sample.py:6: AssertionError
 --------------------------- Captured stdout call ---------------------------
 first
-1 failed in 0.06s
+1 failed in 0.02s
 
 And now with supplying a command line option:
 
@@ -89,7 +89,7 @@ And now with supplying a command line option:
 test_sample.py:6: AssertionError
 --------------------------- Captured stdout call ---------------------------
 second
-1 failed in 0.06s
+1 failed in 0.02s
 
 You can see that the command line option arrived in our test. This
 completes the basic pattern. However, one often rather wants to process
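
The pattern the text refers to -- registering an option and exposing it through a fixture -- looks roughly like this (option and fixture names taken from the surrounding example, details assumed):

.. code-block:: python

    # conftest.py -- sketch of passing a command line option into tests
    import pytest


    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1")


    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")


    # test_sample.py
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
        assert 0  # fail on purpose so the captured output above is shown
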
@@ -132,7 +132,7 @@ directory with the above conftest.py:
 rootdir: $REGENDOC_TMPDIR
 collected 0 items
 
-========================== no tests ran in 0.01s ===========================
+========================== no tests ran in 0.00s ===========================
 
 .. _`excontrolskip`:
 
@@ -261,7 +261,7 @@ Let's run our little function:
 E Failed: not configured: 42
 
 test_checkconfig.py:11: Failed
-1 failed in 0.05s
+1 failed in 0.02s
 
 If you only want to hide certain exceptions, you can set ``__tracebackhide__``
 to a callable which gets the ``ExceptionInfo`` object. You can for example use
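
A minimal sketch of the helper whose hidden frame produces the ``Failed: not configured: 42`` line above (the helper name and argument are taken from the output, the body is assumed):

.. code-block:: python

    # test_checkconfig.py -- sketch of hiding a helper from the traceback
    import pytest


    def checkconfig(x):
        __tracebackhide__ = True  # hide this frame in failure tracebacks
        if not hasattr(x, "config"):
            pytest.fail("not configured: {}".format(x))


    def test_something():
        checkconfig(42)
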
@@ -445,9 +445,9 @@ Now we can profile which test functions execute the slowest:
 
 ========================= slowest 3 test durations =========================
 0.30s call test_some_are_slow.py::test_funcslow2
-0.25s call test_some_are_slow.py::test_funcslow1
+0.20s call test_some_are_slow.py::test_funcslow1
 0.10s call test_some_are_slow.py::test_funcfast
-============================ 3 passed in 0.68s =============================
+============================ 3 passed in 0.61s =============================
 
 incremental testing - test steps
 ---------------------------------------------------
@@ -531,7 +531,7 @@ If we run this:
 ========================= short test summary info ==========================
 XFAIL test_step.py::TestUserHandling::test_deletion
 reason: previous test failed (test_modification)
-================== 1 failed, 2 passed, 1 xfailed in 0.07s ==================
+================== 1 failed, 2 passed, 1 xfailed in 0.03s ==================
 
 We'll see that ``test_deletion`` was not executed because ``test_modification``
 failed. It is reported as an "expected failure".
@@ -644,7 +644,7 @@ We can run this:
 E assert 0
 
 a/test_db2.py:2: AssertionError
-============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.10s ==============
+============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.05s ==============
 
 The two test modules in the ``a`` directory see the same ``db`` fixture instance
 while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -733,7 +733,7 @@ and run them:
 E assert 0
 
 test_module.py:6: AssertionError
-============================ 2 failed in 0.07s =============================
+============================ 2 failed in 0.02s =============================
 
 you will have a "failures" file which contains the failing test ids:
 
|
||||||
E assert 0
|
E assert 0
|
||||||
|
|
||||||
test_module.py:19: AssertionError
|
test_module.py:19: AssertionError
|
||||||
======================== 2 failed, 1 error in 0.07s ========================
|
======================== 2 failed, 1 error in 0.02s ========================
|
||||||
|
|
||||||
You'll see that the fixture finalizers could use the precise reporting
|
You'll see that the fixture finalizers could use the precise reporting
|
||||||
information.
|
information.
|
||||||
|
|
|
@@ -81,4 +81,4 @@ If you run this without output capturing:
 .test other
 .test_unit1 method called
 .
-4 passed in 0.02s
+4 passed in 0.01s
@@ -96,7 +96,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
 E assert 0
 
 test_smtpsimple.py:14: AssertionError
-============================ 1 failed in 0.57s =============================
+============================ 1 failed in 0.18s =============================
 
 In the failure traceback we see that the test function was called with a
 ``smtp_connection`` argument, the ``smtplib.SMTP()`` instance created by the fixture
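
The fixture and test behind the output above look, in sketch form, like this (simplified from the docs' smtp example; the deliberate ``assert 0`` matches the failure shown):

.. code-block:: python

    # test_smtpsimple.py -- sketch of a fixture returning an smtplib.SMTP connection
    import smtplib

    import pytest


    @pytest.fixture
    def smtp_connection():
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)


    def test_ehlo(smtp_connection):
        response, msg = smtp_connection.ehlo()
        assert response == 250
        assert 0  # fail deliberately to show the fixture value in the traceback
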
@@ -258,7 +258,7 @@ inspect what is going on and can now run the tests:
 E assert 0
 
 test_module.py:13: AssertionError
-============================ 2 failed in 0.76s =============================
+============================ 2 failed in 0.20s =============================
 
 You see the two ``assert 0`` failing and more importantly you can also see
 that the same (module-scoped) ``smtp_connection`` object was passed into the
@@ -361,7 +361,7 @@ Let's execute it:
 $ pytest -s -q --tb=no
 FFteardown smtp
 
-2 failed in 0.76s
+2 failed in 0.20s
 
 We see that the ``smtp_connection`` instance is finalized after the two
 tests finished execution. Note that if we decorated our fixture
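
The ``teardown smtp`` text in the output above comes from code placed after a ``yield`` in the fixture; a minimal sketch:

.. code-block:: python

    # conftest.py -- sketch of a module-scoped fixture with teardown code
    import smtplib

    import pytest


    @pytest.fixture(scope="module")
    def smtp_connection():
        smtp_connection = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
        yield smtp_connection  # provide the fixture value to the tests
        print("teardown smtp")
        smtp_connection.close()
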
@@ -515,7 +515,7 @@ again, nothing much has changed:
 $ pytest -s -q --tb=no
 FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
 
-2 failed in 0.76s
+2 failed in 0.21s
 
 Let's quickly create another test module that actually sets the
 server URL in its module namespace:
@@ -692,7 +692,7 @@ So let's just do another run:
 test_module.py:13: AssertionError
 ------------------------- Captured stdout teardown -------------------------
 finalizing <smtplib.SMTP object at 0xdeadbeef>
-4 failed in 1.77s
+4 failed in 0.89s
 
 We see that our two test functions each ran twice, against the different
 ``smtp_connection`` instances. Note also, that with the ``mail.python.org``
@@ -771,7 +771,7 @@ Running the above tests results in the following test IDs being used:
 <Function test_ehlo[mail.python.org]>
 <Function test_noop[mail.python.org]>
 
-========================== no tests ran in 0.04s ===========================
+========================== no tests ran in 0.01s ===========================
 
 .. _`fixture-parametrize-marks`:
 
@@ -861,7 +861,7 @@ Here we declare an ``app`` fixture which receives the previously defined
 test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
 test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%]
 
-============================ 2 passed in 0.79s =============================
+============================ 2 passed in 0.44s =============================
 
 Due to the parametrization of ``smtp_connection``, the test will run twice with two
 different ``App`` instances and respective smtp servers. There is no
@@ -971,7 +971,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
 TEARDOWN modarg mod2
 
 
-============================ 8 passed in 0.02s =============================
+============================ 8 passed in 0.01s =============================
 
 You can see that the parametrized module-scoped ``modarg`` resource caused an
 ordering of test execution that lead to the fewest possible "active" resources.
@@ -1043,7 +1043,7 @@ to verify our fixture is activated and the tests pass:
 
 $ pytest -q
 .. [100%]
-2 passed in 0.02s
+2 passed in 0.01s
 
 You can specify multiple fixtures like this:
 
@@ -1151,7 +1151,7 @@ If we run it, we get two passing tests:
 
 $ pytest -q
 .. [100%]
-2 passed in 0.02s
+2 passed in 0.01s
 
 Here is how autouse fixtures work in other scopes:
 
@@ -69,7 +69,7 @@ That’s it. You can now execute the test function:
 E + where 4 = func(3)
 
 test_sample.py:6: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
 
 This test returns a failure report because ``func(3)`` does not return ``5``.
 
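
The getting-started test behind this output is, in sketch form:

.. code-block:: python

    # test_sample.py
    def func(x):
        return x + 1


    def test_answer():
        assert func(3) == 5
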
@@ -108,7 +108,7 @@ Execute the test function with “quiet” reporting mode:
 
 $ pytest -q test_sysexit.py
 . [100%]
-1 passed in 0.01s
+1 passed in 0.00s
 
 Group multiple tests in a class
 --------------------------------------------------------------
@@ -145,7 +145,7 @@ Once you develop multiple tests, you may want to group them into a class. pytest
 E + where False = hasattr('hello', 'check')
 
 test_class.py:8: AssertionError
-1 failed, 1 passed in 0.05s
+1 failed, 1 passed in 0.02s
 
 The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you understand the reason for the failure.
 
@@ -180,7 +180,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
 test_tmpdir.py:3: AssertionError
 --------------------------- Captured stdout call ---------------------------
 PYTEST_TMPDIR/test_needsfiles0
-1 failed in 0.05s
+1 failed in 0.02s
 
 More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.
 
@@ -44,7 +44,7 @@ To execute it:
 E + where 4 = inc(3)
 
 test_sample.py:6: AssertionError
-============================ 1 failed in 0.06s =============================
+============================ 1 failed in 0.02s =============================
 
 Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
 See :ref:`Getting Started <getstarted>` for more examples.
@@ -75,7 +75,7 @@ them in turn:
 E + where 54 = eval('6*9')
 
 test_expectation.py:6: AssertionError
-======================= 1 failed, 2 passed in 0.05s ========================
+======================= 1 failed, 2 passed in 0.02s ========================
 
 .. note::
 
@@ -128,7 +128,7 @@ Let's run this:
 
 test_expectation.py ..x [100%]
 
-======================= 2 passed, 1 xfailed in 0.06s =======================
+======================= 2 passed, 1 xfailed in 0.02s =======================
 
 The one parameter set which caused a failure previously now
 shows up as an "xfailed (expected to fail)" test.
@@ -225,7 +225,7 @@ Let's also run with a stringinput that will lead to a failing test:
 E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
 
 test_strings.py:4: AssertionError
-1 failed in 0.05s
+1 failed in 0.02s
 
 As expected our test function fails.
 
@@ -239,7 +239,7 @@ list:
 s [100%]
 ========================= short test summary info ==========================
 SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:2
-1 skipped in 0.01s
+1 skipped in 0.00s
 
 Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
 those sets cannot be duplicated, otherwise an error will be raised.
@@ -371,7 +371,7 @@ Running it with the report-on-xfail option gives this output:
 XFAIL xfail_demo.py::test_hello6
 reason: reason
 XFAIL xfail_demo.py::test_hello7
-============================ 7 xfailed in 0.17s ============================
+============================ 7 xfailed in 0.05s ============================
 
 .. _`skip/xfail with parametrize`:
 
@@ -64,7 +64,7 @@ Running this would result in a passed test except for the last
 E assert 0
 
 test_tmp_path.py:13: AssertionError
-============================ 1 failed in 0.06s =============================
+============================ 1 failed in 0.02s =============================
 
 .. _`tmp_path_factory example`:
 
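
A rough sketch of the ``tmp_path`` test behind this output (file names and content are assumptions; the final ``assert 0`` exists only to show the failure report):

.. code-block:: python

    # test_tmp_path.py -- sketch of using the tmp_path fixture (a pathlib.Path)
    CONTENT = "content"


    def test_create_file(tmp_path):
        d = tmp_path / "sub"
        d.mkdir()
        p = d / "hello.txt"
        p.write_text(CONTENT)
        assert p.read_text() == CONTENT
        assert len(list(tmp_path.iterdir())) == 1
        assert 0  # deliberate failure so the report above is produced
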
@@ -133,7 +133,7 @@ Running this would result in a passed test except for the last
 E assert 0
 
 test_tmpdir.py:9: AssertionError
-============================ 1 failed in 0.05s =============================
+============================ 1 failed in 0.02s =============================
 
 .. _`tmpdir factory example`:
 
@@ -166,7 +166,7 @@ the ``self.db`` values in the traceback:
 E assert 0
 
 test_unittest_db.py:13: AssertionError
-============================ 2 failed in 0.07s =============================
+============================ 2 failed in 0.02s =============================
 
 This default pytest traceback shows that the two test methods
 share the same ``self.db`` instance which was our intention
|
||||||
|
|
||||||
$ pytest -q test_unittest_cleandir.py
|
$ pytest -q test_unittest_cleandir.py
|
||||||
. [100%]
|
. [100%]
|
||||||
1 passed in 0.02s
|
1 passed in 0.01s
|
||||||
|
|
||||||
... gives us one passed test because the ``initdir`` fixture function
|
... gives us one passed test because the ``initdir`` fixture function
|
||||||
was executed ahead of the ``test_method``.
|
was executed ahead of the ``test_method``.
|
||||||
|
|
|
@@ -247,7 +247,7 @@ Example:
 XPASS test_example.py::test_xpass always xfail
 ERROR test_example.py::test_error - assert 0
 FAILED test_example.py::test_fail - assert 0
-== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08s ===
+== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
 
 The ``-r`` options accepts a number of characters after it, with ``a`` used
 above meaning "all except passes".
|
||||||
========================= short test summary info ==========================
|
========================= short test summary info ==========================
|
||||||
FAILED test_example.py::test_fail - assert 0
|
FAILED test_example.py::test_fail - assert 0
|
||||||
SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
|
SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
|
||||||
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08s ===
|
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
|
||||||
|
|
||||||
Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
|
Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
|
||||||
captured output:
|
captured output:
|
||||||
|
@ -336,7 +336,7 @@ captured output:
|
||||||
ok
|
ok
|
||||||
========================= short test summary info ==========================
|
========================= short test summary info ==========================
|
||||||
PASSED test_example.py::test_ok
|
PASSED test_example.py::test_ok
|
||||||
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08s ===
|
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.03s ===
|
||||||
|
|
||||||
.. _pdb-option:
|
.. _pdb-option:
|
||||||
|
|
||||||
|
|
|
@@ -41,7 +41,7 @@ Running pytest now produces this output:
 warnings.warn(UserWarning("api v1, should use functions from v2"))
 
 -- Docs: https://docs.pytest.org/en/latest/warnings.html
-====================== 1 passed, 1 warnings in 0.01s =======================
+====================== 1 passed, 1 warnings in 0.00s =======================
 
 The ``-W`` flag can be passed to control which warnings will be displayed or even turn
 them into errors:
|
||||||
E UserWarning: api v1, should use functions from v2
|
E UserWarning: api v1, should use functions from v2
|
||||||
|
|
||||||
test_show_warnings.py:5: UserWarning
|
test_show_warnings.py:5: UserWarning
|
||||||
1 failed in 0.05s
|
1 failed in 0.02s
|
||||||
|
|
||||||
The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option.
|
The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option.
|
||||||
For example, the configuration below will ignore all user warnings, but will transform
|
For example, the configuration below will ignore all user warnings, but will transform
|
||||||
|
@ -407,7 +407,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
|
||||||
class Test:
|
class Test:
|
||||||
|
|
||||||
-- Docs: https://docs.pytest.org/en/latest/warnings.html
|
-- Docs: https://docs.pytest.org/en/latest/warnings.html
|
||||||
1 warnings in 0.01s
|
1 warnings in 0.00s
|
||||||
|
|
||||||
These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.
|
These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.
|
||||||
|
|
||||||
|
|
|
@@ -442,7 +442,7 @@ additionally it is possible to copy examples for an example folder before runnin
 testdir.copy_example("test_example.py")
 
 -- Docs: https://docs.pytest.org/en/latest/warnings.html
-====================== 2 passed, 1 warnings in 0.28s =======================
+====================== 2 passed, 1 warnings in 0.12s =======================
 
 For more information about the result object that ``runpytest()`` returns, and
 the methods that it provides please check out the :py:class:`RunResult
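
The ``testdir`` fixture used above comes from the ``pytester`` plugin; a minimal sketch of a plugin test built on it (the file contents are assumptions):

.. code-block:: python

    # test_myplugin.py -- sketch of testing pytest machinery with testdir
    pytest_plugins = ["pytester"]  # enables the testdir fixture (often set in conftest.py)


    def test_plugin_runs(testdir):
        testdir.makepyfile(
            """
            def test_ok():
                assert True
            """
        )
        result = testdir.runpytest()
        result.assert_outcomes(passed=1)
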