Merge pull request #4587 from nicoddemus/merge-master-into-features

Merge master into features
Anthony Sottile 2019-01-04 09:57:08 -08:00 committed by GitHub
commit 56aecfc081
9 changed files with 77 additions and 23 deletions

@@ -6,6 +6,7 @@ Contributors include::
Aaron Coleman
Abdeali JK
Abhijeet Kasurde
+Adam Johnson
Ahn Ki-Wook
Alan Velasco
Alexander Johnson

changelog/4557.doc.rst (new file)

@@ -0,0 +1 @@
+Markers example documentation page updated to support the latest pytest version.

changelog/4558.doc.rst (new file)

@@ -0,0 +1 @@
+Update cache documentation example to correctly show cache hit and miss.

changelog/4580.doc.rst (new file)

@@ -0,0 +1 @@
+Improved detailed summary report documentation.

@@ -185,11 +185,14 @@ across pytest invocations::

    import pytest
-    import time

+    def expensive_computation():
+        print("running expensive computation...")

    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
-            time.sleep(9*0.6) # expensive computation :)
+            expensive_computation()
            val = 42
            request.config.cache.set("example/value", val)
        return val
@@ -197,8 +200,7 @@ across pytest invocations::
    def test_function(mydata):
        assert mydata == 23

-If you run this command once, it will take a while because
-of the sleep:
+If you run this command for the first time, you can see the print statement:
.. code-block:: pytest
@@ -217,7 +219,7 @@ of the sleep:
    1 failed in 0.12 seconds

If you run it a second time the value will be retrieved from
-the cache and this will be quick:
+the cache and nothing will be printed:
.. code-block:: pytest

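The cache API used by this example is small: ``request.config.cache.get(key, default)`` returns the previously stored value, or the default on a miss, and ``request.config.cache.set(key, value)`` persists any JSON-serializable value across sessions. A minimal sketch of the pattern (the fixture name and cache key below are illustrative, not part of this diff):

.. code-block:: python

    import pytest


    @pytest.fixture
    def run_count(request):
        # get() returns the cached value, or the default (0) on a cache miss,
        # e.g. on the very first run or after `pytest --cache-clear`
        count = request.config.cache.get("example/run-count", 0)
        # set() stores a JSON-serializable value in .pytest_cache for later runs
        request.config.cache.set("example/run-count", count + 1)
        return count

Stored values can be inspected with ``pytest --cache-show`` and removed with ``pytest --cache-clear``.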
@@ -308,7 +308,7 @@ apply a marker to an individual test instance::

    @pytest.mark.foo
    @pytest.mark.parametrize(("n", "expected"), [
        (1, 2),
-        pytest.mark.bar((1, 3)),
+        pytest.param((1, 3), marks=pytest.mark.bar),
        (2, 3),
    ])
    def test_increment(n, expected):
@@ -318,15 +318,6 @@ In this example the mark "foo" will apply to each of the three
tests, whereas the "bar" mark is only applied to the second test.

Skip and xfail marks can also be applied in this way, see :ref:`skip/xfail with parametrize`.

-.. note::
-
-    If the data you are parametrizing happen to be single callables, you need to be careful
-    when marking these items. ``pytest.mark.xfail(my_func)`` won't work because it's also the
-    signature of a function being decorated. To resolve this ambiguity, you need to pass a
-    reason argument:
-    ``pytest.mark.xfail(func_bar, reason="Issue#7")``.
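The ``pytest.param`` syntax introduced in this hunk is also how skip and xfail marks attach to individual parametrized cases; a minimal sketch (values and reasons are illustrative):

.. code-block:: python

    import pytest


    @pytest.mark.parametrize(
        ("n", "expected"),
        [
            (1, 2),
            pytest.param(0, 2, marks=pytest.mark.xfail(reason="known bug")),
            pytest.param(5, 6, marks=pytest.mark.skip(reason="not relevant")),
        ],
    )
    def test_increment(n, expected):
        # the xfail case runs and its failure is reported as expected;
        # the skip case is never executed
        assert n + 1 == expected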
.. _`adding a custom marker from a plugin`:
Custom marker and command line option to control test runs

@@ -804,7 +804,7 @@ different ``App`` instances and respective smtp servers. There is no
need for the ``app`` fixture to be aware of the ``smtp_connection``
parametrization because pytest will fully analyse the fixture dependency graph.

-Note, that the ``app`` fixture has a scope of ``module`` and uses a
+Note that the ``app`` fixture has a scope of ``module`` and uses a
module-scoped ``smtp_connection`` fixture. The example would still work if
``smtp_connection`` was cached on a ``session`` scope: it is fine for fixtures to use
"broader" scoped fixtures but not the other way round:

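A minimal sketch of that rule (the fixture bodies are illustrative stand-ins, not from this diff):

.. code-block:: python

    import pytest


    @pytest.fixture(scope="session")
    def smtp_connection():
        return "connection"  # stand-in for a real SMTP connection


    @pytest.fixture(scope="module")
    def app(smtp_connection):
        # fine: "module" is narrower than "session", so the broader fixture
        # outlives this one; the reverse dependency would fail with a
        # ScopeMismatch error at setup
        return {"smtp": smtp_connection}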
@@ -7,9 +7,6 @@ Installation and Getting Started

**PyPI package name**: `pytest <https://pypi.org/project/pytest/>`_

-**Dependencies**: `py <https://pypi.org/project/py/>`_,
-`colorama (Windows) <https://pypi.org/project/colorama/>`_,

**Documentation as PDF**: `download latest <https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_
``pytest`` is a framework that makes building simple and scalable tests easy. Tests are expressive and readable—no boilerplate code required. Get started in minutes with a small unit test or complex functional test for your application or library.
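The kind of small test that paragraph refers to is the classic sample from this page, reproduced here as a sketch (the deliberate failure is what demonstrates pytest's assertion introspection):

.. code-block:: python

    # content of test_sample.py
    def inc(x):
        return x + 1


    def test_answer():
        # run with `pytest`; the failure report shows the intermediate
        # value inc(3) == 4 against the expected 5
        assert inc(3) == 5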

@@ -147,7 +147,7 @@ Detailed summary report

.. versionadded:: 2.9

-The ``-r`` flag can be used to display test results summary at the end of the test session,
+The ``-r`` flag can be used to display a "short test summary info" at the end of the test session,
making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.
Example:
@@ -158,9 +158,34 @@ Example:

    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
-    collected 0 items
-
-    ======================= no tests ran in 0.12 seconds =======================
+    collected 7 items
+
+    test_examples.py ..FEsxX                                             [100%]
+
+    ==================================== ERRORS ====================================
+    _________________________ ERROR at setup of test_error _________________________
+    file /Users/chainz/tmp/pytestratest/test_examples.py, line 17
+      def test_error(unknown_fixture):
+    E       fixture 'unknown_fixture' not found
+    >       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
+    >       use 'pytest --fixtures [testpath]' for help on them.
+    /Users/chainz/tmp/pytestratest/test_examples.py:17
+
+    =================================== FAILURES ===================================
+    __________________________________ test_fail ___________________________________
+
+        def test_fail():
+    >       assert 0
+    E       assert 0
+
+    test_examples.py:14: AssertionError
+    =========================== short test summary info ============================
+    FAIL test_examples.py::test_fail
+    ERROR test_examples.py::test_error
+    SKIP [1] test_examples.py:21: Example
+    XFAIL test_examples.py::test_xfail
+    XPASS test_examples.py::test_xpass
+    = 1 failed, 2 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.07 seconds =
The ``-r`` option accepts a number of characters after it, with ``a`` used above meaning "all except passes".
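The same characters work when pytest is driven programmatically; a brief sketch using the public ``pytest.main`` entry point:

.. code-block:: python

    import pytest

    # equivalent to running `pytest -ra` in a shell; returns an exit code
    # instead of raising SystemExit
    exit_code = pytest.main(["-ra"])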
@@ -183,9 +208,44 @@ More than one character can be used, so for example to only see failed and skipp

    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
-    collected 0 items
-
-    ======================= no tests ran in 0.12 seconds =======================
+    collected 2 items
+
+    test_examples.py Fs                                                  [100%]
+
+    =================================== FAILURES ===================================
+    __________________________________ test_fail ___________________________________
+
+        def test_fail():
+    >       assert 0
+    E       assert 0
+
+    test_examples.py:14: AssertionError
+    =========================== short test summary info ============================
+    FAIL test_examples.py::test_fail
+    SKIP [1] test_examples.py:21: Example
+    ===================== 1 failed, 1 skipped in 0.09 seconds ======================
+Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
+captured output:
+
+.. code-block:: pytest
+
+    $ pytest -rpP
+    =========================== test session starts ============================
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
+    rootdir: $REGENDOC_TMPDIR, inifile:
+    collected 2 items
+
+    test_examples.py ..                                                  [100%]
+
+    =========================== short test summary info ============================
+    PASSED test_examples.py::test_pass
+    PASSED test_examples.py::test_pass_with_output
+    ==================================== PASSES ====================================
+    ____________________________ test_pass_with_output _____________________________
+    ----------------------------- Captured stdout call -----------------------------
+    Passing test
+    =========================== 2 passed in 0.04 seconds ===========================
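For reference, a ``test_examples.py`` along the following lines reproduces the seven outcomes shown in the first run above (reconstructed from the output; exact line numbers will differ):

.. code-block:: python

    import pytest


    def test_pass():
        pass


    def test_pass_with_output():
        print("Passing test")


    def test_fail():
        assert 0


    def test_error(unknown_fixture):
        pass


    @pytest.mark.skip(reason="Example")
    def test_skip():
        pass


    @pytest.mark.xfail
    def test_xfail():
        assert 0


    @pytest.mark.xfail
    def test_xpass():
        pass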
.. _pdb-option: