Merge remote-tracking branch 'origin/features' into short-summary-message

Conflicts:
	src/_pytest/skipping.py
Author: Daniel Hahler
Date: 2019-04-17 15:30:19 +02:00
Commit: df1d1105b0
74 changed files with 1243 additions and 655 deletions

View File

@@ -1,16 +1,16 @@
 exclude: doc/en/example/py2py3/test_py2.py
 repos:
 -   repo: https://github.com/ambv/black
-    rev: 18.9b0
+    rev: 19.3b0
     hooks:
     -   id: black
         args: [--safe, --quiet]
         language_version: python3
 -   repo: https://github.com/asottile/blacken-docs
-    rev: v0.3.0
+    rev: v0.5.0
     hooks:
     -   id: blacken-docs
-        additional_dependencies: [black==18.9b0]
+        additional_dependencies: [black==19.3b0]
         language_version: python3
 -   repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v2.1.0
@@ -22,22 +22,22 @@ repos:
         exclude: _pytest/debugging.py
         language_version: python3
 -   repo: https://gitlab.com/pycqa/flake8
-    rev: 3.7.0
+    rev: 3.7.7
     hooks:
     -   id: flake8
         language_version: python3
 -   repo: https://github.com/asottile/reorder_python_imports
-    rev: v1.3.5
+    rev: v1.4.0
     hooks:
     -   id: reorder-python-imports
         args: ['--application-directories=.:src']
 -   repo: https://github.com/asottile/pyupgrade
-    rev: v1.11.1
+    rev: v1.15.0
     hooks:
     -   id: pyupgrade
         args: [--keep-percent-format]
 -   repo: https://github.com/pre-commit/pygrep-hooks
-    rev: v1.2.0
+    rev: v1.3.0
     hooks:
     -   id: rst-backticks
 -   repo: local

View File

@@ -208,6 +208,7 @@ Ross Lawley
 Russel Winder
 Ryan Wooden
 Samuel Dion-Girardeau
+Samuel Searles-Bryant
 Samuele Pedroni
 Sankt Petersbug
 Segev Finer

View File

@@ -18,6 +18,24 @@ with advance notice in the **Deprecations** section of releases.
 
 .. towncrier release notes start
 
+pytest 4.4.1 (2019-04-15)
+=========================
+
+Bug Fixes
+---------
+
+- `#5031 <https://github.com/pytest-dev/pytest/issues/5031>`_: Environment variables are properly restored when using pytester's ``testdir`` fixture.
+
+- `#5039 <https://github.com/pytest-dev/pytest/issues/5039>`_: Fix regression with ``--pdbcls``, which stopped working with local modules in 4.0.0.
+
+- `#5092 <https://github.com/pytest-dev/pytest/issues/5092>`_: Produce a warning when unknown keywords are passed to ``pytest.param(...)``.
+
+- `#5098 <https://github.com/pytest-dev/pytest/issues/5098>`_: Invalidate import caches with ``monkeypatch.syspath_prepend``, which is required with namespace packages being used.
+
+
 pytest 4.4.0 (2019-03-29)
 =========================

View File

@@ -16,4 +16,4 @@ run = 'fc("/d")'
 
 if __name__ == "__main__":
     print(timeit.timeit(run, setup=setup % imports[0], number=count))
-    print((timeit.timeit(run, setup=setup % imports[1], number=count)))
+    print(timeit.timeit(run, setup=setup % imports[1], number=count))

View File

@@ -0,0 +1 @@
+Show XFail reason as part of JUnitXML message field.
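
For illustration, a minimal sketch of a test whose xfail reason would end up in the JUnit XML ``message`` field; the file name and reason string are hypothetical:

.. code-block:: python

    # content of test_xfail_reason.py (hypothetical)
    import pytest


    @pytest.mark.xfail(reason="known flaky on CI")
    def test_will_fail():
        assert False

Running ``pytest --junitxml=report.xml`` should then carry the reason text in the ``message`` attribute of the generated ``<skipped>`` element for this test.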

View File

@@ -0,0 +1 @@
+Assertion failure messages for sequences and dicts contain the number of different items now.
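
As a rough sketch of what this refers to, a failing comparison like the one below should now also report how many items differ in the assertion message (the exact wording may vary between versions):

.. code-block:: python

    # hypothetical example
    def test_list_comparison():
        assert [1, 2, 3] == [1, 2, 4]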

View File

@@ -1 +0,0 @@
-Environment variables are properly restored when using pytester's ``testdir`` fixture.

View File

@@ -0,0 +1 @@
+The ``--cache-show`` option/action accepts an optional glob to show only matching cache entries.

View File

@@ -0,0 +1 @@
+Standard input (stdin) can be given to pytester's ``Testdir.run()`` and ``Testdir.popen()``.
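
A minimal usage sketch, assuming the pytester ``testdir`` fixture and a Unix ``cat`` binary on ``$PATH``; treat the exact accepted types for ``stdin`` as an assumption and check the pytester docs:

.. code-block:: python

    # hypothetical example: feed bytes to the child process's stdin
    def test_run_with_stdin(testdir):
        result = testdir.run("cat", stdin=b"hello")
        assert result.stdout.str() == "hello"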

View File

@@ -0,0 +1 @@
+pytester's ``Testdir.popen()`` uses ``stdout`` and ``stderr`` via keyword arguments with defaults now (``subprocess.PIPE``).
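
A sketch of what the keyword defaults make possible, e.g. merging stderr into stdout while keeping the ``stdout=subprocess.PIPE`` default; the echoed command is a placeholder:

.. code-block:: python

    import subprocess

    # hypothetical example: override only the stderr default
    def test_popen_merged_streams(testdir):
        proc = testdir.popen(["echo", "hi"], stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        assert b"hi" in out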

View File

@@ -0,0 +1 @@
+The ``-r`` option learnt about ``A`` to display all reports (including passed ones) in the short test summary.
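
For illustration, one way to exercise the new flag from Python; this is a sketch, equivalent to running ``pytest -rA`` on the command line:

.. code-block:: python

    import pytest

    # "A" in -r selects short-summary lines for all outcomes, passed tests included
    raise SystemExit(pytest.main(["-rA"]))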

View File

@@ -0,0 +1 @@
+The code for the short test summary in the terminal was moved to the terminal plugin.

View File

@@ -0,0 +1 @@
+Improved validation of kwargs for various methods in the pytester plugin.

View File

@@ -0,0 +1 @@
+The short test summary is displayed after passes with output (``-rP``).

View File

@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2
 
+   release-4.4.1
    release-4.4.0
    release-4.3.1
    release-4.3.0

View File

@@ -0,0 +1,20 @@
+pytest-4.4.1
+=======================================
+
+pytest 4.4.1 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Anthony Sottile
+* Bruno Oliveira
+* Daniel Hahler
+
+
+Happy testing,
+The pytest Development Team

View File

@@ -12,12 +12,15 @@ Asserting with the ``assert`` statement
 
 ``pytest`` allows you to use the standard python ``assert`` for verifying
 expectations and values in Python tests. For example, you can write the
-following::
+following:
+
+.. code-block:: python
 
     # content of test_assert1.py
    def f():
        return 3
 
+
    def test_function():
        assert f() == 4
@@ -30,7 +33,7 @@ you will see the return value of the function call:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 1 item
 
 test_assert1.py F                                                    [100%]
@@ -43,7 +46,7 @@ you will see the return value of the function call:
 E       assert 3 == 4
 E        +  where 3 = f()
 
-test_assert1.py:5: AssertionError
+test_assert1.py:6: AssertionError
 ========================= 1 failed in 0.12 seconds =========================
 
 ``pytest`` has support for showing the values of the most common subexpressions
@@ -52,7 +55,9 @@ operators. (See :ref:`tbreportdemo`). This allows you to use the
 idiomatic python constructs without boilerplate code while not losing
 introspection information.
 
-However, if you specify a message with the assertion like this::
+However, if you specify a message with the assertion like this:
+
+.. code-block:: python
 
     assert a % 2 == 0, "value was odd, should be even"
@@ -67,22 +72,29 @@ Assertions about expected exceptions
 ------------------------------------------
 
 In order to write assertions about raised exceptions, you can use
-``pytest.raises`` as a context manager like this::
+``pytest.raises`` as a context manager like this:
+
+.. code-block:: python
 
     import pytest
 
+
    def test_zero_division():
        with pytest.raises(ZeroDivisionError):
            1 / 0
 
-and if you need to have access to the actual exception info you may use::
+and if you need to have access to the actual exception info you may use:
+
+.. code-block:: python
 
    def test_recursion_depth():
        with pytest.raises(RuntimeError) as excinfo:
+
            def f():
                f()
+
            f()
-        assert 'maximum recursion' in str(excinfo.value)
+        assert "maximum recursion" in str(excinfo.value)
 
 ``excinfo`` is a ``ExceptionInfo`` instance, which is a wrapper around
 the actual exception raised. The main attributes of interest are
@@ -90,15 +102,19 @@ the actual exception raised. The main attributes of interest are
 
 You can pass a ``match`` keyword parameter to the context-manager to test
 that a regular expression matches on the string representation of an exception
-(similar to the ``TestCase.assertRaisesRegexp`` method from ``unittest``)::
+(similar to the ``TestCase.assertRaisesRegexp`` method from ``unittest``):
+
+.. code-block:: python
 
     import pytest
 
+
    def myfunc():
        raise ValueError("Exception 123 raised")
 
+
    def test_match():
-        with pytest.raises(ValueError, match=r'.* 123 .*'):
+        with pytest.raises(ValueError, match=r".* 123 .*"):
            myfunc()
 
 The regexp parameter of the ``match`` method is matched with the ``re.search``
@@ -107,7 +123,9 @@ well.
 
 There's an alternate form of the ``pytest.raises`` function where you pass
 a function that will be executed with the given ``*args`` and ``**kwargs`` and
-assert that the given exception is raised::
+assert that the given exception is raised:
+
+.. code-block:: python
 
     pytest.raises(ExpectedException, func, *args, **kwargs)
@@ -116,7 +134,9 @@ exception* or *wrong exception*.
 
 Note that it is also possible to specify a "raises" argument to
 ``pytest.mark.xfail``, which checks that the test is failing in a more
-specific way than just having any exception raised::
+specific way than just having any exception raised:
+
+.. code-block:: python
 
     @pytest.mark.xfail(raises=IndexError)
    def test_f():
@@ -148,10 +168,13 @@ Making use of context-sensitive comparisons
 
 .. versionadded:: 2.0
 
 ``pytest`` has rich support for providing context-sensitive information
-when it encounters comparisons. For example::
+when it encounters comparisons. For example:
+
+.. code-block:: python
 
     # content of test_assert2.py
+
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
@@ -165,7 +188,7 @@ if you run this module:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 1 item
 
 test_assert2.py F                                                    [100%]
@@ -184,7 +207,7 @@ if you run this module:
 E         '5'
 E         Use -v to get the full diff
 
-test_assert2.py:5: AssertionError
+test_assert2.py:6: AssertionError
 ========================= 1 failed in 0.12 seconds =========================
 
 Special comparisons are done for a number of cases:
@@ -205,16 +228,21 @@ the ``pytest_assertrepr_compare`` hook.
    :noindex:
 
 As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
-file which provides an alternative explanation for ``Foo`` objects::
+file which provides an alternative explanation for ``Foo`` objects:
+
+.. code-block:: python
 
     # content of conftest.py
    from test_foocompare import Foo
 
+
    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
-            return ['Comparing Foo instances:',
-                    '   vals: %s != %s' % (left.val, right.val)]
+            return ["Comparing Foo instances:", "   vals: %s != %s" % (left.val, right.val)]
 
-now, given this test module::
+now, given this test module:
+
+.. code-block:: python
 
     # content of test_foocompare.py
    class Foo(object):
@@ -224,6 +252,7 @@ now, given this test module::
        def __eq__(self, other):
            return self.val == other.val
 
+
    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
@@ -246,7 +275,7 @@ the conftest file:
 E       assert Comparing Foo instances:
 E            vals: 1 != 2
 
-test_foocompare.py:11: AssertionError
+test_foocompare.py:12: AssertionError
 1 failed in 0.12 seconds
 
 .. _assert-details:

View File

@@ -82,7 +82,7 @@ If you then run it with ``--lf``:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 50 items / 48 deselected / 2 selected
 run-last-failure: rerun previous 2 failures
@@ -126,7 +126,7 @@ of ``FF`` and dots):
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 50 items
 run-last-failure: rerun previous 2 failures first
@@ -247,7 +247,7 @@ See the :ref:`cache-api` for more details.
 
 Inspecting Cache content
--------------------------------
+------------------------
 
 You can always peek at the content of the cache using the
 ``--cache-show`` command line option:
@@ -258,9 +258,9 @@ You can always peek at the content of the cache using the
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 cachedir: $PYTHON_PREFIX/.pytest_cache
------------------------------- cache values -------------------------------
+--------------------------- cache values for '*' ---------------------------
 cache/lastfailed contains:
   {'test_50.py::test_num[17]': True,
    'test_50.py::test_num[25]': True,
@@ -277,8 +277,25 @@ You can always peek at the content of the cache using the
 ======================= no tests ran in 0.12 seconds =======================
 
+``--cache-show`` takes an optional argument to specify a glob pattern for
+filtering:
+
+.. code-block:: pytest
+
+    $ pytest --cache-show example/*
+    =========================== test session starts ============================
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
+    cachedir: $PYTHON_PREFIX/.pytest_cache
+    rootdir: $REGENDOC_TMPDIR, inifile:
+    cachedir: $PYTHON_PREFIX/.pytest_cache
+    ----------------------- cache values for 'example/*' -----------------------
+    example/value contains:
+      42
+
+    ======================= no tests ran in 0.12 seconds =======================
+
 Clearing Cache content
--------------------------------
+----------------------
 
 You can instruct pytest to clear all cache files and values
 by adding the ``--cache-clear`` option like this:

View File

@@ -71,7 +71,7 @@ of the failing function and hide the other one:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 2 items
 
 test_module.py .F                                                    [100%]

View File

@@ -72,7 +72,7 @@ then you can just invoke ``pytest`` without command line options:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project, inifile: pytest.ini
+rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 1 item
 
 mymodule.py .                                                        [100%]

View File

@@ -9,18 +9,28 @@ Here are some example using the :ref:`mark` mechanism.
 Marking test functions and selecting them for a run
 ----------------------------------------------------
 
-You can "mark" a test function with custom metadata like this::
+You can "mark" a test function with custom metadata like this:
+
+.. code-block:: python
 
     # content of test_server.py
 
    import pytest
+
+
    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app
+
+
    def test_something_quick():
        pass
+
+
    def test_another():
        pass
+
+
    class TestClass(object):
        def test_method(self):
            pass
@@ -35,7 +45,7 @@ You can then restrict a test run to only run tests marked with ``webtest``:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 4 items / 3 deselected / 1 selected
 
 test_server.py::test_send_http PASSED                                [100%]
@@ -50,7 +60,7 @@ Or the inverse, running all tests except the webtest ones:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 4 items / 1 deselected / 3 selected
 
 test_server.py::test_something_quick PASSED                          [ 33%]
@@ -72,7 +82,7 @@ tests based on their module, class, method, or function name:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 1 item
 
 test_server.py::TestClass::test_method PASSED                        [100%]
@@ -87,7 +97,7 @@ You can also select on the class:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 1 item
 
 test_server.py::TestClass::test_method PASSED                        [100%]
@@ -102,7 +112,7 @@ Or select multiple nodes:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 2 items
 
 test_server.py::TestClass::test_method PASSED                        [ 50%]
@@ -142,7 +152,7 @@ select tests based on their names:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 4 items / 3 deselected / 1 selected
 
 test_server.py::test_send_http PASSED                                [100%]
@@ -157,7 +167,7 @@ And you can also run all tests except the ones that match the keyword:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 4 items / 1 deselected / 3 selected
 
 test_server.py::test_something_quick PASSED                          [ 33%]
@@ -174,7 +184,7 @@ Or to select "http" and "quick" tests:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 4 items / 2 deselected / 2 selected
 
 test_server.py::test_send_http PASSED                                [ 50%]
@@ -257,14 +267,19 @@ Marking whole classes or modules
 ----------------------------------------------------
 
 You may use ``pytest.mark`` decorators with classes to apply markers to all of
-its test methods::
+its test methods:
+
+.. code-block:: python
 
     # content of test_mark_classlevel.py
    import pytest
+
+
    @pytest.mark.webtest
    class TestClass(object):
        def test_startup(self):
            pass
+
        def test_startup_and_more(self):
            pass
@@ -272,17 +287,23 @@ This is equivalent to directly applying the decorator to the
 two test functions.
 
 To remain backward-compatible with Python 2.4 you can also set a
-``pytestmark`` attribute on a TestClass like this::
+``pytestmark`` attribute on a TestClass like this:
+
+.. code-block:: python
 
     import pytest
 
+
    class TestClass(object):
        pytestmark = pytest.mark.webtest
 
-or if you need to use multiple markers you can use a list::
+or if you need to use multiple markers you can use a list:
+
+.. code-block:: python
 
     import pytest
 
+
    class TestClass(object):
        pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
@@ -305,18 +326,19 @@ Marking individual tests when using parametrize
 
 When using parametrize, applying a mark will make it apply
 to each individual test. However it is also possible to
-apply a marker to an individual test instance::
+apply a marker to an individual test instance:
+
+.. code-block:: python
 
     import pytest
 
+
    @pytest.mark.foo
-    @pytest.mark.parametrize(("n", "expected"), [
-        (1, 2),
-        pytest.param((1, 3), marks=pytest.mark.bar),
-        (2, 3),
-    ])
+    @pytest.mark.parametrize(
+        ("n", "expected"), [(1, 2), pytest.param((1, 3), marks=pytest.mark.bar), (2, 3)]
+    )
    def test_increment(n, expected):
        assert n + 1 == expected
 
 In this example the mark "foo" will apply to each of the three
 tests, whereas the "bar" mark is only applied to the second test.
@@ -332,31 +354,46 @@ Custom marker and command line option to control test runs
 Plugins can provide custom markers and implement specific behaviour
 based on it. This is a self-contained example which adds a command
 line option and a parametrized test function marker to run tests
-specifies via named environments::
+specifies via named environments:
+
+.. code-block:: python
 
     # content of conftest.py
 
    import pytest
 
+
    def pytest_addoption(parser):
-        parser.addoption("-E", action="store", metavar="NAME",
-            help="only run tests matching the environment NAME.")
+        parser.addoption(
+            "-E",
+            action="store",
+            metavar="NAME",
+            help="only run tests matching the environment NAME.",
+        )
+
 
    def pytest_configure(config):
        # register an additional marker
-        config.addinivalue_line("markers",
-            "env(name): mark test to run only on named environment")
+        config.addinivalue_line(
+            "markers", "env(name): mark test to run only on named environment"
+        )
+
 
    def pytest_runtest_setup(item):
-        envnames = [mark.args[0] for mark in item.iter_markers(name='env')]
+        envnames = [mark.args[0] for mark in item.iter_markers(name="env")]
        if envnames:
            if item.config.getoption("-E") not in envnames:
                pytest.skip("test requires env in %r" % envnames)
 
-A test file using this local plugin::
+A test file using this local plugin:
+
+.. code-block:: python
 
     # content of test_someenv.py
 
    import pytest
 
+
    @pytest.mark.env("stage1")
    def test_basic_db_operation():
        pass
@@ -370,7 +407,7 @@ the test needs:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 1 item
 
 test_someenv.py s                                                    [100%]
@@ -385,7 +422,7 @@ and here is one that specifies exactly the environment needed:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 1 item
 
 test_someenv.py .                                                    [100%]
@@ -423,25 +460,32 @@ Passing a callable to custom markers
 
 .. regendoc:wipe
 
-Below is the config file that will be used in the next examples::
+Below is the config file that will be used in the next examples:
+
+.. code-block:: python
 
     # content of conftest.py
    import sys
 
+
    def pytest_runtest_setup(item):
-        for marker in item.iter_markers(name='my_marker'):
+        for marker in item.iter_markers(name="my_marker"):
            print(marker)
            sys.stdout.flush()
 
 A custom marker can have its argument set, i.e. ``args`` and ``kwargs`` properties, defined by either invoking it as a callable or using ``pytest.mark.MARKER_NAME.with_args``. These two methods achieve the same effect most of the time.
 
-However, if there is a callable as the single positional argument with no keyword arguments, using the ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue::
+However, if there is a callable as the single positional argument with no keyword arguments, using the ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue:
+
+.. code-block:: python
 
     # content of test_custom_marker.py
    import pytest
 
+
    def hello_world(*args, **kwargs):
-        return 'Hello World'
+        return "Hello World"
+
 
    @pytest.mark.my_marker.with_args(hello_world)
    def test_with_args():
@@ -467,12 +511,16 @@ Reading markers which were set from multiple places
 
 .. regendoc:wipe
 
 If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin
-code you can read over all such settings. Example::
+code you can read over all such settings. Example:
+
+.. code-block:: python
 
     # content of test_mark_three_times.py
    import pytest
+
    pytestmark = pytest.mark.glob("module", x=1)
 
+
    @pytest.mark.glob("class", x=2)
    class TestClass(object):
        @pytest.mark.glob("function", x=3)
@@ -480,13 +528,16 @@ code you can read over all such settings. Example::
            pass
 
 Here we have the marker "glob" applied three times to the same
-test function. From a conftest file we can read it like this::
+test function. From a conftest file we can read it like this:
+
+.. code-block:: python
 
     # content of conftest.py
    import sys
 
+
    def pytest_runtest_setup(item):
-        for mark in item.iter_markers(name='glob'):
+        for mark in item.iter_markers(name="glob"):
            print("glob args=%s kwargs=%s" % (mark.args, mark.kwargs))
            sys.stdout.flush()
@@ -510,7 +561,9 @@ Consider you have a test suite which marks tests for particular platforms,
 namely ``pytest.mark.darwin``, ``pytest.mark.win32`` etc. and you
 also have tests that run on all platforms and have no specific
 marker. If you now want to have a way to only run the tests
-for your particular platform, you could use the following plugin::
+for your particular platform, you could use the following plugin:
+
+.. code-block:: python
 
     # content of conftest.py
    #
@@ -519,6 +572,7 @@ for your particular platform, you could use the following plugin::
 
    ALL = set("darwin linux win32".split())
 
+
    def pytest_runtest_setup(item):
        supported_platforms = ALL.intersection(mark.name for mark in item.iter_markers())
        plat = sys.platform
@@ -526,24 +580,30 @@ for your particular platform, you could use the following plugin::
        pytest.skip("cannot run on platform %s" % (plat))
 
 then tests will be skipped if they were specified for a different platform.
-Let's do a little test file to show how this looks like::
+Let's do a little test file to show how this looks like:
+
+.. code-block:: python
 
     # content of test_plat.py
 
    import pytest
 
+
    @pytest.mark.darwin
    def test_if_apple_is_evil():
        pass
 
+
    @pytest.mark.linux
    def test_if_linux_works():
        pass
 
+
    @pytest.mark.win32
    def test_if_win32_crashes():
        pass
 
+
    def test_runs_everywhere():
        pass
@@ -555,12 +615,12 @@ then you will see two tests skipped and two executed tests as expected:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 4 items
 
 test_plat.py s.s.                                                    [100%]
 ========================= short test summary info ==========================
-SKIPPED [2] /home/sweet/project/conftest.py:12: cannot run on platform linux
+SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
 
 =================== 2 passed, 2 skipped in 0.12 seconds ====================
@@ -572,7 +632,7 @@ Note that if you specify a platform via the marker-command line option like this
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 4 items / 3 deselected / 1 selected
 
 test_plat.py .                                                       [100%]
@@ -589,28 +649,38 @@ Automatically adding markers based on test names
 If you a test suite where test function names indicate a certain
 type of test, you can implement a hook that automatically defines
 markers so that you can use the ``-m`` option with it. Let's look
-at this test module::
+at this test module:
+
+.. code-block:: python
 
     # content of test_module.py
 
+
    def test_interface_simple():
        assert 0
 
+
    def test_interface_complex():
        assert 0
 
+
    def test_event_simple():
        assert 0
 
+
    def test_something_else():
        assert 0
 
 We want to dynamically define two markers and can do it in a
-``conftest.py`` plugin::
+``conftest.py`` plugin:
+
+.. code-block:: python
 
     # content of conftest.py
 
    import pytest
 
+
    def pytest_collection_modifyitems(items):
        for item in items:
            if "interface" in item.nodeid:
@@ -626,18 +696,18 @@ We can now use the ``-m option`` to select one set:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 4 items / 2 deselected / 2 selected
 
 test_module.py FF                                                    [100%]
 
 ================================= FAILURES =================================
 __________________________ test_interface_simple ___________________________
-test_module.py:3: in test_interface_simple
+test_module.py:4: in test_interface_simple
     assert 0
 E   assert 0
 __________________________ test_interface_complex __________________________
-test_module.py:6: in test_interface_complex
+test_module.py:8: in test_interface_complex
     assert 0
 E   assert 0
 ================== 2 failed, 2 deselected in 0.12 seconds ==================
@@ -650,22 +720,22 @@ or to select both "event" and "interface" tests:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 4 items / 1 deselected / 3 selected
 
 test_module.py FFF                                                   [100%]
 
 ================================= FAILURES =================================
 __________________________ test_interface_simple ___________________________
-test_module.py:3: in test_interface_simple
+test_module.py:4: in test_interface_simple
     assert 0
 E   assert 0
 __________________________ test_interface_complex __________________________
-test_module.py:6: in test_interface_complex
+test_module.py:8: in test_interface_complex
     assert 0
 E   assert 0
 ____________________________ test_event_simple _____________________________
-test_module.py:9: in test_event_simple
+test_module.py:12: in test_event_simple
     assert 0
 E   assert 0
 ================== 3 failed, 1 deselected in 0.12 seconds ==================

View File

@@ -31,7 +31,7 @@ now execute the test specification:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project/nonpython
+rootdir: $REGENDOC_TMPDIR/nonpython
 collected 2 items
 
 test_simple.yml F.                                                   [100%]
@@ -66,7 +66,7 @@ consulted when reporting in ``verbose`` mode:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project/nonpython
+rootdir: $REGENDOC_TMPDIR/nonpython
 collecting ... collected 2 items
 
 test_simple.yml::hello FAILED                                        [ 50%]
@@ -90,9 +90,9 @@ interesting to just look at the collection tree:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project/nonpython
+rootdir: $REGENDOC_TMPDIR/nonpython
 collected 2 items
-<Package /home/sweet/project/nonpython>
+<Package $REGENDOC_TMPDIR/nonpython>
   <YamlFile test_simple.yml>
     <YamlItem hello>
     <YamlItem ok>

View File

@@ -146,7 +146,7 @@ objects, they are still using the default pytest representation:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 8 items
 
 <Module test_time.py>
   <Function test_timedistance_v0[a0-b0-expected0]>
@@ -205,7 +205,7 @@ this is a fully self-contained example which you can run with:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 4 items
 
 test_scenarios.py ....                                               [100%]
@@ -220,7 +220,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 4 items
 
 <Module test_scenarios.py>
   <Class TestSampleWithScenarios>
@@ -287,7 +287,7 @@ Let's first see how it looks like at collection time:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 2 items
 
 <Module test_backends.py>
   <Function test_db_initialized[d1]>
@@ -353,7 +353,7 @@ The result of this test will be successful:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 1 item
 
 <Module test_indirect_list.py>
   <Function test_indirect[a-b]>
@@ -434,9 +434,9 @@ Running it results in some skips if we don't have all the python interpreters in
 
 .. code-block:: pytest
 
     . $ pytest -rs -q multipython.py
-    ......sss......ssssssssssss                                          [100%]
+    ...sss...sssssssss...sss...                                          [100%]
     ========================= short test summary info ==========================
-    SKIPPED [15] /home/sweet/project/CWD/multipython.py:30: 'python3.5' not found
+    SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.4' not found
     12 passed, 15 skipped in 0.12 seconds
 
 Indirect parametrization of optional implementations/imports
@@ -488,12 +488,12 @@ If you run this with reporting for skips enabled:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collected 2 items
 
 test_module.py .s                                                    [100%]
 ========================= short test summary info ==========================
-SKIPPED [1] /home/sweet/project/conftest.py:11: could not import 'opt2'
+SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'
 
 =================== 1 passed, 1 skipped in 0.12 seconds ====================
@@ -515,21 +515,25 @@ Set marks or test ID for individual parametrized test
 --------------------------------------------------------------------
 
 Use ``pytest.param`` to apply marks or set test ID to individual parametrized test.
-For example::
+For example:
+
+.. code-block:: python
 
     # content of test_pytest_param_example.py
    import pytest
-    @pytest.mark.parametrize('test_input,expected', [
-        ('3+5', 8),
-        pytest.param('1+7', 8,
-                     marks=pytest.mark.basic),
-        pytest.param('2+4', 6,
-                     marks=pytest.mark.basic,
-                     id='basic_2+4'),
-        pytest.param('6*9', 42,
-                     marks=[pytest.mark.basic, pytest.mark.xfail],
-                     id='basic_6*9'),
-    ])
+
+
+    @pytest.mark.parametrize(
+        "test_input,expected",
+        [
+            ("3+5", 8),
+            pytest.param("1+7", 8, marks=pytest.mark.basic),
+            pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
+            pytest.param(
+                "6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
+            ),
+        ],
+    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected
@@ -546,7 +550,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project
+rootdir: $REGENDOC_TMPDIR
 collecting ... collected 17 items / 14 deselected / 3 selected
 
 test_pytest_param_example.py::test_eval[1+7-8] PASSED                [ 33%]

View File

@@ -148,7 +148,7 @@ The test collection would look like this:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project, inifile: pytest.ini
+rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 2 items
 
 <Module check_myapp.py>
   <Class CheckMyApp>
@@ -210,7 +210,7 @@ You can always peek at the collection tree without running tests like this:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project, inifile: pytest.ini
+rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 3 items
 
 <Module CWD/pythoncollection.py>
   <Function test_function>
@@ -285,7 +285,7 @@ file will be left out:
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project, inifile: pytest.ini
+rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 0 items
 
 ======================= no tests ran in 0.12 seconds =======================

View File

@@ -1,13 +1,9 @@
 .. _`tbreportdemo`:
 
 Demo of Python failure reports with pytest
-==================================================
+==========================================
 
-Here is a nice run of several tens of failures
-and how ``pytest`` presents things (unfortunately
-not showing the nice colors here in the HTML that you
-get on the terminal - we are working on that):
+Here is a nice run of several failures and how ``pytest`` presents things:
 
 .. code-block:: pytest
@@ -15,7 +11,7 @@ get on the terminal - we are working on that):
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: /home/sweet/project/assertion
+rootdir: $REGENDOC_TMPDIR/assertion
 collected 44 items
 
 failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF        [100%]
@@ -475,7 +471,7 @@ get on the terminal - we are working on that):
 >       assert 1 == 0
 E       AssertionError
 
-<0-codegen 'abc-123' /home/sweet/project/assertion/failure_demo.py:201>:2: AssertionError
+<0-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:201>:2: AssertionError
 ____________________ TestMoreErrors.test_complex_error _____________________
 
 self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
View File
@ -129,7 +129,7 @@ directory with the above conftest.py:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 0 items collected 0 items
======================= no tests ran in 0.12 seconds ======================= ======================= no tests ran in 0.12 seconds =======================
@ -190,7 +190,7 @@ and when running it will see a skipped "slow" test:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 2 items collected 2 items
test_module.py .s [100%] test_module.py .s [100%]
@ -207,7 +207,7 @@ Or run it including the ``slow`` marked test:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 2 items collected 2 items
test_module.py .. [100%] test_module.py .. [100%]
@ -351,7 +351,7 @@ which will add the string to the test header accordingly:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
project deps: mylib-1.1 project deps: mylib-1.1
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 0 items collected 0 items
======================= no tests ran in 0.12 seconds ======================= ======================= no tests ran in 0.12 seconds =======================
@ -381,7 +381,7 @@ which will add info only when run with "--v":
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
info1: did you know that ... info1: did you know that ...
did you? did you?
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collecting ... collected 0 items collecting ... collected 0 items
======================= no tests ran in 0.12 seconds ======================= ======================= no tests ran in 0.12 seconds =======================
@ -394,7 +394,7 @@ and nothing when run plainly:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 0 items collected 0 items
======================= no tests ran in 0.12 seconds ======================= ======================= no tests ran in 0.12 seconds =======================
@ -434,7 +434,7 @@ Now we can profile which test functions execute the slowest:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 3 items collected 3 items
test_some_are_slow.py ... [100%] test_some_are_slow.py ... [100%]
@ -509,7 +509,7 @@ If we run this:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 4 items collected 4 items
test_step.py .Fx. [100%] test_step.py .Fx. [100%]
@ -593,7 +593,7 @@ We can run this:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 7 items collected 7 items
test_step.py .Fx. [ 57%] test_step.py .Fx. [ 57%]
@ -603,13 +603,13 @@ We can run this:
================================== ERRORS ================================== ================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________ _______________________ ERROR at setup of test_root ________________________
file /home/sweet/project/b/test_error.py, line 1 file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out def test_root(db): # no db here, will error out
E fixture 'db' not found E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory > available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them. > use 'pytest --fixtures [testpath]' for help on them.
/home/sweet/project/b/test_error.py:1 $REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES ================================= ================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________ ____________________ TestUserHandling.test_modification ____________________
@ -707,7 +707,7 @@ and run them:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 2 items collected 2 items
test_module.py FF [100%] test_module.py FF [100%]
@ -811,7 +811,7 @@ and run it:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 3 items collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails test_module.py Esetting up a test failed! test_module.py::test_setup_fails
View File
@ -74,7 +74,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 1 item collected 1 item
test_smtpsimple.py F [100%] test_smtpsimple.py F [100%]
@ -217,7 +217,7 @@ inspect what is going on and can now run the tests:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 2 items collected 2 items
test_module.py FF [100%] test_module.py FF [100%]
@ -710,7 +710,7 @@ Running the above tests results in the following test IDs being used:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 10 items collected 10 items
<Module test_anothersmtp.py> <Module test_anothersmtp.py>
<Function test_showhelo[smtp.gmail.com]> <Function test_showhelo[smtp.gmail.com]>
@ -755,7 +755,7 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collecting ... collected 3 items collecting ... collected 3 items
test_fixture_marks.py::test_data[0] PASSED [ 33%] test_fixture_marks.py::test_data[0] PASSED [ 33%]
@ -800,7 +800,7 @@ Here we declare an ``app`` fixture which receives the previously defined
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collecting ... collected 2 items collecting ... collected 2 items
test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%] test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
@ -871,7 +871,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collecting ... collected 8 items collecting ... collected 8 items
test_module.py::test_0[1] SETUP otherarg 1 test_module.py::test_0[1] SETUP otherarg 1
View File
@ -52,7 +52,7 @@ That's it. You can now execute the test function:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 1 item collected 1 item
test_sample.py F [100%] test_sample.py F [100%]
View File
@ -57,14 +57,16 @@ Applying marks to ``@pytest.mark.parametrize`` parameters
.. versionchanged:: 3.1 .. versionchanged:: 3.1
Prior to version 3.1 the supported mechanism for marking values Prior to version 3.1 the supported mechanism for marking values
used the syntax:: used the syntax:
.. code-block:: python
import pytest import pytest
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6), @pytest.mark.parametrize(
pytest.mark.xfail(("6*9", 42),), "test_input,expected", [("3+5", 8), ("2+4", 6), pytest.mark.xfail(("6*9", 42))]
]) )
def test_eval(test_input, expected): def test_eval(test_input, expected):
assert eval(test_input) == expected assert eval(test_input) == expected
@ -105,9 +107,13 @@ Conditions as strings instead of booleans
.. versionchanged:: 2.4 .. versionchanged:: 2.4
Prior to pytest-2.4 the only way to specify skipif/xfail conditions was Prior to pytest-2.4 the only way to specify skipif/xfail conditions was
to use strings:: to use strings:
.. code-block:: python
import sys import sys
@pytest.mark.skipif("sys.version_info >= (3,3)") @pytest.mark.skipif("sys.version_info >= (3,3)")
def test_function(): def test_function():
... ...
@ -139,17 +145,20 @@ dictionary which is constructed as follows:
expression is applied. expression is applied.
The pytest ``config`` object allows you to skip based on a test The pytest ``config`` object allows you to skip based on a test
configuration value which you might have added:: configuration value which you might have added:
.. code-block:: python
@pytest.mark.skipif("not config.getvalue('db')") @pytest.mark.skipif("not config.getvalue('db')")
def test_function(...): def test_function():
... ...
The equivalent with "boolean conditions" is:: The equivalent with "boolean conditions" is:
@pytest.mark.skipif(not pytest.config.getvalue("db"), .. code-block:: python
reason="--db was not specified")
def test_function(...): @pytest.mark.skipif(not pytest.config.getvalue("db"), reason="--db was not specified")
def test_function():
pass pass
.. note:: .. note::
@ -164,12 +173,16 @@ The equivalent with "boolean conditions" is::
.. versionchanged:: 2.4 .. versionchanged:: 2.4
Prior to version 2.4, to set a breakpoint in code one needed to use ``pytest.set_trace()``:: Prior to version 2.4, to set a breakpoint in code one needed to use ``pytest.set_trace()``:
.. code-block:: python
import pytest import pytest
def test_function(): def test_function():
... ...
pytest.set_trace() # invoke PDB debugger and tracing pytest.set_trace() # invoke PDB debugger and tracing
This is no longer needed and one can use the native ``import pdb;pdb.set_trace()`` call directly. This is no longer needed and one can use the native ``import pdb;pdb.set_trace()`` call directly.
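A minimal sketch of the native replacement, assuming a plain test function:

.. code-block:: python

    import pdb


    def test_function():
        ...
        pdb.set_trace()  # execution pauses here under the standard debugger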
View File
@ -30,7 +30,7 @@ To execute it:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 1 item collected 1 item
test_sample.py F [100%] test_sample.py F [100%]
View File
@ -36,15 +36,15 @@ pytest enables test parametrization at several levels:
The builtin :ref:`pytest.mark.parametrize ref` decorator enables The builtin :ref:`pytest.mark.parametrize ref` decorator enables
parametrization of arguments for a test function. Here is a typical example parametrization of arguments for a test function. Here is a typical example
of a test function that implements checking that a certain input leads of a test function that implements checking that a certain input leads
to an expected output:: to an expected output:
.. code-block:: python
# content of test_expectation.py # content of test_expectation.py
import pytest import pytest
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6), @pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
("6*9", 42),
])
def test_eval(test_input, expected): def test_eval(test_input, expected):
assert eval(test_input) == expected assert eval(test_input) == expected
@ -58,7 +58,7 @@ them in turn:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 3 items collected 3 items
test_expectation.py ..F [100%] test_expectation.py ..F [100%]
@ -68,17 +68,13 @@ them in turn:
test_input = '6*9', expected = 42 test_input = '6*9', expected = 42
@pytest.mark.parametrize("test_input,expected", [ @pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
("3+5", 8),
("2+4", 6),
("6*9", 42),
])
def test_eval(test_input, expected): def test_eval(test_input, expected):
> assert eval(test_input) == expected > assert eval(test_input) == expected
E AssertionError: assert 54 == 42 E AssertionError: assert 54 == 42
E + where 54 = eval('6*9') E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError test_expectation.py:6: AssertionError
==================== 1 failed, 2 passed in 0.12 seconds ==================== ==================== 1 failed, 2 passed in 0.12 seconds ====================
.. note:: .. note::
@ -104,16 +100,18 @@ Note that you could also use the parametrize marker on a class or a module
(see :ref:`mark`) which would invoke several functions with the argument sets. (see :ref:`mark`) which would invoke several functions with the argument sets.
It is also possible to mark individual test instances within parametrize, It is also possible to mark individual test instances within parametrize,
for example with the builtin ``mark.xfail``:: for example with the builtin ``mark.xfail``:
.. code-block:: python
# content of test_expectation.py # content of test_expectation.py
import pytest import pytest
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6), @pytest.mark.parametrize(
pytest.param("6*9", 42, "test_input,expected",
marks=pytest.mark.xfail), [("3+5", 8), ("2+4", 6), pytest.param("6*9", 42, marks=pytest.mark.xfail)],
]) )
def test_eval(test_input, expected): def test_eval(test_input, expected):
assert eval(test_input) == expected assert eval(test_input) == expected
@ -125,7 +123,7 @@ Let's run this:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 3 items collected 3 items
test_expectation.py ..x [100%] test_expectation.py ..x [100%]
@ -140,9 +138,13 @@ example, if they're dynamically generated by some function - the behaviour of
pytest is defined by the :confval:`empty_parameter_set_mark` option. pytest is defined by the :confval:`empty_parameter_set_mark` option.
To get all combinations of multiple parametrized arguments you can stack To get all combinations of multiple parametrized arguments you can stack
``parametrize`` decorators:: ``parametrize`` decorators:
.. code-block:: python
import pytest import pytest
@pytest.mark.parametrize("x", [0, 1]) @pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3]) @pytest.mark.parametrize("y", [2, 3])
def test_foo(x, y): def test_foo(x, y):
@ -166,26 +168,36 @@ parametrization.
For example, let's say we want to run a test taking string inputs which For example, let's say we want to run a test taking string inputs which
we want to set via a new ``pytest`` command line option. Let's first write we want to set via a new ``pytest`` command line option. Let's first write
a simple test accepting a ``stringinput`` fixture function argument:: a simple test accepting a ``stringinput`` fixture function argument:
.. code-block:: python
# content of test_strings.py # content of test_strings.py
def test_valid_string(stringinput): def test_valid_string(stringinput):
assert stringinput.isalpha() assert stringinput.isalpha()
Now we add a ``conftest.py`` file containing the addition of a Now we add a ``conftest.py`` file containing the addition of a
command line option and the parametrization of our test function:: command line option and the parametrization of our test function:
.. code-block:: python
# content of conftest.py # content of conftest.py
def pytest_addoption(parser): def pytest_addoption(parser):
parser.addoption("--stringinput", action="append", default=[], parser.addoption(
help="list of stringinputs to pass to test functions") "--stringinput",
action="append",
default=[],
help="list of stringinputs to pass to test functions",
)
def pytest_generate_tests(metafunc): def pytest_generate_tests(metafunc):
if 'stringinput' in metafunc.fixturenames: if "stringinput" in metafunc.fixturenames:
metafunc.parametrize("stringinput", metafunc.parametrize("stringinput", metafunc.config.getoption("stringinput"))
metafunc.config.getoption('stringinput'))
If we now pass two stringinput values, our test will run twice: If we now pass two stringinput values, our test will run twice:
@ -212,7 +224,7 @@ Let's also run with a stringinput that will lead to a failing test:
E + where False = <built-in method isalpha of str object at 0xdeadbeef>() E + where False = <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
test_strings.py:3: AssertionError test_strings.py:4: AssertionError
1 failed in 0.12 seconds 1 failed in 0.12 seconds
As expected our test function fails. As expected our test function fails.
@ -226,7 +238,7 @@ list:
$ pytest -q -rs test_strings.py $ pytest -q -rs test_strings.py
s [100%] s [100%]
========================= short test summary info ========================== ========================= short test summary info ==========================
SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at /home/sweet/project/test_strings.py:1 SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:2
1 skipped in 0.12 seconds 1 skipped in 0.12 seconds
Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
View File
@ -84,32 +84,44 @@ It is also possible to skip the whole module using
If you wish to skip something conditionally then you can use ``skipif`` instead. If you wish to skip something conditionally then you can use ``skipif`` instead.
Here is an example of marking a test function to be skipped Here is an example of marking a test function to be skipped
when run on an interpreter earlier than Python 3.6:: when run on an interpreter earlier than Python 3.6:
.. code-block:: python
import sys import sys
@pytest.mark.skipif(sys.version_info < (3,6),
reason="requires python3.6 or higher")
@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
def test_function(): def test_function():
... ...
If the condition evaluates to ``True`` during collection, the test function will be skipped, If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``. with the specified reason appearing in the summary when using ``-rs``.
You can share ``skipif`` markers between modules. Consider this test module:: You can share ``skipif`` markers between modules. Consider this test module:
.. code-block:: python
# content of test_mymodule.py # content of test_mymodule.py
import mymodule import mymodule
minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
reason="at least mymodule-1.1 required") minversion = pytest.mark.skipif(
mymodule.__versioninfo__ < (1, 1), reason="at least mymodule-1.1 required"
)
@minversion @minversion
def test_function(): def test_function():
... ...
You can import the marker and reuse it in another test module:: You can import the marker and reuse it in another test module:
.. code-block:: python
# test_myothermodule.py # test_myothermodule.py
from test_mymodule import minversion from test_mymodule import minversion
@minversion @minversion
def test_anotherfunction(): def test_anotherfunction():
... ...
@ -128,12 +140,12 @@ so they are supported mainly for backward compatibility reasons.
Skip all test functions of a class or module Skip all test functions of a class or module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the ``skipif`` marker (as any other marker) on classes:: You can use the ``skipif`` marker (as any other marker) on classes:
@pytest.mark.skipif(sys.platform == 'win32', .. code-block:: python
reason="does not run on windows")
@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
class TestPosixCalls(object): class TestPosixCalls(object):
def test_function(self): def test_function(self):
"will not be setup or run under 'win32' platform" "will not be setup or run under 'win32' platform"
@ -269,10 +281,11 @@ You can change the default value of the ``strict`` parameter using the
~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~
As with skipif_ you can also mark your expectation of a failure As with skipif_ you can also mark your expectation of a failure
on a particular platform:: on a particular platform:
@pytest.mark.xfail(sys.version_info >= (3,6), .. code-block:: python
reason="python3.6 api changes")
@pytest.mark.xfail(sys.version_info >= (3, 6), reason="python3.6 api changes")
def test_function(): def test_function():
... ...
@ -335,7 +348,7 @@ Running it with the report-on-xfail option gives this output:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project/example rootdir: $REGENDOC_TMPDIR/example
collected 7 items collected 7 items
xfail_demo.py xxxxxxx [100%] xfail_demo.py xxxxxxx [100%]
View File
@ -43,7 +43,7 @@ Running this would result in a passed test except for the last
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 1 item collected 1 item
test_tmp_path.py F [100%] test_tmp_path.py F [100%]
@ -110,7 +110,7 @@ Running this would result in a passed test except for the last
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 1 item collected 1 item
test_tmpdir.py F [100%] test_tmpdir.py F [100%]
View File
@ -130,7 +130,7 @@ the ``self.db`` values in the traceback:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 2 items collected 2 items
test_unittest_db.py FF [100%] test_unittest_db.py FF [100%]
View File
@ -204,7 +204,7 @@ Example:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 6 items collected 6 items
test_example.py .FEsxX [100%] test_example.py .FEsxX [100%]
@ -227,15 +227,16 @@ Example:
test_example.py:14: AssertionError test_example.py:14: AssertionError
========================= short test summary info ========================== ========================= short test summary info ==========================
SKIPPED [1] /home/sweet/project/test_example.py:23: skipping this test SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
XFAIL test_example.py::test_xfail XFAIL test_example.py::test_xfail
reason: xfailing this test reason: xfailing this test
XPASS test_example.py::test_xpass always xfail XPASS test_example.py::test_xpass always xfail
ERROR test_example.py::test_error ERROR test_example.py::test_error
FAILED test_example.py::test_fail FAILED test_example.py::test_fail
= 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds = 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
The ``-r`` option accepts a number of characters after it, with ``a`` used above meaning "all except passes". The ``-r`` option accepts a number of characters after it, with ``a`` used
above meaning "all except passes".
Here is the full list of available characters that can be used: Here is the full list of available characters that can be used:
@ -247,6 +248,7 @@ Here is the full list of available characters that can be used:
- ``p`` - passed - ``p`` - passed
- ``P`` - passed with output - ``P`` - passed with output
- ``a`` - all except ``pP`` - ``a`` - all except ``pP``
- ``A`` - all
More than one character can be used, so for example to only see failed and skipped tests, you can execute: More than one character can be used, so for example to only see failed and skipped tests, you can execute:
@ -256,7 +258,7 @@ More than one character can be used, so for example to only see failed and skipp
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 6 items collected 6 items
test_example.py .FEsxX [100%] test_example.py .FEsxX [100%]
@ -280,8 +282,8 @@ More than one character can be used, so for example to only see failed and skipp
test_example.py:14: AssertionError test_example.py:14: AssertionError
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_example.py::test_fail FAILED test_example.py::test_fail
SKIPPED [1] /home/sweet/project/test_example.py:23: skipping this test SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
= 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds = 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
captured output: captured output:
@ -292,7 +294,7 @@ captured output:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 6 items collected 6 items
test_example.py .FEsxX [100%] test_example.py .FEsxX [100%]
@ -320,7 +322,7 @@ captured output:
_________________________________ test_ok __________________________________ _________________________________ test_ok __________________________________
--------------------------- Captured stdout call --------------------------- --------------------------- Captured stdout call ---------------------------
ok ok
= 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds = 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
.. _pdb-option: .. _pdb-option:
View File
@ -6,15 +6,19 @@ Warnings Capture
.. versionadded:: 3.1 .. versionadded:: 3.1
Starting from version ``3.1``, pytest now automatically catches warnings during test execution Starting from version ``3.1``, pytest now automatically catches warnings during test execution
and displays them at the end of the session:: and displays them at the end of the session:
.. code-block:: python
# content of test_show_warnings.py # content of test_show_warnings.py
import warnings import warnings
def api_v1(): def api_v1():
warnings.warn(UserWarning("api v1, should use functions from v2")) warnings.warn(UserWarning("api v1, should use functions from v2"))
return 1 return 1
def test_one(): def test_one():
assert api_v1() == 1 assert api_v1() == 1
@ -26,14 +30,14 @@ Running pytest now produces this output:
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project rootdir: $REGENDOC_TMPDIR
collected 1 item collected 1 item
test_show_warnings.py . [100%] test_show_warnings.py . [100%]
============================= warnings summary ============================= ============================= warnings summary =============================
test_show_warnings.py::test_one test_show_warnings.py::test_one
/home/sweet/project/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2 $REGENDOC_TMPDIR/test_show_warnings.py:5: UserWarning: api v1, should use functions from v2
warnings.warn(UserWarning("api v1, should use functions from v2")) warnings.warn(UserWarning("api v1, should use functions from v2"))
-- Docs: https://docs.pytest.org/en/latest/warnings.html -- Docs: https://docs.pytest.org/en/latest/warnings.html
@ -52,14 +56,14 @@ them into errors:
def test_one(): def test_one():
> assert api_v1() == 1 > assert api_v1() == 1
test_show_warnings.py:8: test_show_warnings.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def api_v1(): def api_v1():
> warnings.warn(UserWarning("api v1, should use functions from v2")) > warnings.warn(UserWarning("api v1, should use functions from v2"))
E UserWarning: api v1, should use functions from v2 E UserWarning: api v1, should use functions from v2
test_show_warnings.py:4: UserWarning test_show_warnings.py:5: UserWarning
1 failed in 0.12 seconds 1 failed in 0.12 seconds
The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option. The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option.
@ -195,28 +199,36 @@ Ensuring code triggers a deprecation warning
You can also call a global helper for checking You can also call a global helper for checking
that a certain function call triggers a ``DeprecationWarning`` or that a certain function call triggers a ``DeprecationWarning`` or
``PendingDeprecationWarning``:: ``PendingDeprecationWarning``:
.. code-block:: python
import pytest import pytest
def test_global(): def test_global():
pytest.deprecated_call(myfunction, 17) pytest.deprecated_call(myfunction, 17)
By default, ``DeprecationWarning`` and ``PendingDeprecationWarning`` will not be By default, ``DeprecationWarning`` and ``PendingDeprecationWarning`` will not be
caught when using ``pytest.warns`` or ``recwarn`` because default Python warnings filters hide caught when using ``pytest.warns`` or ``recwarn`` because default Python warnings filters hide
them. If you wish to record them in your own code, use the them. If you wish to record them in your own code, use the
command ``warnings.simplefilter('always')``:: command ``warnings.simplefilter('always')``:
.. code-block:: python
import warnings import warnings
import pytest import pytest
def test_deprecation(recwarn): def test_deprecation(recwarn):
warnings.simplefilter('always') warnings.simplefilter("always")
warnings.warn("deprecated", DeprecationWarning) warnings.warn("deprecated", DeprecationWarning)
assert len(recwarn) == 1 assert len(recwarn) == 1
assert recwarn.pop(DeprecationWarning) assert recwarn.pop(DeprecationWarning)
You can also use it as a context manager:: You can also use it as a context manager:
.. code-block:: python
def test_global(): def test_global():
with pytest.deprecated_call(): with pytest.deprecated_call():
@ -238,11 +250,14 @@ Asserting warnings with the warns function
.. versionadded:: 2.8 .. versionadded:: 2.8
You can check that code raises a particular warning using ``pytest.warns``, You can check that code raises a particular warning using ``pytest.warns``,
which works in a similar manner to :ref:`raises <assertraises>`:: which works in a similar manner to :ref:`raises <assertraises>`:
.. code-block:: python
import warnings import warnings
import pytest import pytest
def test_warning(): def test_warning():
with pytest.warns(UserWarning): with pytest.warns(UserWarning):
warnings.warn("my warning", UserWarning) warnings.warn("my warning", UserWarning)
@ -269,7 +284,9 @@ You can also call ``pytest.warns`` on a function or code string::
The function also returns a list of all raised warnings (as The function also returns a list of all raised warnings (as
``warnings.WarningMessage`` objects), which you can query for ``warnings.WarningMessage`` objects), which you can query for
additional information:: additional information:
.. code-block:: python
with pytest.warns(RuntimeWarning) as record: with pytest.warns(RuntimeWarning) as record:
warnings.warn("another warning", RuntimeWarning) warnings.warn("another warning", RuntimeWarning)
@ -297,7 +314,9 @@ You can record raised warnings either using ``pytest.warns`` or with
the ``recwarn`` fixture. the ``recwarn`` fixture.
To record with ``pytest.warns`` without asserting anything about the warnings, To record with ``pytest.warns`` without asserting anything about the warnings,
pass ``None`` as the expected warning type:: pass ``None`` as the expected warning type:
.. code-block:: python
with pytest.warns(None) as record: with pytest.warns(None) as record:
warnings.warn("user", UserWarning) warnings.warn("user", UserWarning)
@ -307,10 +326,13 @@ pass ``None`` as the expected warning type::
assert str(record[0].message) == "user" assert str(record[0].message) == "user"
assert str(record[1].message) == "runtime" assert str(record[1].message) == "runtime"
The ``recwarn`` fixture will record warnings for the whole function:: The ``recwarn`` fixture will record warnings for the whole function:
.. code-block:: python
import warnings import warnings
def test_hello(recwarn): def test_hello(recwarn):
warnings.warn("hello", UserWarning) warnings.warn("hello", UserWarning)
assert len(recwarn) == 1 assert len(recwarn) == 1
@ -378,7 +400,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
============================= warnings summary ============================= ============================= warnings summary =============================
test_pytest_warnings.py:1 test_pytest_warnings.py:1
/home/sweet/project/test_pytest_warnings.py:1: PytestWarning: cannot collect test class 'Test' because it has a __init__ constructor $REGENDOC_TMPDIR/test_pytest_warnings.py:1: PytestWarning: cannot collect test class 'Test' because it has a __init__ constructor
class Test: class Test:
-- Docs: https://docs.pytest.org/en/latest/warnings.html -- Docs: https://docs.pytest.org/en/latest/warnings.html
View File
@ -433,14 +433,14 @@ additionally it is possible to copy examples for an example folder before runnin
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: /home/sweet/project, inifile: pytest.ini rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 2 items collected 2 items
test_example.py .. [100%] test_example.py .. [100%]
============================= warnings summary ============================= ============================= warnings summary =============================
test_example.py::test_plugin test_example.py::test_plugin
/home/sweet/project/test_example.py:4: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time $REGENDOC_TMPDIR/test_example.py:4: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time
testdir.copy_example("test_example.py") testdir.copy_example("test_example.py")
-- Docs: https://docs.pytest.org/en/latest/warnings.html -- Docs: https://docs.pytest.org/en/latest/warnings.html
@ -528,10 +528,13 @@ a :py:class:`Result <pluggy._Result>` instance which encapsulates a result or
exception info. The yield point itself will thus typically not raise exception info. The yield point itself will thus typically not raise
exceptions (unless there are bugs). exceptions (unless there are bugs).
Here is an example definition of a hook wrapper:: Here is an example definition of a hook wrapper:
.. code-block:: python
import pytest import pytest
@pytest.hookimpl(hookwrapper=True) @pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem): def pytest_pyfunc_call(pyfuncitem):
do_something_before_next_hook_executes() do_something_before_next_hook_executes()
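The hunk cuts the example short; here is a self-contained sketch of the full wrapper shape, where ``do_something_before_next_hook_executes`` and ``post_process_result`` are placeholder helpers:

.. code-block:: python

    import pytest


    @pytest.hookimpl(hookwrapper=True)
    def pytest_pyfunc_call(pyfuncitem):
        do_something_before_next_hook_executes()  # placeholder helper

        outcome = yield  # all other pytest_pyfunc_call implementations run here

        # outcome wraps the result or exception of the inner hook calls;
        # get_result() returns the value or re-raises the captured exception
        res = outcome.get_result()
        post_process_result(res)  # placeholder helper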
@ -636,10 +639,13 @@ if you depend on a plugin that is not installed, validation will fail and
the error message will not make much sense to your users. the error message will not make much sense to your users.
One approach is to defer the hook implementation to a new plugin instead of One approach is to defer the hook implementation to a new plugin instead of
declaring the hook functions directly in your plugin module, for example:: declaring the hook functions directly in your plugin module, for example:
.. code-block:: python
# contents of myplugin.py # contents of myplugin.py
class DeferPlugin(object): class DeferPlugin(object):
"""Simple plugin to defer pytest-xdist hook functions.""" """Simple plugin to defer pytest-xdist hook functions."""
@ -647,8 +653,9 @@ declaring the hook functions directly in your plugin module, for example::
"""standard xdist hook function. """standard xdist hook function.
""" """
def pytest_configure(config): def pytest_configure(config):
if config.pluginmanager.hasplugin('xdist'): if config.pluginmanager.hasplugin("xdist"):
config.pluginmanager.register(DeferPlugin()) config.pluginmanager.register(DeferPlugin())
This has the added benefit of allowing you to conditionally install hooks This has the added benefit of allowing you to conditionally install hooks
View File
@ -285,20 +285,30 @@ def _compare_eq_iterable(left, right, verbose=0):
def _compare_eq_sequence(left, right, verbose=0): def _compare_eq_sequence(left, right, verbose=0):
explanation = [] explanation = []
for i in range(min(len(left), len(right))): len_left = len(left)
len_right = len(right)
for i in range(min(len_left, len_right)):
if left[i] != right[i]: if left[i] != right[i]:
explanation += [u"At index %s diff: %r != %r" % (i, left[i], right[i])] explanation += [u"At index %s diff: %r != %r" % (i, left[i], right[i])]
break break
if len(left) > len(right): len_diff = len_left - len_right
explanation += [
u"Left contains more items, first extra item: %s" if len_diff:
% saferepr(left[len(right)]) if len_diff > 0:
] dir_with_more = "Left"
elif len(left) < len(right): extra = saferepr(left[len_right])
explanation += [ else:
u"Right contains more items, first extra item: %s" len_diff = 0 - len_diff
% saferepr(right[len(left)]) dir_with_more = "Right"
] extra = saferepr(right[len_left])
if len_diff == 1:
explanation += [u"%s contains one more item: %s" % (dir_with_more, extra)]
else:
explanation += [
u"%s contains %d more items, first extra item: %s"
% (dir_with_more, len_diff, extra)
]
return explanation return explanation
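A hypothetical failure showing the effect of the new singular/plural handling:

.. code-block:: python

    def test_sequence_diff():
        # pytest now reports "Left contains one more item: 3" instead of
        # "Left contains more items, first extra item: 3"
        assert [1, 2, 3] == [1, 2]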
@ -319,7 +329,9 @@ def _compare_eq_set(left, right, verbose=0):
def _compare_eq_dict(left, right, verbose=0): def _compare_eq_dict(left, right, verbose=0):
explanation = [] explanation = []
common = set(left).intersection(set(right)) set_left = set(left)
set_right = set(right)
common = set_left.intersection(set_right)
same = {k: left[k] for k in common if left[k] == right[k]} same = {k: left[k] for k in common if left[k] == right[k]}
if same and verbose < 2: if same and verbose < 2:
explanation += [u"Omitting %s identical items, use -vv to show" % len(same)] explanation += [u"Omitting %s identical items, use -vv to show" % len(same)]
@ -331,15 +343,23 @@ def _compare_eq_dict(left, right, verbose=0):
explanation += [u"Differing items:"] explanation += [u"Differing items:"]
for k in diff: for k in diff:
explanation += [saferepr({k: left[k]}) + " != " + saferepr({k: right[k]})] explanation += [saferepr({k: left[k]}) + " != " + saferepr({k: right[k]})]
extra_left = set(left) - set(right) extra_left = set_left - set_right
if extra_left: len_extra_left = len(extra_left)
explanation.append(u"Left contains more items:") if len_extra_left:
explanation.append(
u"Left contains %d more item%s:"
% (len_extra_left, "" if len_extra_left == 1 else "s")
)
explanation.extend( explanation.extend(
pprint.pformat({k: left[k] for k in extra_left}).splitlines() pprint.pformat({k: left[k] for k in extra_left}).splitlines()
) )
extra_right = set(right) - set(left) extra_right = set_right - set_left
if extra_right: len_extra_right = len(extra_right)
explanation.append(u"Right contains more items:") if len_extra_right:
explanation.append(
u"Right contains %d more item%s:"
% (len_extra_right, "" if len_extra_right == 1 else "s")
)
explanation.extend( explanation.extend(
pprint.pformat({k: right[k] for k in extra_right}).splitlines() pprint.pformat({k: right[k] for k in extra_right}).splitlines()
) )
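And the dict counterpart, likewise hypothetical:

.. code-block:: python

    def test_dict_diff():
        # now reported as "Right contains 1 more item:" followed by the
        # pretty-printed extra entry {'b': 1}
        assert {"a": 0} == {"a": 0, "b": 1}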
View File
@ -179,45 +179,45 @@ class LFPlugin(object):
self.lastfailed[report.nodeid] = True self.lastfailed[report.nodeid] = True
def pytest_collection_modifyitems(self, session, config, items): def pytest_collection_modifyitems(self, session, config, items):
if self.active: if not self.active:
if self.lastfailed: return
previously_failed = []
previously_passed = []
for item in items:
if item.nodeid in self.lastfailed:
previously_failed.append(item)
else:
previously_passed.append(item)
self._previously_failed_count = len(previously_failed)
if not previously_failed: if self.lastfailed:
# Running a subset of all tests with recorded failures previously_failed = []
# only outside of it. previously_passed = []
self._report_status = "%d known failures not in selected tests" % ( for item in items:
len(self.lastfailed), if item.nodeid in self.lastfailed:
) previously_failed.append(item)
else: else:
if self.config.getoption("lf"): previously_passed.append(item)
items[:] = previously_failed self._previously_failed_count = len(previously_failed)
config.hook.pytest_deselected(items=previously_passed)
else: # --failedfirst
items[:] = previously_failed + previously_passed
noun = ( if not previously_failed:
"failure" if self._previously_failed_count == 1 else "failures" # Running a subset of all tests with recorded failures
) # only outside of it.
suffix = " first" if self.config.getoption("failedfirst") else "" self._report_status = "%d known failures not in selected tests" % (
self._report_status = "rerun previous {count} {noun}{suffix}".format( len(self.lastfailed),
count=self._previously_failed_count, suffix=suffix, noun=noun )
)
else: else:
self._report_status = "no previously failed tests, " if self.config.getoption("lf"):
if self.config.getoption("last_failed_no_failures") == "none": items[:] = previously_failed
self._report_status += "deselecting all items." config.hook.pytest_deselected(items=previously_passed)
config.hook.pytest_deselected(items=items) else: # --failedfirst
items[:] = [] items[:] = previously_failed + previously_passed
else:
self._report_status += "not deselecting items." noun = "failure" if self._previously_failed_count == 1 else "failures"
suffix = " first" if self.config.getoption("failedfirst") else ""
self._report_status = "rerun previous {count} {noun}{suffix}".format(
count=self._previously_failed_count, suffix=suffix, noun=noun
)
else:
self._report_status = "no previously failed tests, "
if self.config.getoption("last_failed_no_failures") == "none":
self._report_status += "deselecting all items."
config.hook.pytest_deselected(items=items)
items[:] = []
else:
self._report_status += "not deselecting items."
def pytest_sessionfinish(self, session): def pytest_sessionfinish(self, session):
config = self.config config = self.config
@ -292,9 +292,13 @@ def pytest_addoption(parser):
) )
group.addoption( group.addoption(
"--cache-show", "--cache-show",
action="store_true", action="append",
nargs="?",
dest="cacheshow", dest="cacheshow",
help="show cache contents, don't perform collection or tests", help=(
"show cache contents, don't perform collection or tests. "
"Optional argument: glob (default: '*')."
),
) )
group.addoption( group.addoption(
"--cache-clear", "--cache-clear",
@ -369,11 +373,16 @@ def cacheshow(config, session):
if not config.cache._cachedir.is_dir(): if not config.cache._cachedir.is_dir():
tw.line("cache is empty") tw.line("cache is empty")
return 0 return 0
glob = config.option.cacheshow[0]
if glob is None:
glob = "*"
dummy = object() dummy = object()
basedir = config.cache._cachedir basedir = config.cache._cachedir
vdir = basedir / "v" vdir = basedir / "v"
tw.sep("-", "cache values") tw.sep("-", "cache values for %r" % glob)
for valpath in sorted(x for x in vdir.rglob("*") if x.is_file()): for valpath in sorted(x for x in vdir.rglob(glob) if x.is_file()):
key = valpath.relative_to(vdir) key = valpath.relative_to(vdir)
val = config.cache.get(key, dummy) val = config.cache.get(key, dummy)
if val is dummy: if val is dummy:
@ -385,8 +394,8 @@ def cacheshow(config, session):
ddir = basedir / "d" ddir = basedir / "d"
if ddir.is_dir(): if ddir.is_dir():
contents = sorted(ddir.rglob("*")) contents = sorted(ddir.rglob(glob))
tw.sep("-", "cache directories") tw.sep("-", "cache directories for %r" % glob)
for p in contents: for p in contents:
# if p.check(dir=1): # if p.check(dir=1):
# print("%s/" % p.relto(basedir)) # print("%s/" % p.relto(basedir))
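A hedged usage sketch of the now-optional glob, driven through ``pytest.main`` to keep the example in Python (``cache/lastfailed`` is the key the last-failed plugin writes):

.. code-block:: python

    import pytest

    # bare --cache-show keeps the old behaviour (glob defaults to "*");
    # an argument narrows both the values and directories listings
    pytest.main(["--cache-show", "cache/lastfailed"])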
View File
@ -282,7 +282,6 @@ class PytestPluginManager(PluginManager):
known_marks = {m.name for m in getattr(method, "pytestmark", [])} known_marks = {m.name for m in getattr(method, "pytestmark", [])}
for name in ("tryfirst", "trylast", "optionalhook", "hookwrapper"): for name in ("tryfirst", "trylast", "optionalhook", "hookwrapper"):
opts.setdefault(name, hasattr(method, name) or name in known_marks) opts.setdefault(name, hasattr(method, name) or name in known_marks)
return opts return opts
View File
@ -10,31 +10,18 @@ from doctest import UnexpectedException
from _pytest import outcomes from _pytest import outcomes
from _pytest.config import hookimpl from _pytest.config import hookimpl
from _pytest.config.exceptions import UsageError
def _validate_usepdb_cls(value): def _validate_usepdb_cls(value):
"""Validate syntax of --pdbcls option."""
try: try:
modname, classname = value.split(":") modname, classname = value.split(":")
except ValueError: except ValueError:
raise argparse.ArgumentTypeError( raise argparse.ArgumentTypeError(
"{!r} is not in the format 'modname:classname'".format(value) "{!r} is not in the format 'modname:classname'".format(value)
) )
return (modname, classname)
try:
__import__(modname)
mod = sys.modules[modname]
# Handle --pdbcls=pdb:pdb.Pdb (useful e.g. with pdbpp).
parts = classname.split(".")
pdb_cls = getattr(mod, parts[0])
for part in parts[1:]:
pdb_cls = getattr(pdb_cls, part)
return pdb_cls
except Exception as exc:
raise argparse.ArgumentTypeError(
"could not get pdb class for {!r}: {}".format(value, exc)
)
def pytest_addoption(parser): def pytest_addoption(parser):
@ -68,9 +55,28 @@ def pytest_addoption(parser):
) )
def _import_pdbcls(modname, classname):
try:
__import__(modname)
mod = sys.modules[modname]
# Handle --pdbcls=pdb:pdb.Pdb (useful e.g. with pdbpp).
parts = classname.split(".")
pdb_cls = getattr(mod, parts[0])
for part in parts[1:]:
pdb_cls = getattr(pdb_cls, part)
return pdb_cls
except Exception as exc:
value = ":".join((modname, classname))
raise UsageError("--pdbcls: could not import {!r}: {}".format(value, exc))
def pytest_configure(config): def pytest_configure(config):
pdb_cls = config.getvalue("usepdb_cls") pdb_cls = config.getvalue("usepdb_cls")
if not pdb_cls: if pdb_cls:
pdb_cls = _import_pdbcls(*pdb_cls)
else:
pdb_cls = pdb.Pdb pdb_cls = pdb.Pdb
if config.getvalue("trace"): if config.getvalue("trace"):
@ -250,7 +256,7 @@ def _test_pytest_function(pyfuncitem):
_pdb = pytestPDB._init_pdb() _pdb = pytestPDB._init_pdb()
testfunction = pyfuncitem.obj testfunction = pyfuncitem.obj
pyfuncitem.obj = _pdb.runcall pyfuncitem.obj = _pdb.runcall
if "func" in pyfuncitem._fixtureinfo.argnames: # noqa if "func" in pyfuncitem._fixtureinfo.argnames: # pragma: no branch
raise ValueError("--trace can't be used with a fixture named func!") raise ValueError("--trace can't be used with a fixture named func!")
pyfuncitem.funcargs["func"] = testfunction pyfuncitem.funcargs["func"] = testfunction
new_list = list(pyfuncitem._fixtureinfo.argnames) new_list = list(pyfuncitem._fixtureinfo.argnames)
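A hedged sketch of what the split buys: bad ``modname:classname`` syntax still fails at argument parsing, while the import is deferred to ``pytest_configure`` so local modules resolve (``IPython.terminal.debugger:TerminalPdb`` is only an illustrative class):

.. code-block:: python

    import pytest

    # equivalent to `pytest --pdbcls=IPython.terminal.debugger:TerminalPdb`;
    # an unimportable module now surfaces as a UsageError at configure time
    pytest.main(["--pdbcls=IPython.terminal.debugger:TerminalPdb"])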
View File
@ -93,3 +93,9 @@ PYTEST_WARNS_UNKNOWN_KWARGS = UnformattedWarning(
"pytest.warns() got unexpected keyword arguments: {args!r}.\n" "pytest.warns() got unexpected keyword arguments: {args!r}.\n"
"This will be an error in future versions.", "This will be an error in future versions.",
) )
PYTEST_PARAM_UNKNOWN_KWARGS = UnformattedWarning(
PytestDeprecationWarning,
"pytest.param() got unexpected keyword arguments: {args!r}.\n"
"This will be an error in future versions.",
)
View File
@@ -853,7 +853,9 @@ class FixtureDef(object):
                     exceptions.append(sys.exc_info())
             if exceptions:
                 e = exceptions[0]
-                del exceptions  # ensure we don't keep all frames alive because of the traceback
+                del (
+                    exceptions
+                )  # ensure we don't keep all frames alive because of the traceback
                 six.reraise(*e)
         finally:

@@ -151,13 +151,14 @@ def showhelp(config):
     )
     tw.line()

+    columns = tw.fullwidth  # costly call
     for name in config._parser._ininames:
         help, type, default = config._parser._inidict[name]
         if type is None:
             type = "string"
         spec = "%s (%s)" % (name, type)
         line = "  %-24s %s" % (spec, help)
-        tw.line(line[: tw.fullwidth])
+        tw.line(line[:columns])

     tw.line()
     tw.line("environment variables:")

@@ -227,7 +227,7 @@ def pytest_collectreport(report):


 def pytest_deselected(items):
-    """ called for test items deselected by keyword. """
+    """ called for test items deselected, e.g. by keyword. """


 @hookspec(firstresult=True)

@@ -252,7 +252,14 @@ class _NodeReporter(object):
     def append_skipped(self, report):
         if hasattr(report, "wasxfail"):
-            self._add_simple(Junit.skipped, "expected test failure", report.wasxfail)
+            xfailreason = report.wasxfail
+            if xfailreason.startswith("reason: "):
+                xfailreason = xfailreason[8:]
+            self.append(
+                Junit.skipped(
+                    "", type="pytest.xfail", message=bin_xml_escape(xfailreason)
+                )
+            )
         else:
             filename, lineno, skipreason = report.longrepr
             if skipreason.startswith("Skipped: "):
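For reference, a sketch of the element this now produces for an xfailed test whose reason is "42" (shape inferred from the junitxml tests further down; previously the message was always "expected test failure"):

    # <skipped type="pytest.xfail" message="42"/>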

@@ -10,6 +10,7 @@ from ..compat import ascii_escaped
 from ..compat import getfslineno
 from ..compat import MappingMixin
 from ..compat import NOTSET
+from _pytest.deprecated import PYTEST_PARAM_UNKNOWN_KWARGS
 from _pytest.outcomes import fail
 from _pytest.warning_types import UnknownMarkWarning
@@ -61,20 +62,25 @@ def get_empty_parameterset_mark(config, argnames, func):
 class ParameterSet(namedtuple("ParameterSet", "values, marks, id")):
     @classmethod
-    def param(cls, *values, **kw):
-        marks = kw.pop("marks", ())
+    def param(cls, *values, **kwargs):
+        marks = kwargs.pop("marks", ())
         if isinstance(marks, MarkDecorator):
             marks = (marks,)
         else:
             assert isinstance(marks, (tuple, list, set))

-        id_ = kw.pop("id", None)
+        id_ = kwargs.pop("id", None)
         if id_ is not None:
             if not isinstance(id_, six.string_types):
                 raise TypeError(
                     "Expected id to be a string, got {}: {!r}".format(type(id_), id_)
                 )
             id_ = ascii_escaped(id_)
+
+        if kwargs:
+            warnings.warn(
+                PYTEST_PARAM_UNKNOWN_KWARGS.format(args=sorted(kwargs)), stacklevel=3
+            )
         return cls(values, marks, id_)

     @classmethod
@@ -298,7 +304,7 @@ class MarkGenerator(object):
             for line in self._config.getini("markers"):
                 # example lines: "skipif(condition): skip the given test if..."
                 # or "hypothesis: tests which use Hypothesis", so to get the
-                # marker name we we split on both `:` and `(`.
+                # marker name we split on both `:` and `(`.
                 marker = line.split(":")[0].split("(")[0].strip()
                 self._markers.add(marker)
@@ -306,7 +312,7 @@ class MarkGenerator(object):
         # then it really is time to issue a warning or an error.
         if name not in self._markers:
             if self._config.option.strict:
-                fail("{!r} not a registered marker".format(name), pytrace=False)
+                fail("{!r} is not a registered marker".format(name), pytrace=False)
             else:
                 warnings.warn(
                     "Unknown pytest.mark.%s - is this a typo? You can register "

@@ -271,6 +271,18 @@ class MonkeyPatch(object):
         # https://github.com/pypa/setuptools/blob/d8b901bc/docs/pkg_resources.txt#L162-L171
         fixup_namespace_packages(str(path))

+        # A call to syspathinsert() usually means that the caller wants to
+        # import some dynamically created files, thus with python3 we
+        # invalidate its import caches.
+        # This is especially important when any namespace package is in used,
+        # since then the mtime based FileFinder cache (that gets created in
+        # this case already) gets not invalidated when writing the new files
+        # quickly afterwards.
+        if sys.version_info >= (3, 3):
+            from importlib import invalidate_caches
+
+            invalidate_caches()
+
     def chdir(self, path):
         """ Change the current working directory to the specified path.
         Path can be a string or a py.path.local object.
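A small sketch of the failure mode the new invalidation addresses (`fresh_mod` is a hypothetical module written by the test itself):

    def test_import_freshly_written_module(monkeypatch, tmp_path):
        monkeypatch.syspath_prepend(str(tmp_path))
        # Without importlib.invalidate_caches() the mtime-based FileFinder
        # cache may still consider the directory empty at import time.
        (tmp_path / "fresh_mod.py").write_text(u"value = 1")
        import fresh_mod
        assert fresh_mod.value == 1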

@@ -97,8 +97,7 @@ def skip(msg="", **kwargs):
     __tracebackhide__ = True
     allow_module_level = kwargs.pop("allow_module_level", False)
     if kwargs:
-        keys = [k for k in kwargs.keys()]
-        raise TypeError("unexpected keyword arguments: {}".format(keys))
+        raise TypeError("unexpected keyword arguments: {}".format(sorted(kwargs)))
     raise Skipped(msg=msg, allow_module_level=allow_module_level)

@@ -76,8 +76,11 @@ def pytest_configure(config):


 def raise_on_kwargs(kwargs):
-    if kwargs:
-        raise TypeError("Unexpected arguments: {}".format(", ".join(sorted(kwargs))))
+    __tracebackhide__ = True
+    if kwargs:  # pragma: no branch
+        raise TypeError(
+            "Unexpected keyword arguments: {}".format(", ".join(sorted(kwargs)))
+        )


 class LsofFdLeakChecker(object):
@@ -309,7 +312,8 @@ class HookRecorder(object):
                     passed.append(rep)
             elif rep.skipped:
                 skipped.append(rep)
-            elif rep.failed:
+            else:
+                assert rep.failed, "Unexpected outcome: {!r}".format(rep)
                 failed.append(rep)
         return passed, skipped, failed
@@ -341,6 +345,15 @@ def testdir(request, tmpdir_factory):
     return Testdir(request, tmpdir_factory)


+@pytest.fixture
+def _sys_snapshot():
+    snappaths = SysPathsSnapshot()
+    snapmods = SysModulesSnapshot()
+    yield
+    snapmods.restore()
+    snappaths.restore()
+
+
 @pytest.fixture
 def _config_for_test():
     from _pytest.config import get_config
@@ -473,6 +486,8 @@ class Testdir(object):
     """

+    CLOSE_STDIN = object
+
     class TimeoutExpired(Exception):
         pass
@@ -613,27 +628,10 @@ class Testdir(object):
         This is undone automatically when this object dies at the end of each
         test.
         """
-        from pkg_resources import fixup_namespace_packages
-
         if path is None:
             path = self.tmpdir

-        dirname = str(path)
-        sys.path.insert(0, dirname)
-        fixup_namespace_packages(dirname)
-
-        # a call to syspathinsert() usually means that the caller wants to
-        # import some dynamically created files, thus with python3 we
-        # invalidate its import caches
-        self._possibly_invalidate_import_caches()
-
-    def _possibly_invalidate_import_caches(self):
-        # invalidate caches if we can (py33 and above)
-        try:
-            from importlib import invalidate_caches
-        except ImportError:
-            return
-        invalidate_caches()
+        self.monkeypatch.syspath_prepend(str(path))

     def mkdir(self, name):
         """Create a new (sub)directory."""
@@ -801,12 +799,15 @@ class Testdir(object):

         :param args: command line arguments to pass to :py:func:`pytest.main`

-        :param plugin: (keyword-only) extra plugin instances the
+        :param plugins: (keyword-only) extra plugin instances the
             ``pytest.main()`` instance should use

         :return: a :py:class:`HookRecorder` instance
-
         """
+        plugins = kwargs.pop("plugins", [])
+        no_reraise_ctrlc = kwargs.pop("no_reraise_ctrlc", None)
+        raise_on_kwargs(kwargs)
+
         finalizers = []
         try:
             # Do not load user config (during runs only).
@@ -846,7 +847,6 @@ class Testdir(object):
             def pytest_configure(x, config):
                 rec.append(self.make_hook_recorder(config.pluginmanager))

-        plugins = kwargs.get("plugins") or []
         plugins.append(Collect())
         ret = pytest.main(list(args), plugins=plugins)
         if len(rec) == 1:
@@ -860,7 +860,7 @@ class Testdir(object):

         # typically we reraise keyboard interrupts from the child run
         # because it's our user requesting interruption of the testing
-        if ret == EXIT_INTERRUPTED and not kwargs.get("no_reraise_ctrlc"):
+        if ret == EXIT_INTERRUPTED and not no_reraise_ctrlc:
             calls = reprec.getcalls("pytest_keyboard_interrupt")
             if calls and calls[-1].excinfo.type == KeyboardInterrupt:
                 raise KeyboardInterrupt()
@@ -872,9 +872,10 @@ class Testdir(object):
     def runpytest_inprocess(self, *args, **kwargs):
         """Return result of running pytest in-process, providing a similar
         interface to what self.runpytest() provides.
         """
-        if kwargs.get("syspathinsert"):
+        syspathinsert = kwargs.pop("syspathinsert", False)
+        if syspathinsert:
             self.syspathinsert()
         now = time.time()
         capture = MultiCapture(Capture=SysCapture)
@@ -1032,7 +1033,14 @@ class Testdir(object):
             if colitem.name == name:
                 return colitem

-    def popen(self, cmdargs, stdout, stderr, **kw):
+    def popen(
+        self,
+        cmdargs,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
+        stdin=CLOSE_STDIN,
+        **kw
+    ):
         """Invoke subprocess.Popen.

         This calls subprocess.Popen making sure the current working directory
@@ -1050,10 +1058,18 @@ class Testdir(object):
             env["USERPROFILE"] = env["HOME"]
         kw["env"] = env

-        popen = subprocess.Popen(
-            cmdargs, stdin=subprocess.PIPE, stdout=stdout, stderr=stderr, **kw
-        )
-        popen.stdin.close()
+        if stdin is Testdir.CLOSE_STDIN:
+            kw["stdin"] = subprocess.PIPE
+        elif isinstance(stdin, bytes):
+            kw["stdin"] = subprocess.PIPE
+        else:
+            kw["stdin"] = stdin
+
+        popen = subprocess.Popen(cmdargs, stdout=stdout, stderr=stderr, **kw)
+        if stdin is Testdir.CLOSE_STDIN:
+            popen.stdin.close()
+        elif isinstance(stdin, bytes):
+            popen.stdin.write(stdin)

         return popen
@@ -1065,6 +1081,10 @@ class Testdir(object):
         :param args: the sequence of arguments to pass to `subprocess.Popen()`
         :param timeout: the period in seconds after which to timeout and raise
             :py:class:`Testdir.TimeoutExpired`
+        :param stdin: optional standard input. Bytes are being send, closing
+            the pipe, otherwise it is passed through to ``popen``.
+            Defaults to ``CLOSE_STDIN``, which translates to using a pipe
+            (``subprocess.PIPE``) that gets closed.

         Returns a :py:class:`RunResult`.
@@ -1072,6 +1092,7 @@ class Testdir(object):
         """
         __tracebackhide__ = True
         timeout = kwargs.pop("timeout", None)
+        stdin = kwargs.pop("stdin", Testdir.CLOSE_STDIN)
         raise_on_kwargs(kwargs)

         cmdargs = [
@@ -1086,8 +1107,14 @@ class Testdir(object):
             try:
                 now = time.time()
                 popen = self.popen(
-                    cmdargs, stdout=f1, stderr=f2, close_fds=(sys.platform != "win32")
+                    cmdargs,
+                    stdin=stdin,
+                    stdout=f1,
+                    stderr=f2,
+                    close_fds=(sys.platform != "win32"),
                 )
+                if isinstance(stdin, bytes):
+                    popen.stdin.close()

                 def handle_timeout():
                     __tracebackhide__ = True
@@ -1173,9 +1200,10 @@ class Testdir(object):
             :py:class:`Testdir.TimeoutExpired`

         Returns a :py:class:`RunResult`.
-
         """
         __tracebackhide__ = True
+        timeout = kwargs.pop("timeout", None)
+        raise_on_kwargs(kwargs)
         p = py.path.local.make_numbered_dir(
             prefix="runpytest-", keep=None, rootdir=self.tmpdir
@@ -1185,7 +1213,7 @@ class Testdir(object):
         if plugins:
             args = ("-p", plugins[0]) + args
         args = self._getpytestargs() + args
-        return self.run(*args, timeout=kwargs.get("timeout"))
+        return self.run(*args, timeout=timeout)

     def spawn_pytest(self, string, expect_timeout=10.0):
         """Run pytest using pexpect.
@@ -1317,7 +1345,7 @@ class LineMatcher(object):
                 raise ValueError("line %r not found in output" % fnline)

     def _log(self, *args):
-        self._log_output.append(" ".join((str(x) for x in args)))
+        self._log_output.append(" ".join(str(x) for x in args))

     @property
     def _log_text(self):
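Taken together, stdin handling in `run()`/`popen()` now distinguishes three cases: the `CLOSE_STDIN` default (a pipe that is closed immediately, the old behavior), a bytes value (written to a pipe which is then closed), and anything else (passed through to `subprocess.Popen`, e.g. `None` or `subprocess.PIPE`). A short usage sketch in the spirit of the new tests in testing/test_pytester.py below:

    import sys

    def test_feed_stdin(testdir):
        result = testdir.run(
            sys.executable,
            "-c",
            "import sys; print(sys.stdin.read())",
            stdin=b"hello",
        )
        assert result.stdout.lines == ["hello"]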

@@ -682,7 +682,7 @@ def raises(expected_exception, *args, **kwargs):
             match_expr = kwargs.pop("match")
         if kwargs:
             msg = "Unexpected keyword arguments passed to pytest.raises: "
-            msg += ", ".join(kwargs.keys())
+            msg += ", ".join(sorted(kwargs))
             raise TypeError(msg)
         return RaisesContext(expected_exception, message, match_expr)
     elif isinstance(args[0], str):

@@ -148,6 +148,12 @@ class BaseReport(object):
         fspath, lineno, domain = self.location
         return domain

+    def _get_verbose_word(self, config):
+        _category, _short, verbose = config.hook.pytest_report_teststatus(
+            report=self, config=config
+        )
+        return verbose
+
     def _to_json(self):
         """
         This was originally the serialize_report() function from xdist (ca03269).
@@ -328,7 +334,8 @@ class TestReport(BaseReport):
         self.__dict__.update(extra)

     def __repr__(self):
-        return "<TestReport %r when=%r outcome=%r>" % (
+        return "<%s %r when=%r outcome=%r>" % (
+            self.__class__.__name__,
             self.nodeid,
             self.when,
             self.outcome,

@@ -4,8 +4,6 @@ from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function

-import six
-
 from _pytest.config import hookimpl
 from _pytest.mark.evaluate import MarkEvaluator
 from _pytest.outcomes import fail
@@ -186,174 +184,3 @@ def pytest_report_teststatus(report):
             return "xfailed", "x", "XFAIL"
         elif report.passed:
             return "xpassed", "X", "XPASS"
-
-
-# called by the terminalreporter instance/plugin
-def pytest_terminal_summary(terminalreporter):
-    tr = terminalreporter
-    if not tr.reportchars:
-        return
-
-    lines = []
-    for char in tr.reportchars:
-        action = REPORTCHAR_ACTIONS.get(char, lambda tr, lines: None)
-        action(terminalreporter, lines)
-
-    if lines:
-        tr._tw.sep("=", "short test summary info")
-        for line in lines:
-            tr._tw.line(line)
-
-
-def _get_line_with_reprcrash_message(config, rep, termwidth):
-    """Get summary line for a report, trying to add reprcrash message."""
-    from wcwidth import wcswidth
-
-    verbose_word = _get_report_str(config, rep)
-    pos = _get_pos(config, rep)
-
-    line = "%s %s" % (verbose_word, pos)
-    len_line = wcswidth(line)
-    ellipsis, len_ellipsis = "...", 3
-    if len_line > termwidth - len_ellipsis:
-        # No space for an additional message.
-        return line
-
-    try:
-        msg = rep.longrepr.reprcrash.message
-    except AttributeError:
-        pass
-    else:
-        # Only use the first line.
-        i = msg.find("\n")
-        if i != -1:
-            msg = msg[:i]
-        len_msg = wcswidth(msg)
-
-        sep, len_sep = " - ", 3
-        max_len_msg = termwidth - len_line - len_sep
-        if max_len_msg >= len_ellipsis:
-            if len_msg > max_len_msg:
-                max_len_msg -= len_ellipsis
-                msg = msg[:max_len_msg]
-                while wcswidth(msg) > max_len_msg:
-                    msg = msg[:-1]
-                if six.PY2:
-                    # on python 2 systems with narrow unicode compilation, trying to
-                    # get a single character out of a multi-byte unicode character such as
-                    # u'😄' will result in a High Surrogate (U+D83D) character, which is
-                    # rendered as u'�'; in this case we just strip that character out as it
-                    # serves no purpose being rendered
-                    while msg.endswith(u"\uD83D"):
-                        msg = msg[:-1]
-                msg += ellipsis
-            line += sep + msg
-    return line
-
-
-def show_simple(terminalreporter, lines, stat):
-    failed = terminalreporter.stats.get(stat)
-    if failed:
-        config = terminalreporter.config
-        termwidth = terminalreporter.writer.fullwidth
-        for rep in failed:
-            line = _get_line_with_reprcrash_message(config, rep, termwidth)
-            lines.append(line)
-
-
-def show_xfailed(terminalreporter, lines):
-    xfailed = terminalreporter.stats.get("xfailed")
-    if xfailed:
-        config = terminalreporter.config
-        for rep in xfailed:
-            verbose_word = _get_report_str(config, rep)
-            pos = _get_pos(config, rep)
-            lines.append("%s %s" % (verbose_word, pos))
-            reason = rep.wasxfail
-            if reason:
-                lines.append("  " + str(reason))
-
-
-def show_xpassed(terminalreporter, lines):
-    xpassed = terminalreporter.stats.get("xpassed")
-    if xpassed:
-        config = terminalreporter.config
-        for rep in xpassed:
-            verbose_word = _get_report_str(config, rep)
-            pos = _get_pos(config, rep)
-            reason = rep.wasxfail
-            lines.append("%s %s %s" % (verbose_word, pos, reason))
-
-
-def folded_skips(skipped):
-    d = {}
-    for event in skipped:
-        key = event.longrepr
-        assert len(key) == 3, (event, key)
-        keywords = getattr(event, "keywords", {})
-        # folding reports with global pytestmark variable
-        # this is workaround, because for now we cannot identify the scope of a skip marker
-        # TODO: revisit after marks scope would be fixed
-        if (
-            event.when == "setup"
-            and "skip" in keywords
-            and "pytestmark" not in keywords
-        ):
-            key = (key[0], None, key[2])
-        d.setdefault(key, []).append(event)
-    values = []
-    for key, events in d.items():
-        values.append((len(events),) + key)
-    return values
-
-
-def show_skipped(terminalreporter, lines):
-    tr = terminalreporter
-    skipped = tr.stats.get("skipped", [])
-    if skipped:
-        fskips = folded_skips(skipped)
-        if fskips:
-            verbose_word = _get_report_str(terminalreporter.config, report=skipped[0])
-            for num, fspath, lineno, reason in fskips:
-                if reason.startswith("Skipped: "):
-                    reason = reason[9:]
-                if lineno is not None:
-                    lines.append(
-                        "%s [%d] %s:%d: %s"
-                        % (verbose_word, num, fspath, lineno + 1, reason)
-                    )
-                else:
-                    lines.append("%s [%d] %s: %s" % (verbose_word, num, fspath, reason))
-
-
-def shower(stat):
-    def show_(terminalreporter, lines):
-        return show_simple(terminalreporter, lines, stat)
-
-    return show_
-
-
-def _get_report_str(config, report):
-    _category, _short, verbose = config.hook.pytest_report_teststatus(
-        report=report, config=config
-    )
-    return verbose
-
-
-def _get_pos(config, rep):
-    nodeid = config.cwd_relative_nodeid(rep.nodeid)
-    return nodeid
-
-
-REPORTCHAR_ACTIONS = {
-    "x": show_xfailed,
-    "X": show_xpassed,
-    "f": shower("failed"),
-    "F": shower("failed"),
-    "s": show_skipped,
-    "S": show_skipped,
-    "p": shower("passed"),
-    "E": shower("error"),
-}

@@ -11,6 +11,7 @@ import collections
 import platform
 import sys
 import time
+from functools import partial

 import attr
 import pluggy
@@ -81,11 +82,11 @@ def pytest_addoption(parser):
         dest="reportchars",
         default="",
         metavar="chars",
-        help="show extra test summary info as specified by chars (f)ailed, "
-        "(E)error, (s)skipped, (x)failed, (X)passed, "
-        "(p)passed, (P)passed with output, (a)all except pP. "
+        help="show extra test summary info as specified by chars: (f)ailed, "
+        "(E)rror, (s)kipped, (x)failed, (X)passed, "
+        "(p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. "
         "Warnings are displayed at all times except when "
-        "--disable-warnings is set",
+        "--disable-warnings is set.",
     )
     group._addoption(
         "--disable-warnings",
@@ -140,7 +141,7 @@ def pytest_addoption(parser):
     parser.addini(
         "console_output_style",
-        help="console output: classic or with additional progress information (classic|progress).",
+        help='console output: "classic", or with additional progress information ("progress" (percentage) | "count").',
         default="progress",
     )
@@ -164,12 +165,14 @@ def getreportopt(config):
         reportchars += "w"
     elif config.option.disable_warnings and "w" in reportchars:
         reportchars = reportchars.replace("w", "")
-    if reportchars:
-        for char in reportchars:
-            if char not in reportopts and char != "a":
-                reportopts += char
-            elif char == "a":
-                reportopts = "sxXwEf"
+    for char in reportchars:
+        if char == "a":
+            reportopts = "sxXwEf"
+        elif char == "A":
+            reportopts = "sxXwEfpP"
+            break
+        elif char not in reportopts:
+            reportopts += char
     return reportopts
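In the rewritten loop, "a" assigns the base set and later characters can still extend it, while the new "A" short-circuits. Worked examples (consistent with the updated test_getreportopt further down):

    # -ra   -> "sxXwEf"    everything except passed
    # -rA   -> "sxXwEfpP"  everything, including passed; remaining chars ignored
    # -rsf  -> "sf"        individual chars accumulate, without duplicates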
@@ -254,7 +257,10 @@ class TerminalReporter(object):
         # do not show progress if we are showing fixture setup/teardown
         if self.config.getoption("setupshow", False):
             return False
-        return self.config.getini("console_output_style") in ("progress", "count")
+        cfg = self.config.getini("console_output_style")
+        if cfg in ("progress", "count"):
+            return cfg
+        return False

     @property
     def verbosity(self):
@@ -438,18 +444,18 @@ class TerminalReporter(object):
         self.currentfspath = -2

     def pytest_runtest_logfinish(self, nodeid):
-        if self.config.getini("console_output_style") == "count":
-            num_tests = self._session.testscollected
-            progress_length = len(" [{}/{}]".format(str(num_tests), str(num_tests)))
-        else:
-            progress_length = len(" [100%]")
-
         if self.verbosity <= 0 and self._show_progress_info:
+            if self._show_progress_info == "count":
+                num_tests = self._session.testscollected
+                progress_length = len(" [{}/{}]".format(str(num_tests), str(num_tests)))
+            else:
+                progress_length = len(" [100%]")
+
             self._progress_nodeids_reported.add(nodeid)
-            last_item = (
+            is_last_item = (
                 len(self._progress_nodeids_reported) == self._session.testscollected
             )
-            if last_item:
+            if is_last_item:
                 self._write_progress_information_filling_space()
             else:
                 w = self._width_of_current_line
@@ -460,7 +466,7 @@ class TerminalReporter(object):

     def _get_progress_information_message(self):
         collected = self._session.testscollected
-        if self.config.getini("console_output_style") == "count":
+        if self._show_progress_info == "count":
             if collected:
                 progress = self._progress_nodeids_reported
                 counter_format = "{{:{}d}}".format(len(str(collected)))
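Both progress hunks now read the style from `self._show_progress_info`, which carries the ini value itself. For context, the option being wired through, sketched as comments ("count" renders a counter such as " [ 3/75]" where "progress" shows a percentage):

    # pytest.ini
    # [pytest]
    # console_output_style = count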
@@ -677,8 +683,9 @@ class TerminalReporter(object):
         self.summary_errors()
         self.summary_failures()
         self.summary_warnings()
-        yield
         self.summary_passes()
+        yield
+        self.short_test_summary()
         # Display any extra warnings from teardown here (if any).
         self.summary_warnings()
@@ -726,10 +733,10 @@ class TerminalReporter(object):
         return res + " "

     def _getfailureheadline(self, rep):
-        if rep.head_line:
-            return rep.head_line
-        else:
-            return "test session"  # XXX?
+        head_line = rep.head_line
+        if head_line:
+            return head_line
+        return "test session"  # XXX?

     def _getcrashline(self, rep):
         try:
@@ -820,17 +827,22 @@ class TerminalReporter(object):
             if not reports:
                 return
             self.write_sep("=", "FAILURES")
-            for rep in reports:
-                if self.config.option.tbstyle == "line":
+            if self.config.option.tbstyle == "line":
+                for rep in reports:
                     line = self._getcrashline(rep)
                     self.write_line(line)
-                else:
+            else:
+                teardown_sections = {}
+                for report in self.getreports(""):
+                    if report.when == "teardown":
+                        teardown_sections.setdefault(report.nodeid, []).append(report)
+
+                for rep in reports:
                     msg = self._getfailureheadline(rep)
                     self.write_sep("_", msg, red=True, bold=True)
                     self._outrep_summary(rep)
-                    for report in self.getreports(""):
-                        if report.nodeid == rep.nodeid and report.when == "teardown":
-                            self.print_teardown_sections(report)
+                    for report in teardown_sections.get(rep.nodeid, []):
+                        self.print_teardown_sections(report)

     def summary_errors(self):
         if self.config.option.tbstyle != "no":
@@ -842,10 +854,8 @@ class TerminalReporter(object):
                 msg = self._getfailureheadline(rep)
                 if rep.when == "collect":
                     msg = "ERROR collecting " + msg
-                elif rep.when == "setup":
-                    msg = "ERROR at setup of " + msg
-                elif rep.when == "teardown":
-                    msg = "ERROR at teardown of " + msg
+                else:
+                    msg = "ERROR at %s of %s" % (rep.when, msg)
                 self.write_sep("_", msg, red=True, bold=True)
                 self._outrep_summary(rep)
@@ -873,6 +883,150 @@ class TerminalReporter(object):
             if self.verbosity == -1:
                 self.write_line(msg, **markup)

+    def short_test_summary(self):
+        if not self.reportchars:
+            return
+
+        def show_simple(stat, lines):
+            failed = self.stats.get(stat, [])
+            if not failed:
+                return
+            termwidth = self.writer.fullwidth
+            config = self.config
+            for rep in failed:
+                line = _get_line_with_reprcrash_message(config, rep, termwidth)
+                lines.append(line)
+
+        def show_xfailed(lines):
+            xfailed = self.stats.get("xfailed", [])
+            for rep in xfailed:
+                verbose_word = rep._get_verbose_word(self.config)
+                pos = _get_pos(self.config, rep)
+                lines.append("%s %s" % (verbose_word, pos))
+                reason = rep.wasxfail
+                if reason:
+                    lines.append("  " + str(reason))
+
+        def show_xpassed(lines):
+            xpassed = self.stats.get("xpassed", [])
+            for rep in xpassed:
+                verbose_word = rep._get_verbose_word(self.config)
+                pos = _get_pos(self.config, rep)
+                reason = rep.wasxfail
+                lines.append("%s %s %s" % (verbose_word, pos, reason))
+
+        def show_skipped(lines):
+            skipped = self.stats.get("skipped", [])
+            fskips = _folded_skips(skipped) if skipped else []
+            if not fskips:
+                return
+            verbose_word = skipped[0]._get_verbose_word(self.config)
+            for num, fspath, lineno, reason in fskips:
+                if reason.startswith("Skipped: "):
+                    reason = reason[9:]
+                if lineno is not None:
+                    lines.append(
+                        "%s [%d] %s:%d: %s"
+                        % (verbose_word, num, fspath, lineno + 1, reason)
+                    )
+                else:
+                    lines.append("%s [%d] %s: %s" % (verbose_word, num, fspath, reason))
+
+        REPORTCHAR_ACTIONS = {
+            "x": show_xfailed,
+            "X": show_xpassed,
+            "f": partial(show_simple, "failed"),
+            "F": partial(show_simple, "failed"),
+            "s": show_skipped,
+            "S": show_skipped,
+            "p": partial(show_simple, "passed"),
+            "E": partial(show_simple, "error"),
+        }
+
+        lines = []
+        for char in self.reportchars:
+            action = REPORTCHAR_ACTIONS.get(char)
+            if action:  # skipping e.g. "P" (passed with output) here.
+                action(lines)
+
+        if lines:
+            self.write_sep("=", "short test summary info")
+            for line in lines:
+                self.write_line(line)
+
+
+def _get_pos(config, rep):
+    nodeid = config.cwd_relative_nodeid(rep.nodeid)
+    return nodeid
+
+
+def _get_line_with_reprcrash_message(config, rep, termwidth):
+    """Get summary line for a report, trying to add reprcrash message."""
+    from wcwidth import wcswidth
+
+    verbose_word = rep._get_verbose_word(config)
+    pos = _get_pos(config, rep)
+
+    line = "%s %s" % (verbose_word, pos)
+    len_line = wcswidth(line)
+    ellipsis, len_ellipsis = "...", 3
+    if len_line > termwidth - len_ellipsis:
+        # No space for an additional message.
+        return line
+
+    try:
+        msg = rep.longrepr.reprcrash.message
+    except AttributeError:
+        pass
+    else:
+        # Only use the first line.
+        i = msg.find("\n")
+        if i != -1:
+            msg = msg[:i]
+        len_msg = wcswidth(msg)
+
+        sep, len_sep = " - ", 3
+        max_len_msg = termwidth - len_line - len_sep
+        if max_len_msg >= len_ellipsis:
+            if len_msg > max_len_msg:
+                max_len_msg -= len_ellipsis
+                msg = msg[:max_len_msg]
+                while wcswidth(msg) > max_len_msg:
+                    msg = msg[:-1]
+                if six.PY2:
+                    # on python 2 systems with narrow unicode compilation, trying to
+                    # get a single character out of a multi-byte unicode character such as
+                    # u'😄' will result in a High Surrogate (U+D83D) character, which is
+                    # rendered as u'�'; in this case we just strip that character out as it
+                    # serves no purpose being rendered
+                    while msg.endswith(u"\uD83D"):
+                        msg = msg[:-1]
+                msg += ellipsis
+            line += sep + msg
+    return line
+
+
+def _folded_skips(skipped):
+    d = {}
+    for event in skipped:
+        key = event.longrepr
+        assert len(key) == 3, (event, key)
+        keywords = getattr(event, "keywords", {})
+        # folding reports with global pytestmark variable
+        # this is workaround, because for now we cannot identify the scope of a skip marker
+        # TODO: revisit after marks scope would be fixed
+        if (
+            event.when == "setup"
+            and "skip" in keywords
+            and "pytestmark" not in keywords
+        ):
+            key = (key[0], None, key[2])
+        d.setdefault(key, []).append(event)
+    values = []
+    for key, events in d.items():
+        values.append((len(events),) + key)
+    return values
+
+
 def build_summary_stats_line(stats):
     known_types = (
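A worked example of the truncation arithmetic in `_get_line_with_reprcrash_message`, assuming termwidth=80 and the summary line "FAILED test_x.py::test_a" (wcswidth 24):

    # max_len_msg = 80 - 24 - len(" - ") = 53
    # a 60-character crash message exceeds 53, so 3 columns are reserved for
    # the ellipsis, the message is cut to 50 and "..." is appended:
    # wcswidth("FAILED test_x.py::test_a - " + msg) == 24 + 3 + 53 == 80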

@@ -485,7 +485,7 @@ class TestGeneralUsage(object):
             ["*source code not available*", "E*fixture 'invalid_fixture' not found"]
         )

-    def test_plugins_given_as_strings(self, tmpdir, monkeypatch):
+    def test_plugins_given_as_strings(self, tmpdir, monkeypatch, _sys_snapshot):
         """test that str values passed to main() as `plugins` arg
         are interpreted as module names to be imported and registered.
         #855.

@@ -441,7 +441,7 @@ def test_match_raises_error(testdir):

 class TestFormattedExcinfo(object):
     @pytest.fixture
-    def importasmod(self, request):
+    def importasmod(self, request, _sys_snapshot):
         def importasmod(source):
             source = textwrap.dedent(source)
             tmpdir = request.getfixturevalue("tmpdir")

@@ -410,7 +410,7 @@ def test_deindent():
     assert lines == ["def f():", "    def g():", "        pass"]


-def test_source_of_class_at_eof_without_newline(tmpdir):
+def test_source_of_class_at_eof_without_newline(tmpdir, _sys_snapshot):
     # this test fails because the implicit inspect.getsource(A) below
     # does not return the "x = 1" last line.
     source = _pytest._code.Source(

testing/conftest.py (new file)

@@ -0,0 +1,26 @@
+def pytest_collection_modifyitems(config, items):
+    """Prefer faster tests."""
+    fast_items = []
+    slow_items = []
+    neutral_items = []
+
+    slow_fixturenames = ("testdir",)
+
+    for item in items:
+        try:
+            fixtures = item.fixturenames
+        except AttributeError:
+            # doctest at least
+            # (https://github.com/pytest-dev/pytest/issues/5070)
+            neutral_items.append(item)
+        else:
+            if any(x for x in fixtures if x in slow_fixturenames):
+                slow_items.append(item)
+            else:
+                marker = item.get_closest_marker("slow")
+                if marker:
+                    slow_items.append(item)
+                else:
+                    fast_items.append(item)
+
+    items[:] = fast_items + neutral_items + slow_items

@@ -1071,9 +1071,7 @@ class TestFixtureUsages(object):
         )
         result = testdir.runpytest_inprocess()
         result.stdout.fnmatch_lines(
-            (
-                "*Fixture 'badscope' from test_invalid_scope.py got an unexpected scope value 'functions'"
-            )
+            "*Fixture 'badscope' from test_invalid_scope.py got an unexpected scope value 'functions'"
         )

     def test_funcarg_parametrized_and_used_twice(self, testdir):

@@ -446,6 +446,50 @@ class TestAssert_reprcompare(object):
         assert "Omitting" not in lines[1]
         assert lines[2] == "{'b': 1}"

+    def test_dict_different_items(self):
+        lines = callequal({"a": 0}, {"b": 1, "c": 2}, verbose=2)
+        assert lines == [
+            "{'a': 0} == {'b': 1, 'c': 2}",
+            "Left contains 1 more item:",
+            "{'a': 0}",
+            "Right contains 2 more items:",
+            "{'b': 1, 'c': 2}",
+            "Full diff:",
+            "- {'a': 0}",
+            "+ {'b': 1, 'c': 2}",
+        ]
+        lines = callequal({"b": 1, "c": 2}, {"a": 0}, verbose=2)
+        assert lines == [
+            "{'b': 1, 'c': 2} == {'a': 0}",
+            "Left contains 2 more items:",
+            "{'b': 1, 'c': 2}",
+            "Right contains 1 more item:",
+            "{'a': 0}",
+            "Full diff:",
+            "- {'b': 1, 'c': 2}",
+            "+ {'a': 0}",
+        ]
+
+    def test_sequence_different_items(self):
+        lines = callequal((1, 2), (3, 4, 5), verbose=2)
+        assert lines == [
+            "(1, 2) == (3, 4, 5)",
+            "At index 0 diff: 1 != 3",
+            "Right contains one more item: 5",
+            "Full diff:",
+            "- (1, 2)",
+            "+ (3, 4, 5)",
+        ]
+        lines = callequal((1, 2, 3), (4,), verbose=2)
+        assert lines == [
+            "(1, 2, 3) == (4,)",
+            "At index 0 diff: 1 != 4",
+            "Left contains 2 more items, first extra item: 2",
+            "Full diff:",
+            "- (1, 2, 3)",
+            "+ (4,)",
+        ]
+
     def test_set(self):
         expl = callequal({0, 1}, {0, 2})
         assert len(expl) > 1

@@ -196,6 +196,7 @@ def test_cache_show(testdir):
         """
         def pytest_configure(config):
             config.cache.set("my/name", [1,2,3])
+            config.cache.set("my/hello", "world")
             config.cache.set("other/some", {1:2})
             dp = config.cache.makedir("mydb")
             dp.ensure("hello")
@@ -204,20 +205,39 @@ def test_cache_show(testdir):
     )
     result = testdir.runpytest()
     assert result.ret == 5  # no tests executed

     result = testdir.runpytest("--cache-show")
-    result.stdout.fnmatch_lines_random(
+    result.stdout.fnmatch_lines(
         [
             "*cachedir:*",
-            "-*cache values*-",
-            "*my/name contains:",
+            "*- cache values for '[*]' -*",
+            "cache/nodeids contains:",
+            "my/name contains:",
             "  [1, 2, 3]",
-            "*other/some contains*",
-            "  {*1*: 2}",
-            "-*cache directories*-",
+            "other/some contains:",
+            "  {*'1': 2}",
+            "*- cache directories for '[*]' -*",
             "*mydb/hello*length 0*",
             "*mydb/world*length 0*",
         ]
     )
+    assert result.ret == 0
+
+    result = testdir.runpytest("--cache-show", "*/hello")
+    result.stdout.fnmatch_lines(
+        [
+            "*cachedir:*",
+            "*- cache values for '[*]/hello' -*",
+            "my/hello contains:",
+            "  *'world'",
+            "*- cache directories for '[*]/hello' -*",
+            "d/mydb/hello*length 0*",
+        ]
+    )
+    stdout = result.stdout.str()
+    assert "other/some" not in stdout
+    assert "d/mydb/world" not in stdout
+    assert result.ret == 0


 class TestLastFailed(object):

@@ -819,15 +819,15 @@ def test_error_during_readouterr(testdir):
     testdir.makepyfile(
         pytest_xyz="""
         from _pytest.capture import FDCapture
+
         def bad_snap(self):
             raise Exception('boom')
+
         assert FDCapture.snap
         FDCapture.snap = bad_snap
     """
     )
-    result = testdir.runpytest_subprocess(
-        "-p", "pytest_xyz", "--version", syspathinsert=True
-    )
+    result = testdir.runpytest_subprocess("-p", "pytest_xyz", "--version")
     result.stderr.fnmatch_lines(
         ["*in bad_snap", "    raise Exception('boom')", "Exception: boom"]
     )

@@ -436,7 +436,7 @@ class TestConfigAPI(object):


 class TestConfigFromdictargs(object):
-    def test_basic_behavior(self):
+    def test_basic_behavior(self, _sys_snapshot):
         from _pytest.config import Config

         option_dict = {"verbose": 444, "foo": "bar", "capture": "no"}
@@ -450,7 +450,7 @@ class TestConfigFromdictargs(object):
         assert config.option.capture == "no"
         assert config.args == args

-    def test_origargs(self):
+    def test_origargs(self, _sys_snapshot):
         """Show that fromdictargs can handle args in their "orig" format"""
         from _pytest.config import Config
@@ -1057,7 +1057,7 @@ class TestOverrideIniArgs(object):
         assert rootdir == tmpdir
         assert inifile is None

-    def test_addopts_before_initini(self, monkeypatch, _config_for_test):
+    def test_addopts_before_initini(self, monkeypatch, _config_for_test, _sys_snapshot):
         cache_dir = ".custom_cache"
         monkeypatch.setenv("PYTEST_ADDOPTS", "-o cache_dir=%s" % cache_dir)
         config = _config_for_test
@@ -1092,7 +1092,7 @@ class TestOverrideIniArgs(object):
         )
         assert result.ret == _pytest.main.EXIT_USAGEERROR

-    def test_override_ini_does_not_contain_paths(self, _config_for_test):
+    def test_override_ini_does_not_contain_paths(self, _config_for_test, _sys_snapshot):
         """Check that -o no longer swallows all options after it (#3103)"""
         config = _config_for_test
         config._preparse(["-o", "cache_dir=/cache", "/some/test/path"])

@@ -13,17 +13,6 @@ from _pytest.main import EXIT_OK
 from _pytest.main import EXIT_USAGEERROR


-@pytest.fixture(scope="module", params=["global", "inpackage"])
-def basedir(request, tmpdir_factory):
-    tmpdir = tmpdir_factory.mktemp("basedir", numbered=True)
-    tmpdir.ensure("adir/conftest.py").write("a=1 ; Directory = 3")
-    tmpdir.ensure("adir/b/conftest.py").write("b=2 ; a = 1.5")
-    if request.param == "inpackage":
-        tmpdir.ensure("adir/__init__.py")
-        tmpdir.ensure("adir/b/__init__.py")
-    return tmpdir
-
-
 def ConftestWithSetinitial(path):
     conftest = PytestPluginManager()
     conftest_setinitial(conftest, [path])
@@ -41,7 +30,19 @@ def conftest_setinitial(conftest, args, confcutdir=None):
     conftest._set_initial_conftests(Namespace())


+@pytest.mark.usefixtures("_sys_snapshot")
 class TestConftestValueAccessGlobal(object):
+    @pytest.fixture(scope="module", params=["global", "inpackage"])
+    def basedir(self, request, tmpdir_factory):
+        tmpdir = tmpdir_factory.mktemp("basedir", numbered=True)
+        tmpdir.ensure("adir/conftest.py").write("a=1 ; Directory = 3")
+        tmpdir.ensure("adir/b/conftest.py").write("b=2 ; a = 1.5")
+        if request.param == "inpackage":
+            tmpdir.ensure("adir/__init__.py")
+            tmpdir.ensure("adir/b/__init__.py")
+
+        yield tmpdir
+
     def test_basic_init(self, basedir):
         conftest = PytestPluginManager()
         p = basedir.join("adir")
@@ -49,10 +50,10 @@ class TestConftestValueAccessGlobal(object):

     def test_immediate_initialiation_and_incremental_are_the_same(self, basedir):
         conftest = PytestPluginManager()
-        len(conftest._dirpath2confmods)
+        assert not len(conftest._dirpath2confmods)
         conftest._getconftestmodules(basedir)
         snap1 = len(conftest._dirpath2confmods)
-        # assert len(conftest._dirpath2confmods) == snap1 + 1
+        assert snap1 == 1
         conftest._getconftestmodules(basedir.join("adir"))
         assert len(conftest._dirpath2confmods) == snap1 + 1
         conftest._getconftestmodules(basedir.join("b"))
@@ -80,7 +81,7 @@ class TestConftestValueAccessGlobal(object):
         assert path.purebasename.startswith("conftest")


-def test_conftest_in_nonpkg_with_init(tmpdir):
+def test_conftest_in_nonpkg_with_init(tmpdir, _sys_snapshot):
     tmpdir.ensure("adir-1.0/conftest.py").write("a=1 ; Directory = 3")
     tmpdir.ensure("adir-1.0/b/conftest.py").write("b=2 ; a = 1.5")
     tmpdir.ensure("adir-1.0/b/__init__.py")

@@ -485,9 +485,27 @@ class TestPython(object):
         tnode = node.find_first_by_tag("testcase")
         tnode.assert_attr(classname="test_xfailure_function", name="test_xfail")
         fnode = tnode.find_first_by_tag("skipped")
-        fnode.assert_attr(message="expected test failure")
+        fnode.assert_attr(type="pytest.xfail", message="42")
         # assert "ValueError" in fnode.toxml()

+    def test_xfailure_marker(self, testdir):
+        testdir.makepyfile(
+            """
+            import pytest
+            @pytest.mark.xfail(reason="42")
+            def test_xfail():
+                assert False
+        """
+        )
+        result, dom = runandparse(testdir)
+        assert not result.ret
+        node = dom.find_first_by_tag("testsuite")
+        node.assert_attr(skipped=1, tests=1)
+        tnode = node.find_first_by_tag("testcase")
+        tnode.assert_attr(classname="test_xfailure_marker", name="test_xfail")
+        fnode = tnode.find_first_by_tag("skipped")
+        fnode.assert_attr(type="pytest.xfail", message="42")
+
     def test_xfail_captures_output_once(self, testdir):
         testdir.makepyfile(
             """

@@ -13,6 +13,7 @@ from _pytest.mark import EMPTY_PARAMETERSET_OPTION
 from _pytest.mark import MarkGenerator as Mark
 from _pytest.nodes import Collector
 from _pytest.nodes import Node
+from _pytest.warning_types import PytestDeprecationWarning
 from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG

 try:
@@ -204,7 +205,7 @@ def test_strict_prohibits_unregistered_markers(testdir):
     )
     result = testdir.runpytest("--strict")
     assert result.ret != 0
-    result.stdout.fnmatch_lines(["'unregisteredmark' not a registered marker"])
+    result.stdout.fnmatch_lines(["'unregisteredmark' is not a registered marker"])


 @pytest.mark.parametrize(
@@ -991,3 +992,15 @@ def test_pytest_param_id_requires_string():
 @pytest.mark.parametrize("s", (None, "hello world"))
 def test_pytest_param_id_allows_none_or_string(s):
     assert pytest.param(id=s)
+
+
+def test_pytest_param_warning_on_unknown_kwargs():
+    with pytest.warns(PytestDeprecationWarning) as warninfo:
+        # typo, should be marks=
+        pytest.param(1, 2, mark=pytest.mark.xfail())
+    assert warninfo[0].filename == __file__
+    msg, = warninfo[0].message.args
+    assert msg == (
+        "pytest.param() got unexpected keyword arguments: ['mark'].\n"
+        "This will be an error in future versions."
+    )

@@ -6,6 +6,8 @@ import py
 import _pytest
 import pytest

+pytestmark = pytest.mark.slow
+
 MODSET = [
     x
     for x in py.path.local(_pytest.__file__).dirpath().visit("*.py")

@@ -462,3 +462,10 @@ def test_syspath_prepend_with_namespace_packages(testdir, monkeypatch):

     import ns_pkg.world
     assert ns_pkg.world.check() == "world"
+
+    # Should invalidate caches via importlib.invalidate_caches.
+    tmpdir = testdir.tmpdir
+    modules_tmpdir = tmpdir.mkdir("modules_tmpdir")
+    monkeypatch.syspath_prepend(str(modules_tmpdir))
+    modules_tmpdir.join("main_app.py").write("app = True")
+    from main_app import app  # noqa: F401

@@ -2,7 +2,6 @@ from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function

-import argparse
 import os
 import platform
 import sys
@@ -804,13 +803,12 @@ class TestPDB(object):
         )

     def test_pdb_validate_usepdb_cls(self, testdir):
-        assert _validate_usepdb_cls("os.path:dirname.__name__") == "dirname"
-
-        with pytest.raises(
-            argparse.ArgumentTypeError,
-            match=r"^could not get pdb class for 'pdb:DoesNotExist': .*'DoesNotExist'",
-        ):
-            _validate_usepdb_cls("pdb:DoesNotExist")
+        assert _validate_usepdb_cls("os.path:dirname.__name__") == (
+            "os.path",
+            "dirname.__name__",
+        )
+        assert _validate_usepdb_cls("pdb:DoesNotExist") == ("pdb", "DoesNotExist")

     def test_pdb_custom_cls_without_pdb(self, testdir, custom_pdb_calls):
         p1 = testdir.makepyfile("""xxx """)
@@ -1136,3 +1134,46 @@ def test_pdb_skip_option(testdir):
     result = testdir.runpytest_inprocess("--pdb-ignore-set_trace", "-s", p)
     assert result.ret == EXIT_NOTESTSCOLLECTED
     result.stdout.fnmatch_lines(["*before_set_trace*", "*after_set_trace*"])
+
+
+def test_pdbcls_via_local_module(testdir):
+    """It should be imported in pytest_configure or later only."""
+    p1 = testdir.makepyfile(
+        """
+        def test():
+            print("before_settrace")
+            __import__("pdb").set_trace()
+        """,
+        mypdb="""
+        class Wrapped:
+            class MyPdb:
+                def set_trace(self, *args):
+                    print("settrace_called", args)
+
+                def runcall(self, *args, **kwds):
+                    print("runcall_called", args, kwds)
+                    assert "func" in kwds
+        """,
+    )
+    result = testdir.runpytest(
+        str(p1), "--pdbcls=really.invalid:Value", syspathinsert=True
+    )
+    result.stderr.fnmatch_lines(
+        [
+            "ERROR: --pdbcls: could not import 'really.invalid:Value': No module named *really*"
+        ]
+    )
+    assert result.ret == 4
+
+    result = testdir.runpytest(
+        str(p1), "--pdbcls=mypdb:Wrapped.MyPdb", syspathinsert=True
+    )
+    assert result.ret == 0
+    result.stdout.fnmatch_lines(["*settrace_called*", "* 1 passed in *"])
+
+    # Ensure that it also works with --trace.
+    result = testdir.runpytest(
+        str(p1), "--pdbcls=mypdb:Wrapped.MyPdb", "--trace", syspathinsert=True
+    )
+    assert result.ret == 0
+    result.stdout.fnmatch_lines(["*runcall_called*", "* 1 passed in *"])

@@ -4,6 +4,7 @@ from __future__ import division
 from __future__ import print_function

 import os
+import subprocess
 import sys
 import time
@@ -482,3 +483,79 @@ def test_pytester_addopts(request, monkeypatch):

     testdir.finalize()
     assert os.environ["PYTEST_ADDOPTS"] == "--orig-unused"
+
+
+def test_run_stdin(testdir):
+    with pytest.raises(testdir.TimeoutExpired):
+        testdir.run(
+            sys.executable,
+            "-c",
+            "import sys, time; time.sleep(1); print(sys.stdin.read())",
+            stdin=subprocess.PIPE,
+            timeout=0.1,
+        )
+
+    with pytest.raises(testdir.TimeoutExpired):
+        result = testdir.run(
+            sys.executable,
+            "-c",
+            "import sys, time; time.sleep(1); print(sys.stdin.read())",
+            stdin=b"input\n2ndline",
+            timeout=0.1,
+        )
+
+    result = testdir.run(
+        sys.executable,
+        "-c",
+        "import sys; print(sys.stdin.read())",
+        stdin=b"input\n2ndline",
+    )
+    assert result.stdout.lines == ["input", "2ndline"]
+    assert result.stderr.str() == ""
+    assert result.ret == 0
+
+
+def test_popen_stdin_pipe(testdir):
+    proc = testdir.popen(
+        [sys.executable, "-c", "import sys; print(sys.stdin.read())"],
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
+        stdin=subprocess.PIPE,
+    )
+    stdin = b"input\n2ndline"
+    stdout, stderr = proc.communicate(input=stdin)
+    assert stdout.decode("utf8").splitlines() == ["input", "2ndline"]
+    assert stderr == b""
+    assert proc.returncode == 0
+
+
+def test_popen_stdin_bytes(testdir):
+    proc = testdir.popen(
+        [sys.executable, "-c", "import sys; print(sys.stdin.read())"],
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
+        stdin=b"input\n2ndline",
+    )
+    stdout, stderr = proc.communicate()
+    assert stdout.decode("utf8").splitlines() == ["input", "2ndline"]
+    assert stderr == b""
+    assert proc.returncode == 0
+
+
+def test_popen_default_stdin_stderr_and_stdin_None(testdir):
+    # stdout, stderr default to pipes,
+    # stdin can be None to not close the pipe, avoiding
+    # "ValueError: flush of closed file" with `communicate()`.
+    p1 = testdir.makepyfile(
+        """
+        import sys
+        print(sys.stdin.read())  # empty
+        print('stdout')
+        sys.stderr.write('stderr')
+        """
+    )
+    proc = testdir.popen([sys.executable, str(p1)], stdin=None)
+    stdout, stderr = proc.communicate(b"ignored")
+    assert stdout.splitlines() == [b"", b"stdout"]
+    assert stderr.splitlines() == [b"stderr"]
+    assert proc.returncode == 0

@@ -581,7 +581,14 @@ def test_pytest_exit_returncode(testdir):
     )
     result = testdir.runpytest()
     result.stdout.fnmatch_lines(["*! *Exit: some exit msg !*"])
-    assert result.stderr.lines == [""]
+    # Assert no output on stderr, except for unreliable ResourceWarnings.
+    # (https://github.com/pytest-dev/pytest/issues/5088)
+    assert [
+        x
+        for x in result.stderr.lines
+        if not x.startswith("Exception ignored in:")
+        and not x.startswith("ResourceWarning")
+    ] == [""]
     assert result.ret == 99

     # It prints to stderr also in case of exit during pytest_sessionstart.

@@ -7,7 +7,6 @@ import sys

 import pytest
 from _pytest.runner import runtestprotocol
-from _pytest.skipping import folded_skips
 from _pytest.skipping import MarkEvaluator
 from _pytest.skipping import pytest_runtest_setup
@@ -750,40 +749,6 @@ def test_skipif_class(testdir):
     result.stdout.fnmatch_lines(["*2 skipped*"])


-def test_skip_reasons_folding():
-    path = "xyz"
-    lineno = 3
-    message = "justso"
-    longrepr = (path, lineno, message)
-
-    class X(object):
-        pass
-
-    ev1 = X()
-    ev1.when = "execute"
-    ev1.skipped = True
-    ev1.longrepr = longrepr
-
-    ev2 = X()
-    ev2.when = "execute"
-    ev2.longrepr = longrepr
-    ev2.skipped = True
-
-    # ev3 might be a collection report
-    ev3 = X()
-    ev3.when = "collect"
-    ev3.longrepr = longrepr
-    ev3.skipped = True
-
-    values = folded_skips([ev1, ev2, ev3])
-    assert len(values) == 1
-    num, fspath, lineno, reason = values[0]
-    assert num == 3
-    assert fspath == path
-    assert lineno == lineno
-    assert reason == message
-
-
 def test_skipped_reasons_functional(testdir):
     testdir.makepyfile(
         test_one="""

@@ -16,6 +16,7 @@ import py
 import pytest
 from _pytest.main import EXIT_NOTESTSCOLLECTED
 from _pytest.reports import BaseReport
+from _pytest.terminal import _folded_skips
 from _pytest.terminal import _plugin_nameversions
 from _pytest.terminal import build_summary_stats_line
 from _pytest.terminal import getreportopt
@@ -774,11 +775,19 @@ def test_pass_output_reporting(testdir):
     assert "test_pass_has_output" not in s
     assert "Four score and seven years ago..." not in s
     assert "test_pass_no_output" not in s
-    result = testdir.runpytest("-rP")
+    result = testdir.runpytest("-rPp")
     result.stdout.fnmatch_lines(
-        ["*test_pass_has_output*", "Four score and seven years ago..."]
+        [
+            "*= PASSES =*",
+            "*_ test_pass_has_output _*",
+            "*- Captured stdout call -*",
+            "Four score and seven years ago...",
+            "*= short test summary info =*",
+            "PASSED test_pass_output_reporting.py::test_pass_has_output",
+            "PASSED test_pass_output_reporting.py::test_pass_no_output",
+            "*= 2 passed in *",
+        ]
     )
-    assert "test_pass_no_output" not in result.stdout.str()


 def test_color_yes(testdir):
@@ -836,14 +845,23 @@ def test_getreportopt():
     config.option.reportchars = "sfxw"
     assert getreportopt(config) == "sfx"

-    config.option.reportchars = "sfx"
+    # Now with --disable-warnings.
     config.option.disable_warnings = False
+    config.option.reportchars = "a"
+    assert getreportopt(config) == "sxXwEf"  # NOTE: "w" included!
+
+    config.option.reportchars = "sfx"
     assert getreportopt(config) == "sfxw"

     config.option.reportchars = "sfxw"
-    config.option.disable_warnings = False
     assert getreportopt(config) == "sfxw"

+    config.option.reportchars = "a"
+    assert getreportopt(config) == "sxXwEf"  # NOTE: "w" included!
+
+    config.option.reportchars = "A"
+    assert getreportopt(config) == "sxXwEfpP"
+

 def test_terminalreporter_reportopt_addopts(testdir):
     testdir.makeini("[pytest]\naddopts=-rs")
@@ -1530,3 +1548,37 @@ class TestProgressWithTeardown(object):
         monkeypatch.delenv("PYTEST_DISABLE_PLUGIN_AUTOLOAD", raising=False)
         output = testdir.runpytest("-n2")
         output.stdout.re_match_lines([r"[\.E]{40} \s+ \[100%\]"])
+
+
+def test_skip_reasons_folding():
+    path = "xyz"
+    lineno = 3
+    message = "justso"
+    longrepr = (path, lineno, message)
+
+    class X(object):
+        pass
+
+    ev1 = X()
+    ev1.when = "execute"
+    ev1.skipped = True
+    ev1.longrepr = longrepr
+
+    ev2 = X()
+    ev2.when = "execute"
+    ev2.longrepr = longrepr
+    ev2.skipped = True
+
+    # ev3 might be a collection report
+    ev3 = X()
+    ev3.when = "collect"
+    ev3.longrepr = longrepr
+    ev3.skipped = True
+
+    values = _folded_skips([ev1, ev2, ev3])
+    assert len(values) == 1
+    num, fspath, lineno, reason = values[0]
+    assert num == 3
+    assert fspath == path
+    assert lineno == lineno
+    assert reason == message

@@ -171,6 +171,7 @@ filterwarnings =
 pytester_example_dir = testing/example_scripts
 markers =
     issue
+    slow

 [flake8]
 max-line-length = 120