Merge remote-tracking branch 'upstream/features' into update-deprecations-docs

commit 63f38de38e
Author: Bruno Oliveira
Date: 2018-12-01 14:15:33 -02:00
107 changed files with 1377 additions and 1277 deletions

.gitignore

@@ -44,3 +44,4 @@ coverage.xml
 .pydevproject
 .project
 .settings
+.vscode

@@ -11,6 +11,7 @@ Alan Velasco
 Alexander Johnson
 Alexei Kozlenok
 Allan Feldman
+Aly Sivji
 Anatoly Bubenkoff
 Anders Hovmöller
 Andras Tim
@@ -158,6 +159,7 @@ Michael Droettboom
 Michael Seifert
 Michal Wajszczuk
 Mihai Capotă
+Mike Hoyle (hoylemd)
 Mike Lundy
 Miro Hrončok
 Nathaniel Waisbrot

@@ -18,6 +18,43 @@ with advance notice in the **Deprecations** section of releases.
 .. towncrier release notes start
+
+pytest 4.0.1 (2018-11-23)
+=========================
+
+Bug Fixes
+---------
+
+- `#3952 <https://github.com/pytest-dev/pytest/issues/3952>`_: Display warnings before "short test summary info" again, but still later warnings in the end.
+- `#4386 <https://github.com/pytest-dev/pytest/issues/4386>`_: Handle uninitialized exceptioninfo in repr/str.
+- `#4393 <https://github.com/pytest-dev/pytest/issues/4393>`_: Do not create ``.gitignore``/``README.md`` files in existing cache directories.
+- `#4400 <https://github.com/pytest-dev/pytest/issues/4400>`_: Rearrange warning handling for the yield test errors so the opt-out in 4.0.x correctly works.
+- `#4405 <https://github.com/pytest-dev/pytest/issues/4405>`_: Fix collection of testpaths with ``--pyargs``.
+- `#4412 <https://github.com/pytest-dev/pytest/issues/4412>`_: Fix assertion rewriting involving ``Starred`` + side-effects.
+- `#4425 <https://github.com/pytest-dev/pytest/issues/4425>`_: Ensure we resolve the absolute path when the given ``--basetemp`` is a relative path.
+
+Trivial/Internal Changes
+------------------------
+
+- `#4315 <https://github.com/pytest-dev/pytest/issues/4315>`_: Use ``pkg_resources.parse_version`` instead of ``LooseVersion`` in minversion check.
+- `#4440 <https://github.com/pytest-dev/pytest/issues/4440>`_: Adjust the stack level of some internal pytest warnings.
+
 pytest 4.0.0 (2018-11-13)
 =========================

@@ -0,0 +1 @@
+Remove support for yield tests - they are fundamentally broken since collection and test execution were separated.

@@ -0,0 +1 @@
+Remove the deprecated compat properties for node.Class/Function/Module - use pytest... now.

@@ -0,0 +1 @@
+Richer equality comparison introspection on ``AssertionError`` for objects created using `attrs <http://www.attrs.org/en/stable/>`_ or `dataclasses <https://docs.python.org/3/library/dataclasses.html>`_ (Python 3.7+, `backported to 3.6 <https://pypi.org/project/dataclasses>`_).
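To make the entry above concrete, here is a minimal sketch of the kind of comparison the new introspection targets: two ``dataclasses`` instances that differ in a single field. The class ``Foo`` and its fields are illustrative, not taken from the changeset.

```python
from dataclasses import dataclass


@dataclass
class Foo:
    a: int
    b: str


left = Foo(1, "b")
right = Foo(1, "c")

# In a test, a failing `assert left == right` would now show a per-field
# diff; here we just confirm the instances compare unequal on `b` alone.
assert left != right
assert left.a == right.a
assert left.b != right.b
```

Dataclasses generate ``__eq__`` from the declared fields, which is what lets the assertion rewriter walk field by field when reporting the mismatch.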

@@ -0,0 +1 @@
+Restructure ExceptionInfo object construction and ensure incomplete instances have a ``repr``/``str``.

@@ -1 +0,0 @@
-Rearrange warning handling for the yield test errors so the opt-out in 4.0.x correctly works.

@@ -1 +0,0 @@
-Fix collection of testpaths with ``--pyargs``.

@@ -1 +0,0 @@
-Fix assertion rewriting involving ``Starred`` + side-effects.

@@ -0,0 +1 @@
+Fix ``raises(..., 'code(string)')`` frame filename.

@@ -0,0 +1 @@
+Deprecate ``raises(..., 'code(as_a_string)')`` and ``warns(..., 'code(as_a_string)')``. See https://docs.pytest.org/en/latest/deprecations.html#raises-warns-exec

@@ -0,0 +1 @@
+Display actual test ids in ``--collect-only``.

@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2

+   release-4.0.1
    release-4.0.0
    release-3.10.1
    release-3.10.0

@@ -0,0 +1,23 @@
+pytest-4.0.1
+=======================================
+
+pytest 4.0.1 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Anthony Sottile
+* Bruno Oliveira
+* Daniel Hahler
+* Michael D. Hoyle
+* Ronny Pfannschmidt
+* Slam
+
+Happy testing,
+The pytest Development Team

@@ -22,11 +22,13 @@ following::
     assert f() == 4

 to assert that your function returns a certain value. If this assertion fails
-you will see the return value of the function call::
+you will see the return value of the function call:
+
+.. code-block:: pytest

     $ pytest test_assert1.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item
@@ -98,10 +100,9 @@ If you want to write test code that works on Python 2.4 as well,
 you may also use two other ways to test for an expected exception::

     pytest.raises(ExpectedException, func, *args, **kwargs)
-    pytest.raises(ExpectedException, "func(*args, **kwargs)")

-both of which execute the specified function with args and kwargs and
-asserts that the given ``ExpectedException`` is raised. The reporter will
+which will execute the specified function with args and kwargs and
+assert that the given ``ExpectedException`` is raised. The reporter will
 provide you with helpful output in case of failures such as *no
 exception* or *wrong exception*.
@@ -165,11 +166,13 @@ when it encounters comparisons. For example::
     set2 = set("8035")

     assert set1 == set2

-if you run this module::
+if you run this module:
+
+.. code-block:: pytest

     $ pytest test_assert2.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item
@@ -235,7 +238,9 @@ now, given this test module::
     assert f1 == f2

 you can run the test module and get the custom output defined in
-the conftest file::
+the conftest file:
+
+.. code-block:: pytest

     $ pytest -q test_foocompare.py
     F [100%]

@@ -12,7 +12,9 @@ For information on plugin hooks and objects, see :ref:`plugins`.
 For information on the ``pytest.mark`` mechanism, see :ref:`mark`.

-For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type ::
+For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type :
+
+.. code-block:: pytest

     $ pytest -q --fixtures
     cache

@@ -43,7 +43,9 @@ First, let's create 50 test invocation of which only 2 fail::
         if i in (17, 25):
             pytest.fail("bad luck")

-If you run this for the first time you will see two failures::
+If you run this for the first time you will see two failures:
+
+.. code-block:: pytest

     $ pytest -q
     .................F.......F........................ [100%]
@@ -72,11 +74,13 @@ If you run this for the first time you will see two failures::
     test_50.py:6: Failed
     2 failed, 48 passed in 0.12 seconds

-If you then run it with ``--lf``::
+If you then run it with ``--lf``:
+
+.. code-block:: pytest

     $ pytest --lf
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 50 items / 48 deselected
     run-last-failure: rerun previous 2 failures
@@ -113,11 +117,13 @@ not been run ("deselected").

 Now, if you run with the ``--ff`` option, all tests will be run but the first
 previous failures will be executed first (as can be seen from the series
-of ``FF`` and dots)::
+of ``FF`` and dots):
+
+.. code-block:: pytest

     $ pytest --ff
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 50 items
     run-last-failure: rerun previous 2 failures first
@@ -192,7 +198,9 @@ across pytest invocations::
         assert mydata == 23

 If you run this command once, it will take a while because
-of the sleep::
+of the sleep:
+
+.. code-block:: pytest

     $ pytest -q
     F [100%]
@@ -209,7 +217,9 @@ of the sleep::
     1 failed in 0.12 seconds

 If you run it a second time the value will be retrieved from
-the cache and this will be quick::
+the cache and this will be quick:
+
+.. code-block:: pytest

     $ pytest -q
     F [100%]
@@ -232,11 +242,13 @@ Inspecting Cache content
 -------------------------------

 You can always peek at the content of the cache using the
-``--cache-show`` command line option::
+``--cache-show`` command line option:
+
+.. code-block:: pytest

     $ pytest --cache-show
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     cachedir: $REGENDOC_TMPDIR/.pytest_cache
     ------------------------------- cache values -------------------------------
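The compute-once pattern the cache docs above rely on (``cache.get``, then ``cache.set`` on a miss) can be sketched without running pytest at all. In this sketch ``FakeCache`` is a plain-dict stand-in for the object the real ``cache`` fixture provides, and the key name is illustrative:

```python
import time


class FakeCache:
    """Minimal stand-in for pytest's cache object: get/set by string key."""

    def __init__(self):
        self._store = {}

    def get(self, key, default=None):
        return self._store.get(key, default)

    def set(self, key, value):
        self._store[key] = value


def get_data(cache):
    mydata = cache.get("example/value", None)
    if mydata is None:
        time.sleep(0.05)  # stands in for the expensive computation
        mydata = 23
        cache.set("example/value", mydata)
    return mydata


cache = FakeCache()
assert get_data(cache) == 23  # slow: computed and stored
assert get_data(cache) == 23  # quick: read back from the cache
```

The real cache additionally persists values to disk (under ``.pytest_cache``), which is what makes the second pytest *invocation* fast, not just the second call.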

@@ -52,7 +52,7 @@ is that you can use print statements for debugging::
     # content of test_module.py

     def setup_function(function):
-        print ("setting up %s" % function)
+        print("setting up %s" % function)

     def test_func1():
         assert True
@@ -61,11 +61,13 @@ is that you can use print statements for debugging::
         assert False

 and running this module will show you precisely the output
-of the failing function and hide the other one::
+of the failing function and hide the other one:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

@@ -40,6 +40,7 @@ todo_include_todos = 1
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
+    "pygments_pytest",
     "sphinx.ext.autodoc",
     "sphinx.ext.todo",
     "sphinx.ext.autosummary",

@@ -14,6 +14,41 @@ Below is a complete list of all pytest features which are considered deprecated.
 :class:`_pytest.warning_types.PytestWarning` or subclasses, which can be filtered using
 :ref:`standard warning filters <warnings>`.

+.. _raises-warns-exec:
+
+``raises`` / ``warns`` with a string as the second argument
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. deprecated:: 4.1
+
+Use the context manager form of these instead. When necessary, invoke ``exec``
+directly.
+
+Example:
+
+.. code-block:: python
+
+    pytest.raises(ZeroDivisionError, "1 / 0")
+    pytest.raises(SyntaxError, "a $ b")
+
+    pytest.warns(DeprecationWarning, "my_function()")
+    pytest.warns(SyntaxWarning, "assert(1, 2)")
+
+Becomes:
+
+.. code-block:: python
+
+    with pytest.raises(ZeroDivisionError):
+        1 / 0
+
+    with pytest.raises(SyntaxError):
+        exec("a $ b")  # exec is required for invalid syntax
+
+    with pytest.warns(DeprecationWarning):
+        my_function()
+
+    with pytest.warns(SyntaxWarning):
+        exec("assert(1, 2)")  # exec is used to avoid a top-level warning
+
 ``cached_setup``
 ~~~~~~~~~~~~~~~~
@@ -279,7 +314,7 @@ an appropriate period of deprecation has passed.
 *Removed in version 4.0.*

-pytest supports ``yield``-style tests, where a test function actually ``yield`` functions and values
+pytest supported ``yield``-style tests, where a test function actually ``yield`` functions and values
 that are then turned into proper test methods. Example:

 .. code-block:: python
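The deprecations page quoted above notes that these warnings are ``_pytest.warning_types.PytestWarning`` subclasses and can be handled with standard warning filters. One way to act on that is to promote pytest's own deprecation warnings to errors so deprecated usages fail loudly during a migration; a sketch of a ``pytest.ini`` fragment, with the exact warning class name being an assumption rather than something stated in this changeset:

```ini
# Hypothetical pytest.ini fragment: turn pytest's own deprecation
# warnings into hard errors while migrating off deprecated APIs.
[pytest]
filterwarnings =
    error::_pytest.warning_types.PytestDeprecationWarning
```

The same filter can be passed ad hoc on the command line via ``-W`` if editing the ini file is not desired.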

@@ -58,11 +58,13 @@ and another like this::
         """
         return 42

-then you can just invoke ``pytest`` without command line options::
+then you can just invoke ``pytest`` without command line options:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 1 item

@@ -98,6 +98,30 @@ class TestSpecialisedExplanations(object):
         text = "head " * 50 + "f" * 70 + "tail " * 20
         assert "f" * 70 not in text

+    def test_eq_dataclass(self):
+        from dataclasses import dataclass
+
+        @dataclass
+        class Foo(object):
+            a: int
+            b: str
+
+        left = Foo(1, "b")
+        right = Foo(1, "c")
+        assert left == right
+
+    def test_eq_attrs(self):
+        import attr
+
+        @attr.s
+        class Foo(object):
+            a = attr.ib()
+            b = attr.ib()
+
+        left = Foo(1, "b")
+        right = Foo(1, "c")
+        assert left == right
+

 def test_attribute():
     class Foo(object):
@@ -141,11 +165,11 @@ def globf(x):

 class TestRaises(object):
     def test_raises(self):
-        s = "qwe"  # NOQA
-        raises(TypeError, "int(s)")
+        s = "qwe"
+        raises(TypeError, int, s)

     def test_raises_doesnt(self):
-        raises(IOError, "int('3')")
+        raises(IOError, int, "3")

     def test_raise(self):
         raise ValueError("demo error")

@@ -9,5 +9,5 @@ def test_failure_demo_fails_properly(testdir):
     failure_demo.copy(target)
     failure_demo.copy(testdir.tmpdir.join(failure_demo.basename))
     result = testdir.runpytest(target, syspathinsert=True)
-    result.stdout.fnmatch_lines(["*42 failed*"])
+    result.stdout.fnmatch_lines(["*44 failed*"])
     assert result.ret != 0

View File

@ -27,11 +27,13 @@ You can "mark" a test function with custom metadata like this::
.. versionadded:: 2.2 .. versionadded:: 2.2
You can then restrict a test run to only run tests marked with ``webtest``:: You can then restrict a test run to only run tests marked with ``webtest``:
.. code-block:: pytest
$ pytest -v -m webtest $ pytest -v -m webtest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected collecting ... collected 4 items / 3 deselected
@ -40,11 +42,13 @@ You can then restrict a test run to only run tests marked with ``webtest``::
================== 1 passed, 3 deselected in 0.12 seconds ================== ================== 1 passed, 3 deselected in 0.12 seconds ==================
Or the inverse, running all tests except the webtest ones:: Or the inverse, running all tests except the webtest ones:
.. code-block:: pytest
$ pytest -v -m "not webtest" $ pytest -v -m "not webtest"
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected collecting ... collected 4 items / 1 deselected
@ -60,11 +64,13 @@ Selecting tests based on their node ID
You can provide one or more :ref:`node IDs <node-id>` as positional You can provide one or more :ref:`node IDs <node-id>` as positional
arguments to select only specified tests. This makes it easy to select arguments to select only specified tests. This makes it easy to select
tests based on their module, class, method, or function name:: tests based on their module, class, method, or function name:
.. code-block:: pytest
$ pytest -v test_server.py::TestClass::test_method $ pytest -v test_server.py::TestClass::test_method
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item collecting ... collected 1 item
@ -73,11 +79,13 @@ tests based on their module, class, method, or function name::
========================= 1 passed in 0.12 seconds ========================= ========================= 1 passed in 0.12 seconds =========================
You can also select on the class:: You can also select on the class:
.. code-block:: pytest
$ pytest -v test_server.py::TestClass $ pytest -v test_server.py::TestClass
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item collecting ... collected 1 item
@ -86,19 +94,21 @@ You can also select on the class::
========================= 1 passed in 0.12 seconds ========================= ========================= 1 passed in 0.12 seconds =========================
Or select multiple nodes:: Or select multiple nodes:
$ pytest -v test_server.py::TestClass test_server.py::test_send_http .. code-block:: pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
test_server.py::TestClass::test_method PASSED [ 50%] $ pytest -v test_server.py::TestClass test_server.py::test_send_http
test_server.py::test_send_http PASSED [100%] =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
========================= 2 passed in 0.12 seconds ========================= test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]
========================= 2 passed in 0.12 seconds =========================
.. _node-id: .. _node-id:
@ -124,11 +134,13 @@ Using ``-k expr`` to select tests based on their name
You can use the ``-k`` command line option to specify an expression You can use the ``-k`` command line option to specify an expression
which implements a substring match on the test names instead of the which implements a substring match on the test names instead of the
exact match on markers that ``-m`` provides. This makes it easy to exact match on markers that ``-m`` provides. This makes it easy to
select tests based on their names:: select tests based on their names:
.. code-block:: pytest
$ pytest -v -k http # running with the above defined example module $ pytest -v -k http # running with the above defined example module
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected collecting ... collected 4 items / 3 deselected
@ -137,11 +149,13 @@ select tests based on their names::
================== 1 passed, 3 deselected in 0.12 seconds ================== ================== 1 passed, 3 deselected in 0.12 seconds ==================
And you can also run all tests except the ones that match the keyword:: And you can also run all tests except the ones that match the keyword:
.. code-block:: pytest
$ pytest -k "not send_http" -v $ pytest -k "not send_http" -v
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected collecting ... collected 4 items / 1 deselected
@ -152,11 +166,13 @@ And you can also run all tests except the ones that match the keyword::
================== 3 passed, 1 deselected in 0.12 seconds ================== ================== 3 passed, 1 deselected in 0.12 seconds ==================
Or to select "http" and "quick" tests:: Or to select "http" and "quick" tests:
.. code-block:: pytest
$ pytest -k "http or quick" -v $ pytest -k "http or quick" -v
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6 platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 2 deselected collecting ... collected 4 items / 2 deselected
@ -271,8 +287,12 @@ You can also set a module level marker::
import pytest import pytest
pytestmark = pytest.mark.webtest pytestmark = pytest.mark.webtest
in which case it will be applied to all functions and or multiple markers::
methods defined in the module.
pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
in which case markers will be applied (in left-to-right order) to
all functions and methods defined in the module.
.. _`marking individual tests when using parametrize`: .. _`marking individual tests when using parametrize`:
@ -347,11 +367,13 @@ A test file using this local plugin::
pass pass
and an example invocations specifying a different environment than what and an example invocations specifying a different environment than what
the test needs:: the test needs:
.. code-block:: pytest
$ pytest -E stage2 $ pytest -E stage2
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item collected 1 item
@ -359,11 +381,13 @@ the test needs::
======================== 1 skipped in 0.12 seconds ========================= ======================== 1 skipped in 0.12 seconds =========================
and here is one that specifies exactly the environment needed:: and here is one that specifies exactly the environment needed:
.. code-block:: pytest
$ pytest -E stage1 $ pytest -E stage1
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item collected 1 item
@ -424,7 +448,9 @@ However, if there is a callable as the single positional argument with no keywor
def test_with_args(): def test_with_args():
pass pass
The output is as follows:: The output is as follows:
.. code-block:: pytest
$ pytest -q -s $ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={}) Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
@ -462,10 +488,12 @@ test function. From a conftest file we can read it like this::
def pytest_runtest_setup(item): def pytest_runtest_setup(item):
for mark in item.iter_markers(name='glob'): for mark in item.iter_markers(name='glob'):
print ("glob args=%s kwargs=%s" %(mark.args, mark.kwargs)) print("glob args=%s kwargs=%s" % (mark.args, mark.kwargs))
sys.stdout.flush() sys.stdout.flush()
Let's run this without capturing output and see what we get:: Let's run this without capturing output and see what we get:
.. code-block:: pytest
$ pytest -q -s $ pytest -q -s
glob args=('function',) kwargs={'x': 3} glob args=('function',) kwargs={'x': 3}
@ -520,11 +548,13 @@ Let's do a little test file to show how this looks like::
def test_runs_everywhere(): def test_runs_everywhere():
pass pass
then you will see two tests skipped and two executed tests as expected:: then you will see two tests skipped and two executed tests as expected:
.. code-block:: pytest
$ pytest -rs # this option reports skip reasons $ pytest -rs # this option reports skip reasons
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
@ -534,11 +564,13 @@ then you will see two tests skipped and two executed tests as expected::
=================== 2 passed, 2 skipped in 0.12 seconds ==================== =================== 2 passed, 2 skipped in 0.12 seconds ====================
Note that if you specify a platform via the marker-command line option like this:: Note that if you specify a platform via the marker-command line option like this:
.. code-block:: pytest
$ pytest -m linux $ pytest -m linux
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 3 deselected collected 4 items / 3 deselected
@ -585,48 +617,52 @@ We want to dynamically define two markers and can do it in a
elif "event" in item.nodeid: elif "event" in item.nodeid:
item.add_marker(pytest.mark.event) item.add_marker(pytest.mark.event)
We can now use the ``-m option`` to select one set:

.. code-block:: pytest

    $ pytest -m interface --tb=short
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items / 2 deselected

    test_module.py FF [100%]

    ================================= FAILURES =================================
    __________________________ test_interface_simple ___________________________
    test_module.py:3: in test_interface_simple
        assert 0
    E   assert 0
    __________________________ test_interface_complex __________________________
    test_module.py:6: in test_interface_complex
        assert 0
    E   assert 0
    ================== 2 failed, 2 deselected in 0.12 seconds ==================

or to select both "event" and "interface" tests:

.. code-block:: pytest

    $ pytest -m "interface or event" --tb=short
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items / 1 deselected

    test_module.py FFF [100%]

    ================================= FAILURES =================================
    __________________________ test_interface_simple ___________________________
    test_module.py:3: in test_interface_simple
        assert 0
    E   assert 0
    __________________________ test_interface_complex __________________________
    test_module.py:6: in test_interface_complex
        assert 0
    E   assert 0
    ____________________________ test_event_simple _____________________________
    test_module.py:9: in test_event_simple
        assert 0
    E   assert 0
    ================== 3 failed, 1 deselected in 0.12 seconds ==================


@ -23,11 +23,13 @@ You can create a simple example file:
   :literal:

and if you installed `PyYAML`_ or a compatible YAML-parser you can
now execute the test specification:

.. code-block:: pytest

    nonpython $ pytest test_simple.yml
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
    collected 2 items
@ -55,11 +57,13 @@ your own domain specific testing language this way.
will be reported as a (red) string.

``reportinfo()`` is used for representing the test location and is also
consulted when reporting in ``verbose`` mode:

.. code-block:: pytest

    nonpython $ pytest -v
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
    cachedir: .pytest_cache
    rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
    collecting ... collected 2 items
@ -77,11 +81,13 @@ consulted when reporting in ``verbose`` mode::
.. regendoc:wipe

While developing your custom test collection and execution it's also
interesting to just look at the collection tree:

.. code-block:: pytest

    nonpython $ pytest --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
    collected 2 items
    <Package '$REGENDOC_TMPDIR/nonpython'>


@ -42,14 +42,18 @@ Now we add a test configuration like this::
            end = 2
        metafunc.parametrize("param1", range(end))
This means that we only run 2 tests if we do not pass ``--all``:

.. code-block:: pytest

    $ pytest -q test_compute.py
    .. [100%]
    2 passed in 0.12 seconds

We run only two computations, so we see two dots.
Let's run the full monty:

.. code-block:: pytest

    $ pytest -q --all
    ....F [100%]
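The ``conftest.py`` driving this example can be sketched as follows. This is a sketch based on the ``--all`` flag and ``param1`` fixture name used above; the bounds 2 and 5 are the illustrative values from this chapter, and the ``param_range`` helper is introduced here only to keep the range logic in one place:

```python
# Sketch of a conftest.py for the --all example above (assumed names).
def param_range(run_all):
    """Return the parameter range: 2 values by default, 5 with --all."""
    return range(5 if run_all else 2)


def pytest_addoption(parser):
    parser.addoption("--all", action="store_true", help="run all combinations")


def pytest_generate_tests(metafunc):
    # Parametrize any test that requests the param1 fixture.
    if "param1" in metafunc.fixturenames:
        metafunc.parametrize("param1", param_range(metafunc.config.getoption("all")))
```

Without ``--all`` the sketch yields two parameter values, hence the two dots above.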
@ -134,12 +138,13 @@ used as the test IDs. These are succinct, but can be a pain to maintain.
In ``test_timedistance_v2``, we specified ``ids`` as a function that can generate a
string representation to make part of the test ID. So our ``datetime`` values use the
label generated by ``idfn``, but because we didn't generate a label for ``timedelta``
objects, they are still using the default pytest representation:

.. code-block:: pytest

    $ pytest test_time.py --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 8 items
    <Module 'test_time.py'>
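A minimal sketch of the ``idfn`` approach just described (the concrete parameter sets are assumptions for illustration; only ``datetime`` values get a custom label, and returning ``None`` falls back to pytest's default ID):

```python
from datetime import datetime, timedelta

import pytest


def idfn(val):
    # Label only datetime values; None tells pytest to use its
    # default representation (e.g. for the timedelta objects).
    if isinstance(val, datetime):
        return val.strftime("%Y%m%d")
    return None


@pytest.mark.parametrize(
    "a,b,expected",
    [
        (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
        (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
    ],
    ids=idfn,
)
def test_timedistance_v2(a, b, expected):
    assert a - b == expected
```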
@ -191,11 +196,13 @@ only have to work a bit to construct the correct arguments for pytest's
        def test_demo2(self, attribute):
            assert isinstance(attribute, str)

this is a fully self-contained example which you can run with:

.. code-block:: pytest

    $ pytest test_scenarios.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items
@ -203,12 +210,13 @@ this is a fully self-contained example which you can run with::
    ========================= 4 passed in 0.12 seconds =========================

If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:

.. code-block:: pytest

    $ pytest --collect-only test_scenarios.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items
    <Module 'test_scenarios.py'>
@ -268,11 +276,13 @@ creates a database object for the actual test invocations::
        else:
            raise ValueError("invalid internal test config")

Let's first see how it looks at collection time:

.. code-block:: pytest

    $ pytest test_backends.py --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items
    <Module 'test_backends.py'>
@ -281,7 +291,9 @@ Let's first see how it looks like at collection time::
    ======================= no tests ran in 0.12 seconds =======================
And then when we run the test:

.. code-block:: pytest

    $ pytest -q test_backends.py
    .F [100%]
@ -329,11 +341,13 @@ will be passed to respective fixture function::
        assert x == 'aaa'
        assert y == 'b'

The result of this test will be successful:

.. code-block:: pytest

    $ pytest test_indirect_list.py --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 1 item
    <Module 'test_indirect_list.py'>
@ -374,10 +388,13 @@ parametrizer`_ but in a lot less code::
            assert a == b

        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b

Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function. Let's run it:

.. code-block:: pytest

    $ pytest -q
    F.. [100%]
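The class-level lookup performed by the test generator can be sketched like this (names are assumptions based on this chapter; the real hook receives a ``metafunc`` object from pytest, shown in the comment):

```python
# Sketch of the per-class parametrization lookup described above.
def build_parametrize_args(params, function_name):
    """Turn a per-test list of argument dicts into the (argnames,
    argvalues) pair expected by metafunc.parametrize()."""
    funcarglist = params[function_name]
    argnames = sorted(funcarglist[0])
    argvalues = [[funcargs[name] for name in argnames] for funcargs in funcarglist]
    return argnames, argvalues


# In conftest.py the hook would then read (sketch):
#
# def pytest_generate_tests(metafunc):
#     argnames, argvalues = build_parametrize_args(
#         metafunc.cls.params, metafunc.function.__name__
#     )
#     metafunc.parametrize(argnames, argvalues)

params = {
    "test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
    "test_zerodivision": [dict(a=1, b=0)],
}
```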
@ -407,7 +424,9 @@ is to be run with different sets of arguments for its three arguments:
.. literalinclude:: multipython.py

Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):

.. code-block:: pytest

    . $ pytest -rs -q multipython.py
    ...sss...sssssssss...sss... [100%]
@ -456,11 +475,13 @@ And finally a little test module::
        assert round(basemod.func1(), 3) == round(optmod.func1(), 3)

If you run this with reporting for skips enabled:

.. code-block:: pytest

    $ pytest -rs test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items
@ -511,21 +532,22 @@ we mark the rest three parametrized tests with the custom marker ``basic``,
and for the fourth test we also use the built-in mark ``xfail`` to indicate this
test is expected to fail. For explicitness, we set test ids for some tests.
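The marking described above can be sketched with ``pytest.param`` (a sketch; the file name and the exact expressions and ids are assumptions inferred from this chapter's example — the ``6*9`` case is the one marked ``xfail``, and ``basic`` is the custom marker):

```python
import pytest


@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        # custom `basic` marker, auto-generated id:
        pytest.param("1+7", 8, marks=pytest.mark.basic),
        # custom marker plus an explicit id:
        pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
        # several marks at once; this case is expected to fail:
        pytest.param(
            "6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
        ),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```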
Then run ``pytest`` with verbose mode and with only the ``basic`` marker:

.. code-block:: pytest

    $ pytest -v -m basic
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
    cachedir: .pytest_cache
    rootdir: $REGENDOC_TMPDIR, inifile:
    collecting ... collected 17 items / 14 deselected

    test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
    test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
    test_pytest_param_example.py::test_eval[basic_6*9] xfail [100%]

    ============ 2 passed, 14 deselected, 1 xfailed in 0.12 seconds ============
As the result:


@ -24,20 +24,22 @@ by passing the ``--ignore=path`` option on the cli. ``pytest`` allows multiple
    '-- test_world_03.py

Now if you invoke ``pytest`` with ``--ignore=tests/foobar/test_foobar_03.py --ignore=tests/hello/``,
you will see that ``pytest`` only collects the test modules which do not match the specified patterns:

.. code-block:: pytest

    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 5 items

    tests/example/test_example_01.py . [ 20%]
    tests/example/test_example_02.py . [ 40%]
    tests/example/test_example_03.py . [ 60%]
    tests/foobar/test_foobar_01.py . [ 80%]
    tests/foobar/test_foobar_02.py . [100%]

    ========================= 5 passed in 0.02 seconds =========================
Deselect tests during test collection
-------------------------------------
@ -123,11 +125,13 @@ that match ``*_check``. For example, if we have::
    def complex_check(self):
        pass

The test collection would look like this:

.. code-block:: pytest

    $ pytest --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
    collected 2 items
    <Module 'check_myapp.py'>
@ -176,11 +180,13 @@ treat it as a filesystem path.
Finding out what is collected
-----------------------------------------------

You can always peek at the collection tree without running tests like this:

.. code-block:: pytest

    . $ pytest --collect-only pythoncollection.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
    collected 3 items
    <Module 'CWD/pythoncollection.py'>
@ -231,7 +237,9 @@ and a ``setup.py`` dummy file like this::
    0/0  # will raise exception if imported

If you run with a Python 2 interpreter then you will find the one test and will
leave out the ``setup.py`` file:

.. code-block:: pytest

    #$ pytest --collect-only
    ====== test session starts ======
====== no tests ran in 0.04 seconds ====== ====== no tests ran in 0.04 seconds ======
If you run with a Python 3 interpreter both the one test and the ``setup.py`` If you run with a Python 3 interpreter both the one test and the ``setup.py``
file will be left out:: file will be left out:
.. code-block:: pytest
$ pytest --collect-only $ pytest --collect-only
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items collected 0 items


@ -7,11 +7,13 @@ Demo of Python failure reports with pytest
Here is a nice run of several tens of failures
and how ``pytest`` presents things (unfortunately
not showing the nice colors here in the HTML that you
get on the terminal - we are working on that):

.. code-block:: pytest

    assertion $ pytest failure_demo.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR/assertion, inifile:
    collected 42 items
@ -364,7 +366,7 @@ get on the terminal - we are working on that)::
    >       int(s)
    E       ValueError: invalid literal for int() with base 10: 'qwe'

    <0-codegen $REGENDOC_TMPDIR/assertion/failure_demo.py:145>:1: ValueError
    ______________________ TestRaises.test_raises_doesnt _______________________

    self = <failure_demo.TestRaises object at 0xdeadbeef>


@ -43,7 +43,9 @@ provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`:
    def cmdopt(request):
        return request.config.getoption("--cmdopt")

Let's run this without supplying our new option:

.. code-block:: pytest

    $ pytest -q test_sample.py
    F [100%]
@ -65,7 +67,9 @@ Let's run this without supplying our new option::
    first
    1 failed in 0.12 seconds

And now with supplying a command line option:

.. code-block:: pytest

    $ pytest -q --cmdopt=type2
    F [100%]
@ -117,11 +121,13 @@ the command line arguments before they get processed:
If you have the `xdist plugin <https://pypi.org/project/pytest-xdist/>`_ installed
you will now always perform test runs using a number
of subprocesses close to your CPU count. Running in an empty
directory with the above conftest.py:

.. code-block:: pytest

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 0 items
@ -175,11 +181,13 @@ We can now write a test module like this:
    def test_func_slow():
        pass

and when running it you will see a skipped "slow" test:

.. code-block:: pytest

    $ pytest -rs    # "-rs" means report details on the little 's'
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items
@ -189,11 +197,13 @@ and when running it will see a skipped "slow" test::
    =================== 1 passed, 1 skipped in 0.12 seconds ====================

Or run it including the ``slow`` marked test:

.. code-block:: pytest

    $ pytest --runslow
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items
@ -230,7 +240,9 @@ Example:
The ``__tracebackhide__`` setting influences how ``pytest`` displays
tracebacks: the ``checkconfig`` function will not be shown
unless the ``--full-trace`` command line option is specified.
Let's run our little function:

.. code-block:: pytest

    $ pytest -q test_checkconfig.py
    F [100%]
@ -327,11 +339,13 @@ It's easy to present extra information in a ``pytest`` run:
    def pytest_report_header(config):
        return "project deps: mylib-1.1"

which will add the string to the test header accordingly:

.. code-block:: pytest

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    project deps: mylib-1.1
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 0 items
@ -353,11 +367,13 @@ display more information if applicable:
        if config.getoption("verbose") > 0:
            return ["info1: did you know that ...", "did you?"]

which will add info only when run with ``-v``:

.. code-block:: pytest

    $ pytest -v
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
    cachedir: .pytest_cache
    info1: did you know that ...
    did you?
@ -366,11 +382,13 @@ which will add info only when run with "--v"::
    ======================= no tests ran in 0.12 seconds =======================

and nothing when run plainly:

.. code-block:: pytest

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 0 items
@ -403,11 +421,13 @@ out which tests are the slowest. Let's make an artificial test suite:
    def test_funcslow2():
        time.sleep(0.3)

Now we can profile which test functions execute the slowest:

.. code-block:: pytest

    $ pytest --durations=3
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 3 items
@ -475,11 +495,13 @@ tests in a class. Here is a test module example:
    def test_normal():
        pass

If we run this:

.. code-block:: pytest

    $ pytest -rx
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items
@ -556,11 +578,13 @@ the ``db`` fixture:
    def test_root(db):  # no db here, will error out
        pass

We can run this:

.. code-block:: pytest

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 7 items
@ -667,11 +691,13 @@ if you then have failing tests:
    def test_fail2():
        assert 0

and run them:

.. code-block:: pytest

    $ pytest test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items
@ -766,11 +792,13 @@ if you then have failing tests:
    def test_fail2():
        assert 0

and run it:

.. code-block:: pytest

    $ pytest -s test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 3 items


@ -13,7 +13,7 @@ calls it::
    @pytest.fixture(scope="session", autouse=True)
    def callattr_ahead_of_alltests(request):
        print("callattr_ahead_of_alltests called")
        seen = set([None])
        session = request.node
        for item in session.items:
@ -31,20 +31,20 @@ will be called ahead of running any tests::
    class TestHello(object):
        @classmethod
        def callme(cls):
            print("callme called!")

        def test_method1(self):
            print("test_method1 called")

        def test_method2(self):
            print("test_method1 called")

    class TestOther(object):
        @classmethod
        def callme(cls):
            print("callme other called")

        def test_other(self):
            print("test other")

    # works with unittest as well ...
    import unittest
@ -52,12 +52,14 @@ will be called ahead of running any tests::
    class SomeTest(unittest.TestCase):
        @classmethod
        def callme(self):
            print("SomeTest callme called")

        def test_unit1(self):
            print("test_unit1 method called")

If you run this without output capturing:

.. code-block:: pytest

    $ pytest -q -s test_module.py
    callattr_ahead_of_alltests called
@@ -66,11 +66,13 @@ using it::
 Here, the ``test_ehlo`` needs the ``smtp_connection`` fixture value. pytest
 will discover and call the :py:func:`@pytest.fixture <_pytest.python.fixture>`
-marked ``smtp_connection`` fixture function. Running the test looks like this::
+marked ``smtp_connection`` fixture function. Running the test looks like this:
+
+.. code-block:: pytest
 $ pytest test_smtpsimple.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 item
@@ -204,11 +206,13 @@ located)::
 assert 0 # for demo purposes
 We deliberately insert failing ``assert 0`` statements in order to
-inspect what is going on and can now run the tests::
+inspect what is going on and can now run the tests:
+
+.. code-block:: pytest
 $ pytest test_module.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items
@@ -460,7 +464,7 @@ read an optional server URL from the test module which uses our fixture::
 server = getattr(request.module, "smtpserver", "smtp.gmail.com")
 smtp_connection = smtplib.SMTP(server, 587, timeout=5)
 yield smtp_connection
-    print ("finalizing %s (%s)" % (smtp_connection, server))
+    print("finalizing %s (%s)" % (smtp_connection, server))
 smtp_connection.close()
 We use the ``request.module`` attribute to optionally obtain an
@@ -482,7 +486,9 @@ server URL in its module namespace::
 def test_showhelo(smtp_connection):
 assert 0, smtp_connection.helo()
-Running it::
+Running it:
+
+.. code-block:: pytest
 $ pytest -qq --tb=short test_anothersmtp.py
 F [100%]
@@ -584,7 +590,9 @@ The main change is the declaration of ``params`` with
 :py:func:`@pytest.fixture <_pytest.python.fixture>`, a list of values
 for each of which the fixture function will execute and can access
 a value via ``request.param``. No test function code needs to change.
-So let's just do another run::
+So let's just do another run:
+
+.. code-block:: pytest
 $ pytest -q test_module.py
 FFFF [100%]
@@ -686,11 +694,13 @@ a function which will be called with the fixture value and then
 has to return a string to use. In the latter case if the function
 return ``None`` then pytest's auto-generated ID will be used.
-Running the above tests results in the following test IDs being used::
+Running the above tests results in the following test IDs being used:
+
+.. code-block:: pytest
 $ pytest --collect-only
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 10 items
 <Module 'test_anothersmtp.py'>
@@ -728,11 +738,13 @@ Example::
 def test_data(data_set):
 pass
-Running this test will *skip* the invocation of ``data_set`` with value ``2``::
+Running this test will *skip* the invocation of ``data_set`` with value ``2``:
+
+.. code-block:: pytest
 $ pytest test_fixture_marks.py -v
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
 cachedir: .pytest_cache
 rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 3 items
@@ -771,11 +783,13 @@ and instantiate an object ``app`` where we stick the already defined
 assert app.smtp_connection
 Here we declare an ``app`` fixture which receives the previously defined
-``smtp_connection`` fixture and instantiates an ``App`` object with it. Let's run it::
+``smtp_connection`` fixture and instantiates an ``App`` object with it. Let's run it:
+
+.. code-block:: pytest
 $ pytest -v test_appsetup.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
 cachedir: .pytest_cache
 rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 2 items
@@ -821,30 +835,32 @@ to show the setup/teardown flow::
 @pytest.fixture(scope="module", params=["mod1", "mod2"])
 def modarg(request):
 param = request.param
-    print ("  SETUP modarg %s" % param)
+    print("  SETUP modarg %s" % param)
 yield param
-    print ("  TEARDOWN modarg %s" % param)
+    print("  TEARDOWN modarg %s" % param)
 @pytest.fixture(scope="function", params=[1,2])
 def otherarg(request):
 param = request.param
-    print ("  SETUP otherarg %s" % param)
+    print("  SETUP otherarg %s" % param)
 yield param
-    print ("  TEARDOWN otherarg %s" % param)
+    print("  TEARDOWN otherarg %s" % param)
 def test_0(otherarg):
-    print ("  RUN test0 with otherarg %s" % otherarg)
+    print("  RUN test0 with otherarg %s" % otherarg)
 def test_1(modarg):
-    print ("  RUN test1 with modarg %s" % modarg)
+    print("  RUN test1 with modarg %s" % modarg)
 def test_2(otherarg, modarg):
-    print ("  RUN test2 with otherarg %s and modarg %s" % (otherarg, modarg))
+    print("  RUN test2 with otherarg %s and modarg %s" % (otherarg, modarg))
-Let's run the tests in verbose mode and with looking at the print-output::
+Let's run the tests in verbose mode and with looking at the print-output:
+
+.. code-block:: pytest
 $ pytest -v -s test_module.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
 cachedir: .pytest_cache
 rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 8 items
@@ -942,7 +958,9 @@ and declare its use in a test module via a ``usefixtures`` marker::
 Due to the ``usefixtures`` marker, the ``cleandir`` fixture
 will be required for the execution of each test method, just as if
 you specified a "cleandir" function argument to each of them. Let's run it
-to verify our fixture is activated and the tests pass::
+to verify our fixture is activated and the tests pass:
+
+.. code-block:: pytest
 $ pytest -q
 .. [100%]
@@ -1041,7 +1059,9 @@ which implies that all test methods in the class will use this fixture
 without a need to state it in the test function signature or with a
 class-level ``usefixtures`` decorator.
-If we run it, we get two passing tests::
+If we run it, we get two passing tests:
+
+.. code-block:: pytest
 $ pytest -q
 .. [100%]
@@ -26,9 +26,9 @@ a per-session Database object::
 # content of conftest.py
 class Database(object):
 def __init__(self):
-    print ("database instance created")
+    print("database instance created")
 def destroy(self):
-    print ("database instance destroyed")
+    print("database instance destroyed")
 def pytest_funcarg__db(request):
 return request.cached_setup(setup=DataBase,
@@ -27,7 +27,7 @@ Install ``pytest``
 2. Check that you installed the correct version::
 $ pytest --version
-This is pytest version 3.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest.py
+This is pytest version 4.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest.py
 .. _`simpletest`:
@@ -43,11 +43,13 @@ Create a simple test function with just four lines of code::
 def test_answer():
 assert func(3) == 5
-That's it. You can now execute the test function::
+That's it. You can now execute the test function:
+
+.. code-block:: pytest
 $ pytest
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 item
@@ -90,7 +92,9 @@ Use the ``raises`` helper to assert that some code raises an exception::
 with pytest.raises(SystemExit):
 f()
-Execute the test function with “quiet” reporting mode::
+Execute the test function with “quiet” reporting mode:
+
+.. code-block:: pytest
 $ pytest -q test_sysexit.py
 . [100%]
@@ -111,7 +115,9 @@ Once you develop multiple tests, you may want to group them into a class. pytest
 x = "hello"
 assert hasattr(x, 'check')
-``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename::
+``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename:
+
+.. code-block:: pytest
 $ pytest -q test_class.py
 .F [100%]
@@ -138,10 +144,12 @@ Request a unique temporary directory for functional tests
 # content of test_tmpdir.py
 def test_needsfiles(tmpdir):
-    print (tmpdir)
+    print(tmpdir)
 assert 0
-List the name ``tmpdir`` in the test function signature and ``pytest`` will lookup and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory::
+List the name ``tmpdir`` in the test function signature and ``pytest`` will lookup and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory:
+
+.. code-block:: pytest
 $ pytest -q test_tmpdir.py
 F [100%]
@@ -151,7 +159,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
 tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')
 def test_needsfiles(tmpdir):
-    print (tmpdir)
+    print(tmpdir)
 > assert 0
 E assert 0
@@ -22,11 +22,13 @@ An example of a simple test:
 assert inc(3) == 5
-To execute it::
+To execute it:
+
+.. code-block:: pytest
 $ pytest
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 item
@@ -50,11 +50,13 @@ to an expected output::
 Here, the ``@parametrize`` decorator defines three different ``(test_input,expected)``
 tuples so that the ``test_eval`` function will run three times using
-them in turn::
+them in turn:
+
+.. code-block:: pytest
 $ pytest
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 3 items
@@ -99,11 +101,13 @@ for example with the builtin ``mark.xfail``::
 def test_eval(test_input, expected):
 assert eval(test_input) == expected
-Let's run this::
+Let's run this:
+
+.. code-block:: pytest
 $ pytest
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 3 items
@@ -172,7 +176,9 @@ If we now pass two stringinput values, our test will run twice::
 .. [100%]
 2 passed in 0.12 seconds
-Let's also run with a stringinput that will lead to a failing test::
+Let's also run with a stringinput that will lead to a failing test:
+
+.. code-block:: pytest
 $ pytest -q --stringinput="!" test_strings.py
 F [100%]
@@ -194,7 +200,9 @@ As expected our test function fails.
 If you don't specify a stringinput it will be skipped because
 ``metafunc.parametrize()`` will be called with an empty parameter
-list::
+list:
+
+.. code-block:: pytest
 $ pytest -q -rs test_strings.py
 s [100%]
@@ -1,3 +1,4 @@
+pygments-pytest>=1.0.4
 # pinning sphinx to 1.4.* due to search issues with rtd:
 # https://github.com/rtfd/readthedocs-sphinx-ext/issues/25
 sphinx ==1.4.*
@@ -323,11 +323,13 @@ Here is a simple test file with the several usages:
 .. literalinclude:: example/xfail_demo.py
-Running it with the report-on-xfail option gives this output::
+Running it with the report-on-xfail option gives this output:
+
+.. code-block:: pytest
 example $ pytest -rx xfail_demo.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR/example, inifile:
 collected 7 items
@@ -35,11 +35,13 @@ created in the `base temporary directory`_.
 assert 0
 Running this would result in a passed test except for the last
-``assert 0`` line which we use to look at values::
+``assert 0`` line which we use to look at values:
+
+.. code-block:: pytest
 $ pytest test_tmp_path.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 item
@@ -95,11 +97,13 @@ and more. Here is an example test usage::
 assert 0
 Running this would result in a passed test except for the last
-``assert 0`` line which we use to look at values::
+``assert 0`` line which we use to look at values:
+
+.. code-block:: pytest
 $ pytest test_tmpdir.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 item
@@ -122,11 +122,13 @@ fixture definition::
 The ``@pytest.mark.usefixtures("db_class")`` class-decorator makes sure that
 the pytest fixture function ``db_class`` is called once per class.
 Due to the deliberately failing assert statements, we can take a look at
-the ``self.db`` values in the traceback::
+the ``self.db`` values in the traceback:
+
+.. code-block:: pytest
 $ pytest test_unittest_db.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items
@@ -199,7 +201,9 @@ used for all methods of the class where it is defined. This is a
 shortcut for using a ``@pytest.mark.usefixtures("initdir")`` marker
 on the class like in the previous example.
-Running this test module ...::
+Running this test module ...:
+
+.. code-block:: pytest
 $ pytest -q test_unittest_cleandir.py
 . [100%]
@@ -150,11 +150,13 @@ Detailed summary report
 The ``-r`` flag can be used to display test results summary at the end of the test session,
 making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.
-Example::
+Example:
+
+.. code-block:: pytest
 $ pytest -ra
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 0 items
@@ -173,11 +175,13 @@ Here is the full list of available characters that can be used:
 - ``P`` - passed with output
 - ``a`` - all except ``pP``
-More than one character can be used, so for example to only see failed and skipped tests, you can execute::
+More than one character can be used, so for example to only see failed and skipped tests, you can execute:
+
+.. code-block:: pytest
 $ pytest -rfs
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 0 items
@@ -18,11 +18,13 @@ and displays them at the end of the session::
 def test_one():
 assert api_v1() == 1
-Running pytest now produces this output::
+Running pytest now produces this output:
+
+.. code-block:: pytest
 $ pytest test_show_warnings.py
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 item
@@ -37,7 +39,9 @@ Running pytest now produces this output::
 =================== 1 passed, 1 warnings in 0.12 seconds ===================
 The ``-W`` flag can be passed to control which warnings will be displayed or even turn
-them into errors::
+them into errors:
+
+.. code-block:: pytest
 $ pytest -q test_show_warnings.py -W error::UserWarning
 F [100%]
@@ -347,7 +351,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
 def test_foo(self):
 assert 1 == 1
-::
+.. code-block:: pytest
 $ pytest test_pytest_warnings.py -q
@@ -73,7 +73,7 @@ sub directory but not for other directories::
 a/conftest.py:
 def pytest_runtest_setup(item):
 # called for running each test in 'a' directory
-    print ("setting up", item)
+    print("setting up", item)
 a/test_sub.py:
 def test_sub():
@@ -388,30 +388,31 @@ return a result object, with which we can assert the tests' outcomes.
 additionally it is possible to copy examples for an example folder before running pytest on it
-.. code:: ini
+.. code-block:: ini
 # content of pytest.ini
 [pytest]
 pytester_example_dir = .
-.. code:: python
+.. code-block:: python
 # content of test_example.py
 def test_plugin(testdir):
 testdir.copy_example("test_example.py")
 testdir.runpytest("-k", "test_example")
 def test_example():
 pass
-.. code::
+.. code-block:: pytest
 $ pytest
 =========================== test session starts ============================
-platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
 collected 2 items
@@ -65,9 +65,9 @@ def report(issues):
 print(title)
 # print()
 # lines = body.split("\n")
-# print ("\n".join(lines[:3]))
+# print("\n".join(lines[:3]))
 # if len(lines) > 3 or len(body) > 240:
-# print ("...")
+# print("...")
 print("\n\nFound %s open issues" % len(issues))
@@ -0,0 +1,21 @@
+@echo off
+rem Source: https://github.com/appveyor/ci/blob/master/scripts/appveyor-retry.cmd
+rem initiate the retry number
+set retryNumber=0
+set maxRetries=3
+
+:RUN
+%*
+set LastErrorLevel=%ERRORLEVEL%
+IF %LastErrorLevel% == 0 GOTO :EOF
+set /a retryNumber=%retryNumber%+1
+IF %reTryNumber% == %maxRetries% (GOTO :FAILED)
+
+:RETRY
+set /a retryNumberDisp=%retryNumber%+1
+@echo Command "%*" failed with exit code %LastErrorLevel%. Retrying %retryNumberDisp% of %maxRetries%
+GOTO :RUN
+
+: FAILED
+@echo Sorry, we tried running command for %maxRetries% times and all attempts were unsuccessful!
+EXIT /B %LastErrorLevel%
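The batch file above reruns a command until it succeeds or the retry budget runs out. As a hedged, platform-neutral sketch of the same logic (``run_with_retry`` is our own illustrative name, not part of the repository):

```python
import subprocess
import sys


def run_with_retry(cmd, max_retries=3):
    """Mirror the batch script: run ``cmd`` up to ``max_retries`` times,
    stopping early on success; return the last exit code."""
    last = 0
    for attempt in range(1, max_retries + 1):
        last = subprocess.call(cmd)
        if last == 0:
            return 0  # the batch version does GOTO :EOF here
        print(
            "Command %r failed with exit code %d. Retrying %d of %d"
            % (cmd, last, attempt, max_retries)
        )
    return last  # the batch version does EXIT /B %LastErrorLevel%


print(run_with_retry([sys.executable, "-c", "pass"]))  # 0
```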
@@ -5,7 +5,7 @@ if not defined PYTEST_NO_COVERAGE (
 C:\Python36\Scripts\coverage combine
 C:\Python36\Scripts\coverage xml --ignore-errors
 C:\Python36\Scripts\coverage report -m --ignore-errors
-C:\Python36\Scripts\codecov --required -X gcov pycov search -f coverage.xml --flags %TOXENV:-= % windows
+scripts\appveyor-retry C:\Python36\Scripts\codecov --required -X gcov pycov search -f coverage.xml --flags %TOXENV:-= % windows
 ) else (
 echo Skipping coverage upload, PYTEST_NO_COVERAGE is set
 )
@@ -391,40 +391,84 @@ co_equal = compile(
 )
+@attr.s(repr=False)
 class ExceptionInfo(object):
 """ wraps sys.exc_info() objects and offers
 help for navigating the traceback.
 """
-    _striptext = ""
 _assert_start_repr = (
 "AssertionError(u'assert " if _PY2 else "AssertionError('assert "
 )
-    def __init__(self, tup=None, exprinfo=None):
-        import _pytest._code
-        if tup is None:
-            tup = sys.exc_info()
-            if exprinfo is None and isinstance(tup[1], AssertionError):
-                exprinfo = getattr(tup[1], "msg", None)
-                if exprinfo is None:
-                    exprinfo = py.io.saferepr(tup[1])
-                if exprinfo and exprinfo.startswith(self._assert_start_repr):
-                    self._striptext = "AssertionError: "
-        self._excinfo = tup
-        #: the exception class
-        self.type = tup[0]
-        #: the exception instance
-        self.value = tup[1]
-        #: the exception raw traceback
-        self.tb = tup[2]
-        #: the exception type name
-        self.typename = self.type.__name__
-        #: the exception traceback (_pytest._code.Traceback instance)
-        self.traceback = _pytest._code.Traceback(self.tb, excinfo=ref(self))
+    _excinfo = attr.ib()
+    _striptext = attr.ib(default="")
+    _traceback = attr.ib(default=None)
+    @classmethod
+    def from_current(cls, exprinfo=None):
+        """returns an ExceptionInfo matching the current traceback
+        .. warning::
+            Experimental API
+        :param exprinfo: a text string helping to determine if we should
+                         strip ``AssertionError`` from the output, defaults
+                         to the exception message/``__str__()``
+        """
+        tup = sys.exc_info()
+        _striptext = ""
+        if exprinfo is None and isinstance(tup[1], AssertionError):
+            exprinfo = getattr(tup[1], "msg", None)
+            if exprinfo is None:
+                exprinfo = py.io.saferepr(tup[1])
+            if exprinfo and exprinfo.startswith(cls._assert_start_repr):
+                _striptext = "AssertionError: "
+        return cls(tup, _striptext)
+    @classmethod
+    def for_later(cls):
+        """return an unfilled ExceptionInfo
+        """
+        return cls(None)
+    @property
+    def type(self):
+        """the exception class"""
+        return self._excinfo[0]
+    @property
+    def value(self):
+        """the exception value"""
+        return self._excinfo[1]
+    @property
+    def tb(self):
+        """the exception raw traceback"""
+        return self._excinfo[2]
+    @property
+    def typename(self):
+        """the type name of the exception"""
+        return self.type.__name__
+    @property
+    def traceback(self):
+        """the traceback"""
+        if self._traceback is None:
+            self._traceback = Traceback(self.tb, excinfo=ref(self))
+        return self._traceback
+    @traceback.setter
+    def traceback(self, value):
+        self._traceback = value
 def __repr__(self):
+        if self._excinfo is None:
+            return "<ExceptionInfo for raises contextmanager>"
 return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback))
 def exconly(self, tryshort=False):
@@ -513,6 +557,8 @@ class ExceptionInfo(object):
 return fmt.repr_excinfo(self)
 def __str__(self):
+        if self._excinfo is None:
+            return repr(self)
 entry = self.traceback[-1]
 loc = ReprFileLocation(entry.path, entry.lineno + 1, self.exconly())
 return str(loc)
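The hunk above replaces ``ExceptionInfo.__init__``'s implicit ``sys.exc_info()`` capture with explicit classmethod constructors (``from_current`` for a live exception, ``for_later`` for an unfilled placeholder). A minimal stdlib sketch of that pattern, using our own illustrative ``ExcInfo`` class rather than pytest's actual one:

```python
import sys


class ExcInfo(object):
    """Sketch: store an (type, value, traceback) triple, built via
    classmethod constructors instead of side effects in __init__."""

    def __init__(self, excinfo):
        self._excinfo = excinfo  # tuple or None for the unfilled case

    @classmethod
    def from_current(cls):
        # must be called inside an ``except`` block
        tup = sys.exc_info()
        assert tup[0] is not None, "no current exception"
        return cls(tup)

    @classmethod
    def for_later(cls):
        # unfilled placeholder, populated later (pytest uses this in ``raises``)
        return cls(None)

    @property
    def type(self):
        return self._excinfo[0]

    def __repr__(self):
        if self._excinfo is None:
            return "<ExcInfo for raises contextmanager>"
        return "<ExcInfo %s>" % self.type.__name__


try:
    1 / 0
except ZeroDivisionError:
    info = ExcInfo.from_current()

print(repr(info))  # <ExcInfo ZeroDivisionError>
```

Keeping ``__init__`` trivial is what makes the ``@attr.s`` conversion in the diff possible: attrs generates the constructor, and all capture logic lives in ``from_current``.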
@@ -270,6 +270,7 @@ class AssertionRewritingHook(object):
 _issue_config_warning(
 PytestWarning("Module already imported so cannot be rewritten: %s" % name),
 self.config,
+    stacklevel=5,
 )
 def load_module(self, name):
@@ -122,6 +122,12 @@ def assertrepr_compare(config, op, left, right):
     def isset(x):
         return isinstance(x, (set, frozenset))

+    def isdatacls(obj):
+        return getattr(obj, "__dataclass_fields__", None) is not None
+
+    def isattrs(obj):
+        return getattr(obj, "__attrs_attrs__", None) is not None
+
     def isiterable(obj):
         try:
             iter(obj)
@@ -142,6 +148,9 @@ def assertrepr_compare(config, op, left, right):
                 explanation = _compare_eq_set(left, right, verbose)
             elif isdict(left) and isdict(right):
                 explanation = _compare_eq_dict(left, right, verbose)
+            elif type(left) == type(right) and (isdatacls(left) or isattrs(left)):
+                type_fn = (isdatacls, isattrs)
+                explanation = _compare_eq_cls(left, right, verbose, type_fn)
             if isiterable(left) and isiterable(right):
                 expl = _compare_eq_iterable(left, right, verbose)
                 if explanation is not None:
@@ -155,7 +164,7 @@ def assertrepr_compare(config, op, left, right):
         explanation = [
             u"(pytest_assertion plugin: representation of details failed. "
             u"Probably an object has a faulty __repr__.)",
-            six.text_type(_pytest._code.ExceptionInfo()),
+            six.text_type(_pytest._code.ExceptionInfo.from_current()),
         ]

     if not explanation:
@@ -315,6 +324,38 @@ def _compare_eq_dict(left, right, verbose=False):
     return explanation


+def _compare_eq_cls(left, right, verbose, type_fns):
+    isdatacls, isattrs = type_fns
+    if isdatacls(left):
+        all_fields = left.__dataclass_fields__
+        fields_to_check = [field for field, info in all_fields.items() if info.compare]
+    elif isattrs(left):
+        all_fields = left.__attrs_attrs__
+        fields_to_check = [field.name for field in all_fields if field.cmp]
+
+    same = []
+    diff = []
+    for field in fields_to_check:
+        if getattr(left, field) == getattr(right, field):
+            same.append(field)
+        else:
+            diff.append(field)
+
+    explanation = []
+    if same and verbose < 2:
+        explanation.append(u"Omitting %s identical items, use -vv to show" % len(same))
+    elif same:
+        explanation += [u"Matching attributes:"]
+        explanation += pprint.pformat(same).splitlines()
+    if diff:
+        explanation += [u"Differing attributes:"]
+        for field in diff:
+            explanation += [
+                (u"%s: %r != %r") % (field, getattr(left, field), getattr(right, field))
+            ]
+    return explanation
+
+
 def _notin_text(term, text, verbose=False):
     index = text.find(term)
     head = text[:index]
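The new `_compare_eq_cls` path can be exercised in isolation; here is a sketch of the same field-by-field comparison using only the stdlib `dataclasses` module (the helper name `compare_fields` is invented for illustration):

```python
from dataclasses import dataclass, fields


@dataclass
class Point:
    x: int
    y: int


def compare_fields(left, right):
    # same idea as _compare_eq_cls: only fields declared with compare=True
    # participate, and matching vs differing fields are reported separately
    to_check = [f.name for f in fields(left) if f.compare]
    same = [n for n in to_check if getattr(left, n) == getattr(right, n)]
    diff = [n for n in to_check if getattr(left, n) != getattr(right, n)]
    return same, diff


same, diff = compare_fields(Point(1, 2), Point(1, 3))
```

The attrs branch is analogous, reading `__attrs_attrs__` and each attribute's `cmp` flag instead.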


@@ -56,7 +56,9 @@ class Cache(object):
         from _pytest.warning_types import PytestWarning

         _issue_config_warning(
-            PytestWarning(fmt.format(**args) if args else fmt), self._config
+            PytestWarning(fmt.format(**args) if args else fmt),
+            self._config,
+            stacklevel=3,
         )

     def makedir(self, name):
@@ -108,6 +110,10 @@ class Cache(object):
         """
         path = self._getvaluepath(key)
         try:
+            if path.parent.is_dir():
+                cache_dir_exists_already = True
+            else:
+                cache_dir_exists_already = self._cachedir.exists()
             path.parent.mkdir(exist_ok=True, parents=True)
         except (IOError, OSError):
             self.warn("could not create cache path {path}", path=path)
@@ -119,6 +125,7 @@ class Cache(object):
         else:
             with f:
                 json.dump(value, f, indent=2, sort_keys=True)
+            if not cache_dir_exists_already:
                 self._ensure_supporting_files()

     def _ensure_supporting_files(self):
@@ -128,8 +135,10 @@ class Cache(object):
         if not readme_path.is_file():
             readme_path.write_text(README_CONTENT)

-        msg = u"# created by pytest automatically, do not change\n*"
-        self._cachedir.joinpath(".gitignore").write_text(msg, encoding="UTF-8")
+        gitignore_path = self._cachedir.joinpath(".gitignore")
+        if not gitignore_path.is_file():
+            msg = u"# Created by pytest automatically.\n*"
+            gitignore_path.write_text(msg, encoding="UTF-8")


 class LFPlugin(object):
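The intent of the cache change above — create supporting files such as `.gitignore` only when the cache directory is new, fixing #4393 — can be sketched like this (`cache_set` is an illustrative helper, not pytest's API):

```python
import json
import tempfile
from pathlib import Path


def cache_set(cachedir, key, value):
    path = cachedir / key
    # remember whether the cache directory existed *before* this write
    cache_dir_exists_already = cachedir.exists()
    path.parent.mkdir(exist_ok=True, parents=True)
    path.write_text(json.dumps(value))
    if not cache_dir_exists_already:
        gitignore = cachedir / ".gitignore"
        if not gitignore.is_file():
            gitignore.write_text(u"# Created by pytest automatically.\n*")


root = Path(tempfile.mkdtemp())
cachedir = root / ".pytest_cache"
cache_set(cachedir, "v/example", [1, 2, 3])
# the first write created .gitignore; remove it to simulate a user deleting it
(cachedir / ".gitignore").unlink()
cache_set(cachedir, "v/other", {"k": 1})
# a write into an already-existing cache dir does not recreate it
```

This is exactly why the fix matters: users who deliberately delete the supporting files in an existing cache directory no longer get them back on every run.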


@@ -389,7 +389,6 @@ else:
 COLLECT_FAKEMODULE_ATTRIBUTES = (
     "Collector",
     "Module",
-    "Generator",
     "Function",
     "Instance",
     "Session",


@@ -11,10 +11,10 @@ import shlex
 import sys
 import types
 import warnings
-from distutils.version import LooseVersion

 import py
 import six
+from pkg_resources import parse_version
 from pluggy import HookimplMarker
 from pluggy import HookspecMarker
 from pluggy import PluginManager
@@ -191,7 +191,7 @@ def _prepareconfig(args=None, plugins=None):
         if warning:
             from _pytest.warnings import _issue_config_warning

-            _issue_config_warning(warning, config=config)
+            _issue_config_warning(warning, config=config, stacklevel=4)
         return pluginmanager.hook.pytest_cmdline_parse(
             pluginmanager=pluginmanager, args=args
         )
@@ -815,7 +815,7 @@ class Config(object):
         minver = self.inicfg.get("minversion", None)
         if minver:
-            if LooseVersion(minver) > LooseVersion(pytest.__version__):
+            if parse_version(minver) > parse_version(pytest.__version__):
                 raise pytest.UsageError(
                     "%s:%d: requires pytest-%s, actual pytest-%s'"
                     % (
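The switch from `distutils.version.LooseVersion` to `pkg_resources.parse_version` matters mostly for pre-release identifiers; a quick check of the ordering (assuming setuptools, which provides `pkg_resources`, is installed):

```python
from pkg_resources import parse_version

# PEP 440 ordering: dev releases sort *before* the final release,
# and numeric components compare numerically, not lexically
assert parse_version("4.1.dev1") < parse_version("4.1")
assert parse_version("4.10") > parse_version("4.9")

# LooseVersion, by contrast, can both mis-order pre-releases and raise
# TypeError under Python 3 when comparing mixed int/str components
# (e.g. "4.1.dev1" vs "4.1.1").
```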


@@ -42,6 +42,7 @@ def getcfg(args, config=None):
                                     CFG_PYTEST_SECTION.format(filename=inibasename)
                                 ),
                                 config=config,
+                                stacklevel=2,
                             )
                         return base, p, iniconfig["pytest"]
                     if (
@@ -116,7 +117,9 @@ def determine_setup(inifile, args, rootdir_cmd_arg=None, config=None):
                 # TODO: [pytest] section in *.cfg files is deprecated. Need refactoring once
                 # the deprecation expires.
                 _issue_config_warning(
-                    CFG_PYTEST_SECTION.format(filename=str(inifile)), config
+                    CFG_PYTEST_SECTION.format(filename=str(inifile)),
+                    config,
+                    stacklevel=2,
                 )
             break
         except KeyError:


@@ -22,28 +22,13 @@ MAIN_STR_ARGS = RemovedInPytest4Warning(
     "pass a list of arguments instead."
 )

-YIELD_TESTS = RemovedInPytest4Warning(
-    "yield tests are deprecated, and scheduled to be removed in pytest 4.0"
-)
+YIELD_TESTS = "yield tests were removed in pytest 4.0 - {name} will be ignored"

 CACHED_SETUP = RemovedInPytest4Warning(
     "cached_setup is deprecated and will be removed in a future release. "
     "Use standard fixture functions instead."
 )

-COMPAT_PROPERTY = UnformattedWarning(
-    RemovedInPytest4Warning,
-    "usage of {owner}.{name} is deprecated, please use pytest.{name} instead",
-)
-
-CUSTOM_CLASS = UnformattedWarning(
-    RemovedInPytest4Warning,
-    'use of special named "{name}" objects in collectors of type "{type_name}" to '
-    "customize the created nodes is deprecated. "
-    "Use pytest_pycollect_makeitem(...) to create custom "
-    "collection nodes instead.",
-)
-
 FUNCARG_PREFIX = UnformattedWarning(
     RemovedInPytest4Warning,
     '{name}: declaring fixtures using "pytest_funcarg__" prefix is deprecated '
@@ -92,6 +77,15 @@ NODE_WARN = RemovedInPytest4Warning(
     "Node.warn(code, message) form has been deprecated, use Node.warn(warning_instance) instead."
 )

+RAISES_EXEC = PytestDeprecationWarning(
+    "raises(..., 'code(as_a_string)') is deprecated, use the context manager form or use `exec()` directly\n\n"
+    "See https://docs.pytest.org/en/latest/deprecations.html#raises-warns-exec"
+)
+WARNS_EXEC = PytestDeprecationWarning(
+    "warns(..., 'code(as_a_string)') is deprecated, use the context manager form or use `exec()` directly.\n\n"
+    "See https://docs.pytest.org/en/latest/deprecations.html#raises-warns-exec"
+)
+
 RECORD_XML_PROPERTY = RemovedInPytest4Warning(
     'Fixture renamed from "record_xml_property" to "record_property" as user '
     "properties are now available to all reporters.\n"


@@ -1303,17 +1303,11 @@ class FixtureManager(object):
         if holderobj in self._holderobjseen:
             return

-        from _pytest.nodes import _CompatProperty
-
         self._holderobjseen.add(holderobj)
         autousenames = []
         for name in dir(holderobj):
             # The attribute can be an arbitrary descriptor, so the attribute
             # access below can raise. safe_getatt() ignores such exceptions.
-            maybe_property = safe_getattr(type(holderobj), name, None)
-            if isinstance(maybe_property, _CompatProperty):
-                # deprecated
-                continue
             obj = safe_getattr(holderobj, name, None)
             marker = getfixturemarker(obj)
             # fixture functions have a pytest_funcarg__ prefix (pre-2.3 style)


@@ -188,7 +188,7 @@ def wrap_session(config, doit):
     except Failed:
         session.exitstatus = EXIT_TESTSFAILED
     except KeyboardInterrupt:
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
         exitstatus = EXIT_INTERRUPTED
         if initstate <= 2 and isinstance(excinfo.value, exit.Exception):
             sys.stderr.write("{}: {}\n".format(excinfo.typename, excinfo.value.msg))
@@ -197,7 +197,7 @@ def wrap_session(config, doit):
         config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
         session.exitstatus = exitstatus
     except:  # noqa
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
         config.notify_exception(excinfo, config.option)
         session.exitstatus = EXIT_INTERNALERROR
         if excinfo.errisinstance(SystemExit):


@@ -13,8 +13,9 @@ import six

 import pytest
 from _pytest.fixtures import fixture
+from _pytest.pathlib import Path

-RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$")
+RE_IMPORT_ERROR_NAME = re.compile(r"^No module named (.*)$")


 @fixture
@@ -267,6 +268,9 @@ class MonkeyPatch(object):
             self._cwd = os.getcwd()
         if hasattr(path, "chdir"):
             path.chdir()
+        elif isinstance(path, Path):
+            # modern python uses the fspath protocol here LEGACY
+            os.chdir(str(path))
         else:
             os.chdir(path)


@@ -5,7 +5,6 @@ from __future__ import print_function
 import os
 import warnings

-import attr
 import py
 import six

@@ -56,22 +55,6 @@ def ischildnode(baseid, nodeid):
     return node_parts[: len(base_parts)] == base_parts


-@attr.s
-class _CompatProperty(object):
-    name = attr.ib()
-
-    def __get__(self, obj, owner):
-        if obj is None:
-            return self
-
-        from _pytest.deprecated import COMPAT_PROPERTY
-
-        warnings.warn(
-            COMPAT_PROPERTY.format(name=self.name, owner=owner.__name__), stacklevel=2
-        )
-        return getattr(__import__("pytest"), self.name)
-
-
 class Node(object):
     """ base class for Collector and Item the test collection tree.

     Collector subclasses have children, Items are terminal nodes."""
@@ -119,26 +102,8 @@ class Node(object):
         """ fspath sensitive hook proxy used to call pytest hooks"""
         return self.session.gethookproxy(self.fspath)

-    Module = _CompatProperty("Module")
-    Class = _CompatProperty("Class")
-    Instance = _CompatProperty("Instance")
-    Function = _CompatProperty("Function")
-    File = _CompatProperty("File")
-    Item = _CompatProperty("Item")
-
-    def _getcustomclass(self, name):
-        maybe_compatprop = getattr(type(self), name)
-        if isinstance(maybe_compatprop, _CompatProperty):
-            return getattr(__import__("pytest"), name)
-        else:
-            from _pytest.deprecated import CUSTOM_CLASS
-
-            cls = getattr(self, name)
-            self.warn(CUSTOM_CLASS.format(name=name, type_name=type(self).__name__))
-        return cls
-
     def __repr__(self):
-        return "<%s %r>" % (self.__class__.__name__, getattr(self, "name", None))
+        return "<%s %s>" % (self.__class__.__name__, getattr(self, "name", None))

     def warn(self, _code_or_warning=None, message=None, code=None):
         """Issue a warning for this item.


@@ -23,20 +23,15 @@ def get_skip_exceptions():
 def pytest_runtest_makereport(item, call):
     if call.excinfo and call.excinfo.errisinstance(get_skip_exceptions()):
         # let's substitute the excinfo with a pytest.skip one
-        call2 = call.__class__(lambda: runner.skip(str(call.excinfo.value)), call.when)
+        call2 = runner.CallInfo.from_call(
+            lambda: runner.skip(str(call.excinfo.value)), call.when
+        )
         call.excinfo = call2.excinfo


 @hookimpl(trylast=True)
 def pytest_runtest_setup(item):
     if is_potential_nosetest(item):
-        if isinstance(item.parent, python.Generator):
-            gen = item.parent
-            if not hasattr(gen, "_nosegensetup"):
-                call_optional(gen.obj, "setup")
-                if isinstance(gen.parent, python.Instance):
-                    call_optional(gen.parent.obj, "setup")
-                gen._nosegensetup = True
         if not call_optional(item.obj, "setup"):
             # call module level setup if there is no object level one
             call_optional(item.parent.obj, "setup")
@@ -53,11 +48,6 @@ def teardown_nose(item):
     #     del item.parent._nosegensetup


-def pytest_make_collect_report(collector):
-    if isinstance(collector, python.Generator):
-        call_optional(collector.obj, "setup")
-
-
 def is_potential_nosetest(item):
     # extra check needed since we do not do nose style setup/teardown
     # on direct unittest style classes


@@ -38,6 +38,7 @@ from _pytest.compat import safe_str
 from _pytest.compat import STRING_TYPES
 from _pytest.config import hookimpl
 from _pytest.main import FSHookProxy
+from _pytest.mark import MARK_GEN
 from _pytest.mark.structures import get_unpacked_marks
 from _pytest.mark.structures import normalize_mark_list
 from _pytest.mark.structures import transfer_markers
@@ -199,7 +200,6 @@ def pytest_pycollect_makeitem(collector, name, obj):
     # nothing was collected elsewhere, let's do it here
     if safe_isclass(obj):
         if collector.istestclass(obj, name):
-            Class = collector._getcustomclass("Class")
             outcome.force_result(Class(name, parent=collector))
     elif collector.istestfunction(obj, name):
         # mock seems to store unbound methods (issue473), normalize it
@@ -219,7 +219,10 @@ def pytest_pycollect_makeitem(collector, name, obj):
             )
         elif getattr(obj, "__test__", True):
             if is_generator(obj):
-                res = Generator(name, parent=collector)
+                res = Function(name, parent=collector)
+                reason = deprecated.YIELD_TESTS.format(name=name)
+                res.add_marker(MARK_GEN.xfail(run=False, reason=reason))
+                res.warn(PytestWarning(reason))
             else:
                 res = list(collector._genfunctions(name, obj))
             outcome.force_result(res)
@@ -408,7 +411,6 @@ class PyCollector(PyobjMixin, nodes.Collector):
         else:
             self.ihook.pytest_generate_tests(metafunc=metafunc)

-        Function = self._getcustomclass("Function")
         if not metafunc._calls:
             yield Function(name, parent=self, fixtureinfo=fixtureinfo)
         else:
@@ -450,7 +452,7 @@ class Module(nodes.File, PyCollector):
             mod = self.fspath.pyimport(ensuresyspath=importmode)
         except SyntaxError:
             raise self.CollectError(
-                _pytest._code.ExceptionInfo().getrepr(style="short")
+                _pytest._code.ExceptionInfo.from_current().getrepr(style="short")
             )
         except self.fspath.ImportMismatchError:
             e = sys.exc_info()[1]
@@ -466,7 +468,7 @@ class Module(nodes.File, PyCollector):
         except ImportError:
             from _pytest._code.code import ExceptionInfo

-            exc_info = ExceptionInfo()
+            exc_info = ExceptionInfo.from_current()
             if self.config.getoption("verbose") < 2:
                 exc_info.traceback = exc_info.traceback.filter(filter_traceback)
             exc_repr = (
@@ -648,7 +650,7 @@ class Class(PyCollector):
                 )
             )
             return []
-        return [self._getcustomclass("Instance")(name="()", parent=self)]
+        return [Instance(name="()", parent=self)]

     def setup(self):
         setup_class = _get_xunit_func(self.obj, "setup_class")
@@ -739,51 +741,6 @@ class FunctionMixin(PyobjMixin):
         return self._repr_failure_py(excinfo, style=style)


-class Generator(FunctionMixin, PyCollector):
-    def collect(self):
-        # test generators are seen as collectors but they also
-        # invoke setup/teardown on popular request
-        # (induced by the common "test_*" naming shared with normal tests)
-        from _pytest import deprecated
-
-        self.warn(deprecated.YIELD_TESTS)
-        self.session._setupstate.prepare(self)
-        # see FunctionMixin.setup and test_setupstate_is_preserved_134
-        self._preservedparent = self.parent.obj
-        values = []
-        seen = {}
-        _Function = self._getcustomclass("Function")
-        for i, x in enumerate(self.obj()):
-            name, call, args = self.getcallargs(x)
-            if not callable(call):
-                raise TypeError("%r yielded non callable test %r" % (self.obj, call))
-            if name is None:
-                name = "[%d]" % i
-            else:
-                name = "['%s']" % name
-            if name in seen:
-                raise ValueError(
-                    "%r generated tests with non-unique name %r" % (self, name)
-                )
-            seen[name] = True
-            values.append(_Function(name, self, args=args, callobj=call))
-        return values
-
-    def getcallargs(self, obj):
-        if not isinstance(obj, (tuple, list)):
-            obj = (obj,)
-        # explicit naming
-        if isinstance(obj[0], six.string_types):
-            name = obj[0]
-            obj = obj[1:]
-        else:
-            name = None
-        call, args = obj[0], obj[1:]
-        return name, call, args
-
-
 def hasinit(obj):
     init = getattr(obj, "__init__", None)
     if init:
@@ -1326,8 +1283,7 @@ def _showfixtures_main(config, session):
             tw.line("    %s: no docstring available" % (loc,), red=True)


-def write_docstring(tw, doc):
-    INDENT = "    "
+def write_docstring(tw, doc, indent="    "):
     doc = doc.rstrip()
     if "\n" in doc:
         firstline, rest = doc.split("\n", 1)
@@ -1335,11 +1291,11 @@ def write_docstring(tw, doc):
         firstline, rest = doc, ""

     if firstline.strip():
-        tw.line(INDENT + firstline.strip())
+        tw.line(indent + firstline.strip())
     if rest:
         for line in dedent(rest).split("\n"):
-            tw.write(INDENT + line + "\n")
+            tw.write(indent + line + "\n")


 class Function(FunctionMixin, nodes.Item, fixtures.FuncargnamesCompatAttr):


@@ -1,6 +1,9 @@
+from __future__ import absolute_import
+
 import math
 import pprint
 import sys
+import warnings
 from decimal import Decimal
 from numbers import Number

@@ -14,6 +17,7 @@ from _pytest.compat import isclass
 from _pytest.compat import Mapping
 from _pytest.compat import Sequence
 from _pytest.compat import STRING_TYPES
+from _pytest.deprecated import RAISES_EXEC
 from _pytest.outcomes import fail

 BASE_TYPE = (type, STRING_TYPES)
@@ -604,9 +608,9 @@ def raises(expected_exception, *args, **kwargs):
         >>> with raises(ValueError, match=r'must be \d+$'):
         ...     raise ValueError("value must be 42")

-    **Legacy forms**
+    **Legacy form**

-    The forms below are fully supported but are discouraged for new code because the
+    The form below is fully supported but discouraged for new code because the
     context manager form is regarded as more readable and less error-prone.

     It is possible to specify a callable by passing a to-be-called lambda::
@@ -623,14 +627,6 @@ def raises(expected_exception, *args, **kwargs):
         >>> raises(ZeroDivisionError, f, x=0)
         <ExceptionInfo ...>

-    It is also possible to pass a string to be evaluated at runtime::
-
-        >>> raises(ZeroDivisionError, "f(0)")
-        <ExceptionInfo ...>
-
-    The string will be evaluated using the same ``locals()`` and ``globals()``
-    at the moment of the ``raises`` call.
-
     .. currentmodule:: _pytest._code

     Consult the API of ``excinfo`` objects: :class:`ExceptionInfo`.
@@ -672,6 +668,7 @@ def raises(expected_exception, *args, **kwargs):
             raise TypeError(msg)
         return RaisesContext(expected_exception, message, match_expr)
     elif isinstance(args[0], str):
+        warnings.warn(RAISES_EXEC, stacklevel=2)
         code, = args
         assert isinstance(code, str)
         frame = sys._getframe(1)
@@ -679,18 +676,18 @@ def raises(expected_exception, *args, **kwargs):
         loc.update(kwargs)
         # print "raises frame scope: %r" % frame.f_locals
         try:
-            code = _pytest._code.Source(code).compile()
+            code = _pytest._code.Source(code).compile(_genframe=frame)
             six.exec_(code, frame.f_globals, loc)
             # XXX didn't mean f_globals == f_locals something special?
             #     this is destroyed here ...
         except expected_exception:
-            return _pytest._code.ExceptionInfo()
+            return _pytest._code.ExceptionInfo.from_current()
     else:
         func = args[0]
         try:
             func(*args[1:], **kwargs)
         except expected_exception:
-            return _pytest._code.ExceptionInfo()
+            return _pytest._code.ExceptionInfo.from_current()
     fail(message)

@@ -705,7 +702,7 @@ class RaisesContext(object):
         self.excinfo = None

     def __enter__(self):
-        self.excinfo = object.__new__(_pytest._code.ExceptionInfo)
+        self.excinfo = _pytest._code.ExceptionInfo.for_later()
         return self.excinfo

     def __exit__(self, *tp):
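As the new `RAISES_EXEC` deprecation message suggests, the string form can be replaced either by the context manager or by calling `exec()` directly. A minimal sketch of the latter (the helper `raises_via_exec` is invented for illustration; no pytest required):

```python
def raises_via_exec(expected_exception, code, namespace=None):
    # replacement for the deprecated raises(Exc, "code-as-a-string") form:
    # run the snippet with exec() and catch the expected exception ourselves
    try:
        exec(code, namespace or {})
    except expected_exception as exc:
        return exc
    raise AssertionError("DID NOT RAISE {!r}".format(expected_exception))


exc = raises_via_exec(ZeroDivisionError, "1 / 0")
```

Unlike the deprecated form, the caller controls the namespace explicitly instead of `raises` reaching into the caller's frame with `sys._getframe(1)`.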


@@ -11,6 +11,7 @@ import warnings
 import six

 import _pytest._code
+from _pytest.deprecated import WARNS_EXEC
 from _pytest.fixtures import yield_fixture
 from _pytest.outcomes import fail

@@ -89,6 +90,7 @@ def warns(expected_warning, *args, **kwargs):
             match_expr = kwargs.pop("match")
         return WarningsChecker(expected_warning, match_expr=match_expr)
     elif isinstance(args[0], str):
+        warnings.warn(WARNS_EXEC, stacklevel=2)
         code, = args
         assert isinstance(code, str)
         frame = sys._getframe(1)


@@ -36,7 +36,7 @@ def pytest_configure(config):
         from _pytest.deprecated import RESULT_LOG
         from _pytest.warnings import _issue_config_warning

-        _issue_config_warning(RESULT_LOG, config)
+        _issue_config_warning(RESULT_LOG, config, stacklevel=2)


 def pytest_unconfigure(config):


@@ -8,6 +8,7 @@ import os
 import sys
 from time import time

+import attr
 import six

 from .reports import CollectErrorRepr
@@ -189,43 +190,57 @@ def check_interactive_exception(call, report):
 def call_runtest_hook(item, when, **kwds):
     hookname = "pytest_runtest_" + when
     ihook = getattr(item.ihook, hookname)
-    return CallInfo(
+    return CallInfo.from_call(
         lambda: ihook(item=item, **kwds),
         when=when,
-        treat_keyboard_interrupt_as_exception=item.config.getvalue("usepdb"),
+        reraise=KeyboardInterrupt if not item.config.getvalue("usepdb") else (),
     )


+@attr.s(repr=False)
 class CallInfo(object):
     """ Result/Exception info a function invocation. """

-    #: None or ExceptionInfo object.
-    excinfo = None  # type: Optional[ExceptionInfo]
+    _result = attr.ib()
+    excinfo = attr.ib()
+    start = attr.ib()
+    stop = attr.ib()
+    when = attr.ib()

-    def __init__(self, func, when, treat_keyboard_interrupt_as_exception=False):
+    @property
+    def result(self):
+        if self.excinfo is not None:
+            raise AttributeError("{!r} has no valid result".format(self))
+        return self._result
+
+    @classmethod
+    def from_call(cls, func, when, reraise=None):
         #: context of invocation: one of "setup", "call",
         #: "teardown", "memocollect"
-        self.when = when
-        self.start = time()
+        start = time()
+        excinfo = None
         try:
-            self.result = func()
-        except KeyboardInterrupt:
-            if treat_keyboard_interrupt_as_exception:
-                self.excinfo = ExceptionInfo()
-            else:
-                self.stop = time()
-                raise
+            result = func()
         except:  # noqa
-            self.excinfo = ExceptionInfo()
-        self.stop = time()
+            excinfo = ExceptionInfo.from_current()
+            if reraise is not None and excinfo.errisinstance(reraise):
+                raise
+            result = None
+        stop = time()
+        return cls(start=start, stop=stop, when=when, result=result, excinfo=excinfo)

     def __repr__(self):
-        if self.excinfo:
-            status = "exception: %s" % str(self.excinfo.value)
+        if self.excinfo is not None:
+            status = "exception"
+            value = self.excinfo.value
         else:
-            result = getattr(self, "result", "<NOTSET>")
-            status = "result: %r" % (result,)
-        return "<CallInfo when=%r %s>" % (self.when, status)
+            # TODO: investigate unification
+            value = repr(self._result)
+            status = "result"
+        return "<CallInfo when={when!r} {status}: {value}>".format(
+            when=self.when, value=value, status=status
+        )


 def pytest_runtest_makereport(item, call):
@@ -269,7 +284,7 @@ def pytest_runtest_makereport(item, call):
 def pytest_make_collect_report(collector):
-    call = CallInfo(lambda: list(collector.collect()), "collect")
+    call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
     longrepr = None
     if not call.excinfo:
         outcome = "passed"
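The `CallInfo` rewrite replaces a mutating `__init__` with an attrs-style value object built by a classmethod. The control flow can be sketched without attrs (`CallInfoSketch` is an invented name; `excinfo` is simplified to a raw `sys.exc_info()` tuple):

```python
import sys
from time import time


class CallInfoSketch:
    def __init__(self, result, excinfo, start, stop, when):
        self._result = result
        self.excinfo = excinfo  # None, or a (type, value, tb) tuple
        self.start, self.stop, self.when = start, stop, when

    @property
    def result(self):
        # mirrors the new behavior: accessing .result on a failed call
        # raises instead of the attribute silently being absent
        if self.excinfo is not None:
            raise AttributeError("{!r} has no valid result".format(self))
        return self._result

    @classmethod
    def from_call(cls, func, when, reraise=None):
        start = time()
        excinfo = None
        result = None
        try:
            result = func()
        except BaseException:
            excinfo = sys.exc_info()
            # `reraise` plays the role of the old
            # treat_keyboard_interrupt_as_exception flag: selected
            # exception types propagate instead of being captured
            if reraise is not None and issubclass(excinfo[0], reraise):
                raise
        return cls(result, excinfo, start, time(), when)


ok = CallInfoSketch.from_call(lambda: 42, "call")
failed = CallInfoSketch.from_call(lambda: 1 // 0, "call")
```

Either `result` or `excinfo` is set, never both, which is what lets `__repr__` branch cleanly on `excinfo is not None`.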


@ -647,9 +647,11 @@ class TerminalReporter(object):
def pytest_terminal_summary(self): def pytest_terminal_summary(self):
self.summary_errors() self.summary_errors()
self.summary_failures() self.summary_failures()
yield
self.summary_warnings() self.summary_warnings()
yield
self.summary_passes() self.summary_passes()
+        # Display any extra warnings from teardown here (if any).
+        self.summary_warnings()

     def pytest_keyboard_interrupt(self, excinfo):
         self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True)

@@ -726,11 +728,21 @@ class TerminalReporter(object):
             if not all_warnings:
                 return

+            final = hasattr(self, "_already_displayed_warnings")
+            if final:
+                warnings = all_warnings[self._already_displayed_warnings :]
+            else:
+                warnings = all_warnings
+            self._already_displayed_warnings = len(warnings)
+            if not warnings:
+                return
+
             grouped = itertools.groupby(
-                all_warnings, key=lambda wr: wr.get_location(self.config)
+                warnings, key=lambda wr: wr.get_location(self.config)
             )

-            self.write_sep("=", "warnings summary", yellow=True, bold=False)
+            title = "warnings summary (final)" if final else "warnings summary"
+            self.write_sep("=", title, yellow=True, bold=False)
             for location, warning_records in grouped:
                 # legacy warnings show their location explicitly, while standard warnings look better without
                 # it because the location is already formatted into the message
@@ -786,8 +798,7 @@ class TerminalReporter(object):
                 self.write_line(line)
             else:
                 msg = self._getfailureheadline(rep)
-                markup = {"red": True, "bold": True}
-                self.write_sep("_", msg, **markup)
+                self.write_sep("_", msg, red=True, bold=True)
                 self._outrep_summary(rep)
                 for report in self.getreports(""):
                     if report.nodeid == rep.nodeid and report.when == "teardown":
@@ -808,7 +819,7 @@ class TerminalReporter(object):
                 msg = "ERROR at setup of " + msg
             elif rep.when == "teardown":
                 msg = "ERROR at teardown of " + msg
-            self.write_sep("_", msg)
+            self.write_sep("_", msg, red=True, bold=True)
             self._outrep_summary(rep)

     def _outrep_summary(self, rep):
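As a standalone illustration of the hunk above: the reporter now remembers how many warnings it already displayed, so the final summary only shows warnings raised since then (e.g. during teardown). A minimal sketch of that slicing logic, with hypothetical names outside pytest:

```python
def split_new_warnings(all_warnings, already_displayed):
    """Return (is_final, warnings_not_yet_shown).

    `already_displayed` is None on the first summary, and the count of
    previously shown warnings on the final summary (hypothetical API,
    mirroring the `_already_displayed_warnings` attribute in the diff).
    """
    final = already_displayed is not None
    if final:
        new = all_warnings[already_displayed:]
    else:
        new = all_warnings
    return final, new

# first summary: everything is new
assert split_new_warnings(["w1", "w2"], None) == (False, ["w1", "w2"])
# final summary: only warnings raised after the first summary appear
assert split_new_warnings(["w1", "w2", "w3"], 2) == (True, ["w3"])
```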

View File

@@ -10,6 +10,7 @@ import warnings

 import attr
 import py
+import six

 import pytest
 from .pathlib import ensure_reset_dir
@@ -26,7 +27,14 @@ class TempPathFactory(object):

     The base directory can be configured using the ``--basetemp`` option."""

-    _given_basetemp = attr.ib()
+    _given_basetemp = attr.ib(
+        # using os.path.abspath() to get absolute path instead of resolve() as it
+        # does not work the same in all platforms (see #4427)
+        # Path.absolute() exists, but it is not public (see https://bugs.python.org/issue25012)
+        convert=attr.converters.optional(
+            lambda p: Path(os.path.abspath(six.text_type(p)))
+        )
+    )
     _trace = attr.ib()
     _basetemp = attr.ib(default=None)

@@ -53,7 +61,7 @@ class TempPathFactory(object):
         """ return base temporary directory. """
         if self._basetemp is None:
            if self._given_basetemp is not None:
-                basetemp = Path(self._given_basetemp)
+                basetemp = self._given_basetemp
                 ensure_reset_dir(basetemp)
             else:
                 from_env = os.environ.get("PYTEST_DEBUG_TEMPROOT")
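The converter above makes a relative ``--basetemp`` absolute with ``os.path.abspath()`` rather than ``Path.resolve()``, since the latter behaves differently across platforms for non-existing paths. A small sketch of the same idea, with a hypothetical helper name:

```python
import os
from pathlib import Path


def normalize_basetemp(given):
    # Mirror the attr converter from the diff: make a user-supplied
    # --basetemp absolute without requiring the path to exist.
    # (hypothetical free function; pytest does this inline via attrs)
    if given is None:
        return None
    return Path(os.path.abspath(str(given)))


p = normalize_basetemp("does-not-exist/tmp")
assert p.is_absolute()  # abspath works even for paths that do not exist
assert normalize_basetemp(None) is None  # the optional() wrapper passes None through
```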

View File

@@ -115,6 +115,10 @@ class TestCaseFunction(Function):
         rawexcinfo = getattr(rawexcinfo, "_rawexcinfo", rawexcinfo)
         try:
             excinfo = _pytest._code.ExceptionInfo(rawexcinfo)
+            # invoke the attributes to trigger storing the traceback
+            # trial causes some issue there
+            excinfo.value
+            excinfo.traceback
         except TypeError:
             try:
                 try:
@@ -136,7 +140,7 @@ class TestCaseFunction(Function):
             except KeyboardInterrupt:
                 raise
             except fail.Exception:
-                excinfo = _pytest._code.ExceptionInfo()
+                excinfo = _pytest._code.ExceptionInfo.from_current()
         self.__dict__.setdefault("_excinfo", []).append(excinfo)

     def addError(self, testcase, rawexcinfo):
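Throughout this commit, the bare ``ExceptionInfo()`` constructor is replaced by an explicit ``ExceptionInfo.from_current()`` classmethod that captures the exception currently being handled. A minimal stand-in (hypothetical class, not pytest's actual implementation) showing the pattern:

```python
import sys


class ExcInfoSketch(object):
    # hypothetical stand-in for _pytest._code.ExceptionInfo
    def __init__(self, excinfo):
        self.type, self.value, self.tb = excinfo

    @classmethod
    def from_current(cls):
        # capture the exception currently being handled, which is what
        # the ExceptionInfo.from_current() calls in this diff rely on
        excinfo = sys.exc_info()
        assert excinfo[0] is not None, "no exception is being handled"
        return cls(excinfo)


try:
    raise ValueError("boom")
except ValueError:
    info = ExcInfoSketch.from_current()

assert info.type is ValueError
assert str(info.value) == "boom"
```

Making the capture an explicit classmethod (instead of implicit behavior of the zero-argument constructor) lets the constructor be reserved for wrapping an existing ``exc_info`` tuple, as ``addError`` does above.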

View File

@@ -160,7 +160,7 @@ def pytest_terminal_summary(terminalreporter):
     yield


-def _issue_config_warning(warning, config):
+def _issue_config_warning(warning, config, stacklevel):
     """
     This function should be used instead of calling ``warnings.warn`` directly when we are in the "configure" stage:
    at this point the actual options might not have been set, so we manually trigger the pytest_warning_captured
@@ -168,10 +168,11 @@ def _issue_config_warning(warning, config):

     :param warning: the warning instance.
     :param config:
+    :param stacklevel: stacklevel forwarded to warnings.warn
     """
     with warnings.catch_warnings(record=True) as records:
         warnings.simplefilter("always", type(warning))
-        warnings.warn(warning, stacklevel=2)
+        warnings.warn(warning, stacklevel=stacklevel)
     config.hook.pytest_warning_captured.call_historic(
         kwargs=dict(warning_message=records[0], when="config", item=None)
     )
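The helper above records the warning with ``warnings.catch_warnings(record=True)`` instead of letting it propagate, then hands the record to the hook. A self-contained sketch of that capture mechanism (hypothetical function name; the ``stacklevel`` pass-through mirrors the new parameter):

```python
import warnings


def capture_warning(warning, stacklevel=2):
    # Record the warning instead of emitting it, like _issue_config_warning:
    # "always" ensures the filter state cannot swallow it, and the recorded
    # entry is what would be forwarded to pytest_warning_captured.
    with warnings.catch_warnings(record=True) as records:
        warnings.simplefilter("always", type(warning))
        warnings.warn(warning, stacklevel=stacklevel)
    return records[0]


rec = capture_warning(DeprecationWarning("old option"))
assert isinstance(rec.message, DeprecationWarning)
assert str(rec.message) == "old option"
```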

View File

@@ -28,7 +28,6 @@ from _pytest.outcomes import skip
 from _pytest.outcomes import xfail
 from _pytest.python import Class
 from _pytest.python import Function
-from _pytest.python import Generator
 from _pytest.python import Instance
 from _pytest.python import Module
 from _pytest.python import Package
@@ -57,7 +56,6 @@ __all__ = [
     "fixture",
     "freeze_includes",
     "Function",
-    "Generator",
     "hookimpl",
     "hookspec",
     "importorskip",

View File

@@ -206,7 +206,7 @@ class TestGeneralUsage(object):
         testdir.makeconftest(
             """
             import sys
-            print ("should not be seen")
+            print("should not be seen")
             sys.stderr.write("stder42\\n")
             """
         )
@@ -218,7 +218,7 @@ class TestGeneralUsage(object):
     def test_conftest_printing_shows_if_error(self, testdir):
         testdir.makeconftest(
             """
-            print ("should be seen")
+            print("should be seen")
             assert 0
             """
         )
@@ -301,7 +301,7 @@ class TestGeneralUsage(object):
             def pytest_generate_tests(metafunc):
                 metafunc.addcall({'x': 3}, id='hello-123')
             def pytest_runtest_setup(item):
-                print (item.keywords)
+                print(item.keywords)
                 if 'hello-123' in item.keywords:
                     pytest.skip("hello")
                     assert 0

View File

@@ -37,7 +37,7 @@ def test_code_with_class():
     class A(object):
         pass

-    pytest.raises(TypeError, "_pytest._code.Code(A)")
+    pytest.raises(TypeError, _pytest._code.Code, A)


 def x():
@@ -169,7 +169,7 @@ class TestExceptionInfo(object):
         else:
             assert False
     except AssertionError:
-        exci = _pytest._code.ExceptionInfo()
+        exci = _pytest._code.ExceptionInfo.from_current()
     assert exci.getrepr()
@@ -181,7 +181,7 @@ class TestTracebackEntry(object):
         else:
             assert False
     except AssertionError:
-        exci = _pytest._code.ExceptionInfo()
+        exci = _pytest._code.ExceptionInfo.from_current()
     entry = exci.traceback[0]
     source = entry.getsource()
     assert len(source) == 6

View File

@@ -71,7 +71,7 @@ def test_excinfo_simple():
     try:
         raise ValueError
     except ValueError:
-        info = _pytest._code.ExceptionInfo()
+        info = _pytest._code.ExceptionInfo.from_current()
     assert info.type == ValueError
@@ -85,7 +85,7 @@ def test_excinfo_getstatement():
     try:
         f()
     except ValueError:
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
     linenumbers = [
         _pytest._code.getrawcode(f).co_firstlineno - 1 + 4,
         _pytest._code.getrawcode(f).co_firstlineno - 1 + 1,
@@ -126,7 +126,7 @@ class TestTraceback_f_g_h(object):
         try:
             h()
         except ValueError:
-            self.excinfo = _pytest._code.ExceptionInfo()
+            self.excinfo = _pytest._code.ExceptionInfo.from_current()

     def test_traceback_entries(self):
         tb = self.excinfo.traceback
@@ -163,7 +163,7 @@ class TestTraceback_f_g_h(object):
         try:
             exec(source.compile())
         except NameError:
-            tb = _pytest._code.ExceptionInfo().traceback
+            tb = _pytest._code.ExceptionInfo.from_current().traceback
         print(tb[-1].getsource())
         s = str(tb[-1].getsource())
         assert s.startswith("def xyz():\n try:")
@@ -180,7 +180,8 @@ class TestTraceback_f_g_h(object):
     def test_traceback_cut_excludepath(self, testdir):
         p = testdir.makepyfile("def f(): raise ValueError")
-        excinfo = pytest.raises(ValueError, "p.pyimport().f()")
+        with pytest.raises(ValueError) as excinfo:
+            p.pyimport().f()
         basedir = py.path.local(pytest.__file__).dirpath()
         newtraceback = excinfo.traceback.cut(excludepath=basedir)
         for x in newtraceback:
@@ -336,7 +337,8 @@ class TestTraceback_f_g_h(object):
 def test_excinfo_exconly():
     excinfo = pytest.raises(ValueError, h)
     assert excinfo.exconly().startswith("ValueError")
-    excinfo = pytest.raises(ValueError, "raise ValueError('hello\\nworld')")
+    with pytest.raises(ValueError) as excinfo:
+        raise ValueError("hello\nworld")
     msg = excinfo.exconly(tryshort=True)
     assert msg.startswith("ValueError")
     assert msg.endswith("world")
@@ -356,6 +358,12 @@ def test_excinfo_str():
     assert len(s.split(":")) >= 3  # on windows it's 4


+def test_excinfo_for_later():
+    e = ExceptionInfo.for_later()
+    assert "for raises" in repr(e)
+    assert "for raises" in str(e)
+
+
 def test_excinfo_errisinstance():
     excinfo = pytest.raises(ValueError, h)
     assert excinfo.errisinstance(ValueError)
@@ -365,7 +373,7 @@ def test_excinfo_no_sourcecode():
     try:
         exec("raise ValueError()")
     except ValueError:
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
     s = str(excinfo.traceback[-1])
     assert s == " File '<string>':1 in <module>\n ???\n"
@@ -390,7 +398,7 @@ def test_entrysource_Queue_example():
     try:
         queue.Queue().get(timeout=0.001)
     except queue.Empty:
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
     entry = excinfo.traceback[-1]
     source = entry.getsource()
     assert source is not None
@@ -402,7 +410,7 @@ def test_codepath_Queue_example():
     try:
         queue.Queue().get(timeout=0.001)
     except queue.Empty:
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
     entry = excinfo.traceback[-1]
     path = entry.path
     assert isinstance(path, py.path.local)
@@ -453,7 +461,7 @@ class TestFormattedExcinfo(object):
             except KeyboardInterrupt:
                 raise
             except:  # noqa
-                return _pytest._code.ExceptionInfo()
+                return _pytest._code.ExceptionInfo.from_current()
             assert 0, "did not raise"

     def test_repr_source(self):
@@ -491,7 +499,7 @@ class TestFormattedExcinfo(object):
         try:
             exec(co)
         except ValueError:
-            excinfo = _pytest._code.ExceptionInfo()
+            excinfo = _pytest._code.ExceptionInfo.from_current()
         repr = pr.repr_excinfo(excinfo)
         assert repr.reprtraceback.reprentries[1].lines[0] == "> ???"
         if sys.version_info[0] >= 3:
@@ -510,7 +518,7 @@ raise ValueError()
         try:
             exec(co)
         except ValueError:
-            excinfo = _pytest._code.ExceptionInfo()
+            excinfo = _pytest._code.ExceptionInfo.from_current()
         repr = pr.repr_excinfo(excinfo)
         assert repr.reprtraceback.reprentries[1].lines[0] == "> ???"
         if sys.version_info[0] >= 3:
@@ -1340,7 +1348,7 @@ def test_repr_traceback_with_unicode(style, encoding):
     try:
         raise RuntimeError(msg)
     except RuntimeError:
-        e_info = ExceptionInfo()
+        e_info = ExceptionInfo.from_current()
     formatter = FormattedExcinfo(style=style)
     repr_traceback = formatter.repr_traceback(e_info)
     assert repr_traceback is not None
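The other recurring migration in this file replaces the deprecated string form ``pytest.raises(Exc, "expr")`` (which evaluated the string) with the context-manager form. A tiny stand-alone stand-in (hypothetical ``raises_sketch``, not pytest's implementation) showing why the context-manager style is equivalent and keeps the code as real statements:

```python
from contextlib import contextmanager


@contextmanager
def raises_sketch(exc_type):
    # minimal stand-in for `with pytest.raises(exc_type) as excinfo:`;
    # the body raises real statements instead of an eval'd string
    caught = {}
    try:
        yield caught
    except exc_type as e:
        caught["value"] = e
    else:
        raise AssertionError("did not raise %s" % exc_type.__name__)


with raises_sketch(ValueError) as caught:
    raise ValueError("hello\nworld")

assert str(caught["value"]).startswith("hello")
```

Real statements in the ``with`` body get syntax checking, editor support, and honest tracebacks, which is the motivation for deprecating the string form.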

View File

@@ -6,6 +6,7 @@ from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function

+import ast
 import inspect
 import sys

@@ -14,7 +15,6 @@ import six
 import _pytest._code
 import pytest
 from _pytest._code import Source
-from _pytest._code.source import ast

 astonly = pytest.mark.nothing
@@ -306,8 +306,6 @@ class TestSourceParsingAndCompiling(object):
         pytest.raises(SyntaxError, lambda: source.getstatementrange(0))

     def test_compile_to_ast(self):
-        import ast
-
         source = Source("x = 4")
         mod = source.compile(flag=ast.PyCF_ONLY_AST)
         assert isinstance(mod, ast.Module)
@@ -317,10 +315,9 @@ class TestSourceParsingAndCompiling(object):
         co = self.source.compile()
         six.exec_(co, globals())
         f(7)
-        excinfo = pytest.raises(AssertionError, "f(6)")
+        excinfo = pytest.raises(AssertionError, f, 6)
         frame = excinfo.traceback[-1].frame
         stmt = frame.code.fullsource.getstatement(frame.lineno)
-        # print "block", str(block)
         assert str(stmt).strip().startswith("assert")

     @pytest.mark.parametrize("name", ["", None, "my"])
@@ -361,17 +358,13 @@ def test_getline_finally():
     def c():
         pass

-    excinfo = pytest.raises(
-        TypeError,
-        """
-        teardown = None
-        try:
-            c(1)
-        finally:
-            if teardown:
-                teardown()
-        """,
-    )
+    with pytest.raises(TypeError) as excinfo:
+        teardown = None
+        try:
+            c(1)
+        finally:
+            if teardown:
+                teardown()
     source = excinfo.traceback[-1].statement
     assert str(source).strip() == "c(1)"

View File

@@ -10,47 +10,6 @@ from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG
 pytestmark = pytest.mark.pytester_example_path("deprecated")


-def test_yield_tests_deprecation(testdir):
-    testdir.makepyfile(
-        """
-        def func1(arg, arg2):
-            assert arg == arg2
-        def test_gen():
-            yield "m1", func1, 15, 3*5
-            yield "m2", func1, 42, 6*7
-        def test_gen2():
-            for k in range(10):
-                yield func1, 1, 1
-        """
-    )
-    result = testdir.runpytest(SHOW_PYTEST_WARNINGS_ARG)
-    result.stdout.fnmatch_lines(
-        [
-            "*test_yield_tests_deprecation.py:3:*yield tests are deprecated*",
-            "*test_yield_tests_deprecation.py:6:*yield tests are deprecated*",
-            "*2 passed*",
-        ]
-    )
-    assert result.stdout.str().count("yield tests are deprecated") == 2
-
-
-def test_compat_properties_deprecation(testdir):
-    testdir.makepyfile(
-        """
-        def test_foo(request):
-            print(request.node.Module)
-        """
-    )
-    result = testdir.runpytest(SHOW_PYTEST_WARNINGS_ARG)
-    result.stdout.fnmatch_lines(
-        [
-            "*test_compat_properties_deprecation.py:2:*usage of Function.Module is deprecated, "
-            "please use pytest.Module instead*",
-            "*1 passed, 1 warnings in*",
-        ]
-    )
-
-
 def test_cached_setup_deprecation(testdir):
     testdir.makepyfile(
         """
@@ -72,36 +31,6 @@ def test_cached_setup_deprecation(testdir):
     )


-def test_custom_class_deprecation(testdir):
-    testdir.makeconftest(
-        """
-        import pytest
-        class MyModule(pytest.Module):
-            class Class(pytest.Class):
-                pass
-        def pytest_pycollect_makemodule(path, parent):
-            return MyModule(path, parent)
-        """
-    )
-    testdir.makepyfile(
-        """
-        class Test:
-            def test_foo(self):
-                pass
-        """
-    )
-    result = testdir.runpytest(SHOW_PYTEST_WARNINGS_ARG)
-    result.stdout.fnmatch_lines(
-        [
-            '*test_custom_class_deprecation.py:1:*"Class" objects in collectors of type "MyModule*',
-            "*1 passed, 1 warnings in*",
-        ]
-    )
-
-
 def test_funcarg_prefix_deprecation(testdir):
     testdir.makepyfile(
         """

View File

@@ -0,0 +1,14 @@
+from dataclasses import dataclass
+from dataclasses import field
+
+
+def test_dataclasses():
+    @dataclass
+    class SimpleDataObject(object):
+        field_a: int = field()
+        field_b: int = field()
+
+    left = SimpleDataObject(1, "b")
+    right = SimpleDataObject(1, "c")
+
+    assert left == right

View File

@@ -0,0 +1,14 @@
+from dataclasses import dataclass
+from dataclasses import field
+
+
+def test_dataclasses_with_attribute_comparison_off():
+    @dataclass
+    class SimpleDataObject(object):
+        field_a: int = field()
+        field_b: int = field(compare=False)
+
+    left = SimpleDataObject(1, "b")
+    right = SimpleDataObject(1, "c")
+
+    assert left == right

View File

@@ -0,0 +1,14 @@
+from dataclasses import dataclass
+from dataclasses import field
+
+
+def test_dataclasses_verbose():
+    @dataclass
+    class SimpleDataObject(object):
+        field_a: int = field()
+        field_b: int = field()
+
+    left = SimpleDataObject(1, "b")
+    right = SimpleDataObject(1, "c")
+
+    assert left == right

View File

@@ -0,0 +1,19 @@
+from dataclasses import dataclass
+from dataclasses import field
+
+
+def test_comparing_two_different_data_classes():
+    @dataclass
+    class SimpleDataObjectOne(object):
+        field_a: int = field()
+        field_b: int = field()
+
+    @dataclass
+    class SimpleDataObjectTwo(object):
+        field_a: int = field()
+        field_b: int = field()
+
+    left = SimpleDataObjectOne(1, "b")
+    right = SimpleDataObjectTwo(1, "c")
+
+    assert left != right
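These new example files exercise assertion-rewriting output for dataclasses (Python 3.7+). The behavior they rely on can be demonstrated directly, outside pytest: ``field(compare=False)`` excludes an attribute from the generated ``__eq__``:

```python
from dataclasses import dataclass, field


@dataclass
class WithCompare:
    a: int = field()
    b: str = field()  # participates in __eq__ (the default)


@dataclass
class WithoutCompare:
    a: int = field()
    b: str = field(compare=False)  # excluded from __eq__


# a compared field makes otherwise-equal instances differ...
assert WithCompare(1, "b") != WithCompare(1, "c")
# ...while compare=False ignores the field during comparison
assert WithoutCompare(1, "b") == WithoutCompare(1, "c")
```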

View File

@@ -3,4 +3,4 @@ import pytest

 @pytest.fixture
 def arg2(request):
-    pytest.raises(Exception, "request.getfixturevalue('arg1')")
+    pytest.raises(Exception, request.getfixturevalue, "arg1")

View File

@@ -7,7 +7,6 @@ import _pytest._code
 import pytest
 from _pytest.main import EXIT_NOTESTSCOLLECTED
 from _pytest.nodes import Collector
-from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG


 class TestModule(object):
@@ -244,217 +243,6 @@ class TestClass(object):
 @pytest.mark.filterwarnings(
     "ignore:usage of Generator.Function is deprecated, please use pytest.Function instead"
 )
-class TestGenerator(object):
-    def test_generative_functions(self, testdir):
-        modcol = testdir.getmodulecol(
-            """
-            def func1(arg, arg2):
-                assert arg == arg2
-            def test_gen():
-                yield func1, 17, 3*5
-                yield func1, 42, 6*7
-            """
-        )
-        colitems = modcol.collect()
-        assert len(colitems) == 1
-        gencol = colitems[0]
-        assert isinstance(gencol, pytest.Generator)
-        gencolitems = gencol.collect()
-        assert len(gencolitems) == 2
-        assert isinstance(gencolitems[0], pytest.Function)
-        assert isinstance(gencolitems[1], pytest.Function)
-        assert gencolitems[0].name == "[0]"
-        assert gencolitems[0].obj.__name__ == "func1"
-
-    def test_generative_methods(self, testdir):
-        modcol = testdir.getmodulecol(
-            """
-            def func1(arg, arg2):
-                assert arg == arg2
-            class TestGenMethods(object):
-                def test_gen(self):
-                    yield func1, 17, 3*5
-                    yield func1, 42, 6*7
-            """
-        )
-        gencol = modcol.collect()[0].collect()[0].collect()[0]
-        assert isinstance(gencol, pytest.Generator)
-        gencolitems = gencol.collect()
-        assert len(gencolitems) == 2
-        assert isinstance(gencolitems[0], pytest.Function)
-        assert isinstance(gencolitems[1], pytest.Function)
-        assert gencolitems[0].name == "[0]"
-        assert gencolitems[0].obj.__name__ == "func1"
-
-    def test_generative_functions_with_explicit_names(self, testdir):
-        modcol = testdir.getmodulecol(
-            """
-            def func1(arg, arg2):
-                assert arg == arg2
-            def test_gen():
-                yield "seventeen", func1, 17, 3*5
-                yield "fortytwo", func1, 42, 6*7
-            """
-        )
-        colitems = modcol.collect()
-        assert len(colitems) == 1
-        gencol = colitems[0]
-        assert isinstance(gencol, pytest.Generator)
-        gencolitems = gencol.collect()
-        assert len(gencolitems) == 2
-        assert isinstance(gencolitems[0], pytest.Function)
-        assert isinstance(gencolitems[1], pytest.Function)
-        assert gencolitems[0].name == "['seventeen']"
-        assert gencolitems[0].obj.__name__ == "func1"
-        assert gencolitems[1].name == "['fortytwo']"
-        assert gencolitems[1].obj.__name__ == "func1"
-
-    def test_generative_functions_unique_explicit_names(self, testdir):
-        # generative
-        modcol = testdir.getmodulecol(
-            """
-            def func(): pass
-            def test_gen():
-                yield "name", func
-                yield "name", func
-            """
-        )
-        colitems = modcol.collect()
-        assert len(colitems) == 1
-        gencol = colitems[0]
-        assert isinstance(gencol, pytest.Generator)
-        pytest.raises(ValueError, "gencol.collect()")
-
-    def test_generative_methods_with_explicit_names(self, testdir):
-        modcol = testdir.getmodulecol(
-            """
-            def func1(arg, arg2):
-                assert arg == arg2
-            class TestGenMethods(object):
-                def test_gen(self):
-                    yield "m1", func1, 17, 3*5
-                    yield "m2", func1, 42, 6*7
-            """
-        )
-        gencol = modcol.collect()[0].collect()[0].collect()[0]
-        assert isinstance(gencol, pytest.Generator)
-        gencolitems = gencol.collect()
-        assert len(gencolitems) == 2
-        assert isinstance(gencolitems[0], pytest.Function)
-        assert isinstance(gencolitems[1], pytest.Function)
-        assert gencolitems[0].name == "['m1']"
-        assert gencolitems[0].obj.__name__ == "func1"
-        assert gencolitems[1].name == "['m2']"
-        assert gencolitems[1].obj.__name__ == "func1"
-
-    def test_order_of_execution_generator_same_codeline(self, testdir, tmpdir):
-        o = testdir.makepyfile(
-            """
-            from __future__ import print_function
-            def test_generative_order_of_execution():
-                import py, pytest
-                test_list = []
-                expected_list = list(range(6))
-
-                def list_append(item):
-                    test_list.append(item)
-
-                def assert_order_of_execution():
-                    print('expected order', expected_list)
-                    print('but got       ', test_list)
-                    assert test_list == expected_list
-
-                for i in expected_list:
-                    yield list_append, i
-                yield assert_order_of_execution
-            """
-        )
-        reprec = testdir.inline_run(o, SHOW_PYTEST_WARNINGS_ARG)
-        passed, skipped, failed = reprec.countoutcomes()
-        assert passed == 7
-        assert not skipped and not failed
-
-    def test_order_of_execution_generator_different_codeline(self, testdir):
-        o = testdir.makepyfile(
-            """
-            from __future__ import print_function
-            def test_generative_tests_different_codeline():
-                import py, pytest
-                test_list = []
-                expected_list = list(range(3))
-
-                def list_append_2():
-                    test_list.append(2)
-
-                def list_append_1():
-                    test_list.append(1)
-
-                def list_append_0():
-                    test_list.append(0)
-
-                def assert_order_of_execution():
-                    print('expected order', expected_list)
-                    print('but got       ', test_list)
-                    assert test_list == expected_list
-
-                yield list_append_0
-                yield list_append_1
-                yield list_append_2
-                yield assert_order_of_execution
-            """
-        )
-        reprec = testdir.inline_run(o, SHOW_PYTEST_WARNINGS_ARG)
-        passed, skipped, failed = reprec.countoutcomes()
-        assert passed == 4
-        assert not skipped and not failed
-
-    def test_setupstate_is_preserved_134(self, testdir):
-        # yield-based tests are messy wrt to setupstate because
-        # during collection they already invoke setup functions
-        # and then again when they are run. For now, we want to make sure
-        # that the old 1.3.4 behaviour is preserved such that all
-        # yielded functions all share the same "self" instance that
-        # has been used during collection.
-        o = testdir.makepyfile(
-            """
-            setuplist = []
-            class TestClass(object):
-                def setup_method(self, func):
-                    #print "setup_method", self, func
-                    setuplist.append(self)
-                    self.init = 42
-
-                def teardown_method(self, func):
-                    self.init = None
-
-                def test_func1(self):
-                    pass
-
-                def test_func2(self):
-                    yield self.func2
-                    yield self.func2
-
-                def func2(self):
-                    assert self.init
-
-            def test_setuplist():
-                # once for test_func2 during collection
-                # once for test_func1 during test run
-                # once for test_func2 during test run
-                #print setuplist
-                assert len(setuplist) == 3, len(setuplist)
-                assert setuplist[0] == setuplist[2], setuplist
-                assert setuplist[1] != setuplist[2], setuplist
-            """
-        )
-        reprec = testdir.inline_run(o, "-v", SHOW_PYTEST_WARNINGS_ARG)
-        passed, skipped, failed = reprec.countoutcomes()
-        assert passed == 4
-        assert not skipped and not failed
-
-
 class TestFunction(object):
     @pytest.fixture
     def ignore_parametrized_marks_args(self):
@@ -489,26 +277,34 @@ class TestFunction(object):
             ]
         )

-    def test_function_equality(self, testdir, tmpdir):
+    @staticmethod
+    def make_function(testdir, **kwargs):
         from _pytest.fixtures import FixtureManager

         config = testdir.parseconfigure()
         session = testdir.Session(config)
         session._fixturemanager = FixtureManager(session)

+        return pytest.Function(config=config, parent=session, **kwargs)
+
+    def test_function_equality(self, testdir, tmpdir):
         def func1():
             pass

         def func2():
             pass

-        f1 = pytest.Function(
-            name="name", parent=session, config=config, args=(1,), callobj=func1
-        )
+        f1 = self.make_function(testdir, name="name", args=(1,), callobj=func1)
         assert f1 == f1
-        f2 = pytest.Function(name="name", config=config, callobj=func2, parent=session)
+        f2 = self.make_function(testdir, name="name", callobj=func2)
         assert f1 != f2

+    def test_repr_produces_actual_test_id(self, testdir):
+        f = self.make_function(
+            testdir, name=r"test[\xe5]", callobj=self.test_repr_produces_actual_test_id
+        )
+        assert repr(f) == r"<Function test[\xe5]>"
+
     def test_issue197_parametrize_emptyset(self, testdir):
         testdir.makepyfile(
             """
@@ -1095,7 +891,8 @@ def test_modulecol_roundtrip(testdir):
 class TestTracebackCutting(object):
     def test_skip_simple(self):
-        excinfo = pytest.raises(pytest.skip.Exception, 'pytest.skip("xxx")')
+        with pytest.raises(pytest.skip.Exception) as excinfo:
+            pytest.skip("xxx")
         assert excinfo.traceback[-1].frame.code.name == "skip"
         assert excinfo.traceback[-1].ishidden()
@@ -1262,39 +1059,6 @@ class TestReportInfo(object):
 @pytest.mark.filterwarnings(
     "ignore:usage of Generator.Function is deprecated, please use pytest.Function instead"
 )
-    def test_generator_reportinfo(self, testdir):
-        modcol = testdir.getmodulecol(
-            """
-            # lineno 0
-            def test_gen():
-                def check(x):
-                    assert x
-                yield check, 3
-            """
-        )
-        gencol = testdir.collect_by_name(modcol, "test_gen")
-        fspath, lineno, modpath = gencol.reportinfo()
-        assert fspath == modcol.fspath
-        assert lineno == 1
-        assert modpath == "test_gen"
-
-        genitem = gencol.collect()[0]
-        fspath, lineno, modpath = genitem.reportinfo()
-        assert fspath == modcol.fspath
-        assert lineno == 2
-        assert modpath == "test_gen[0]"
-
-        """
-            def test_func():
-                pass
-            def test_genfunc():
-                def check(x):
-                    pass
-                yield check, 3
-            class TestClass(object):
-                def test_method(self):
-                    pass
-        """
-
     def test_reportinfo_with_nasty_getattr(self, testdir):
         # https://github.com/pytest-dev/pytest/issues/1204
         modcol = testdir.getmodulecol(
@@ -1364,54 +1128,6 @@ def test_customized_python_discovery_functions(testdir):
     result.stdout.fnmatch_lines(["*1 passed*"])


-def test_collector_attributes(testdir):
-    testdir.makeconftest(
-        """
-        import pytest
-        def pytest_pycollect_makeitem(collector):
-            assert collector.Function == pytest.Function
-            assert collector.Class == pytest.Class
-            assert collector.Instance == pytest.Instance
-            assert collector.Module == pytest.Module
-        """
-    )
-    testdir.makepyfile(
-        """
-        def test_hello():
-            pass
-        """
-    )
-    result = testdir.runpytest(SHOW_PYTEST_WARNINGS_ARG)
-    result.stdout.fnmatch_lines(["*1 passed*"])
-
-
-def test_customize_through_attributes(testdir):
-    testdir.makeconftest(
-        """
-        import pytest
-        class MyFunction(pytest.Function):
-            pass
-        class MyInstance(pytest.Instance):
-            Function = MyFunction
-        class MyClass(pytest.Class):
-            Instance = MyInstance
-        def pytest_pycollect_makeitem(collector, name, obj):
-            if name.startswith("MyTestClass"):
-                return MyClass(name, parent=collector)
-        """
-    )
-    testdir.makepyfile(
-        """
-        class MyTestClass(object):
-            def test_hello(self):
-                pass
-        """
-    )
-    result = testdir.runpytest("--collect-only", SHOW_PYTEST_WARNINGS_ARG)
-    result.stdout.fnmatch_lines(["*MyClass*", "*MyFunction*test_hello*"])
-
-
 def test_unorderable_types(testdir):
     testdir.makepyfile(
         """

View File

@ -526,15 +526,8 @@ class TestRequestBasic(object):
try: try:
gc.collect() gc.collect()
leaked_types = sum(1 for _ in gc.garbage leaked = [x for _ in gc.garbage if isinstance(_, PseudoFixtureDef)]
if isinstance(_, PseudoFixtureDef)) assert leaked == []
# debug leaked types if the test fails
print(leaked_types)
gc.garbage[:] = []
assert leaked_types == 0
finally: finally:
gc.set_debug(original) gc.set_debug(original)
@ -542,7 +535,7 @@ class TestRequestBasic(object):
pass pass
""" """
) )
result = testdir.runpytest() result = testdir.runpytest_subprocess()
result.stdout.fnmatch_lines("* 1 passed in *") result.stdout.fnmatch_lines("* 1 passed in *")
     def test_getfixturevalue_recursive(self, testdir):
@@ -913,7 +906,8 @@ class TestRequestMarking(object):
         assert "skipif" not in item1.keywords
         req1.applymarker(pytest.mark.skipif)
         assert "skipif" in item1.keywords
-        pytest.raises(ValueError, "req1.applymarker(42)")
+        with pytest.raises(ValueError):
+            req1.applymarker(42)

     def test_accesskeywords(self, testdir):
         testdir.makepyfile(
@@ -1495,7 +1489,7 @@ class TestFixtureManagerParseFactories(object):
                     return "class"
             def test_hello(self, item, fm):
                 faclist = fm.getfixturedefs("hello", item.nodeid)
-                print (faclist)
+                print(faclist)
                 assert len(faclist) == 3
                 assert faclist[0].func(item._request) == "conftest"
@@ -1856,24 +1850,6 @@ class TestAutouseManagement(object):
         reprec = testdir.inline_run("-s")
         reprec.assertoutcome(passed=1)

-    def test_autouse_honored_for_yield(self, testdir):
-        testdir.makepyfile(
-            """
-            import pytest
-            @pytest.fixture(autouse=True)
-            def tst():
-                global x
-                x = 3
-            def test_gen():
-                def f(hello):
-                    assert x == abs(hello)
-                yield f, 3
-                yield f, -3
-        """
-        )
-        reprec = testdir.inline_run(SHOW_PYTEST_WARNINGS_ARG)
-        reprec.assertoutcome(passed=2)
-
     def test_funcarg_and_setup(self, testdir):
         testdir.makepyfile(
             """
@@ -2040,7 +2016,7 @@ class TestAutouseManagement(object):
                     values.append("step2-%d" % item)

                 def test_finish():
-                    print (values)
+                    print(values)
                     assert values == ["setup-1", "step1-1", "step2-1", "teardown-1",
                                  "setup-2", "step1-2", "step2-2", "teardown-2",]
         """
@@ -2880,7 +2856,7 @@ class TestFixtureMarker(object):
             def base(request, fix1):
                 def cleanup_base():
                     values.append("fin_base")
-                    print ("finalizing base")
+                    print("finalizing base")
                 request.addfinalizer(cleanup_base)

             def test_begin():
@@ -3480,13 +3456,13 @@ class TestContextManagerFixtureFuncs(object):
             from test_context import fixture
             @fixture
             def arg1():
-                print ("setup")
+                print("setup")
                 yield 1
-                print ("teardown")
+                print("teardown")
             def test_1(arg1):
-                print ("test1", arg1)
+                print("test1", arg1)
             def test_2(arg1):
-                print ("test2", arg1)
+                print("test2", arg1)
                 assert 0
         """
         )
@@ -3509,13 +3485,13 @@ class TestContextManagerFixtureFuncs(object):
             from test_context import fixture
             @fixture(scope="module")
             def arg1():
-                print ("setup")
+                print("setup")
                 yield 1
-                print ("teardown")
+                print("teardown")
             def test_1(arg1):
-                print ("test1", arg1)
+                print("test1", arg1)
             def test_2(arg1):
-                print ("test2", arg1)
+                print("test2", arg1)
         """
         )
         result = testdir.runpytest("-s")
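The recurring migration in the hunks above replaces the deprecated string form of `pytest.raises` with the callable and with-block forms. The two surviving styles can be sketched with a minimal stand-in (a simplified illustration, not pytest's actual implementation):

```python
from contextlib import contextmanager


@contextmanager
def raises(expected):
    """Simplified stand-in for pytest.raises used as a context manager."""
    try:
        yield
    except expected:
        return  # the expected exception was raised; swallow it
    raise AssertionError("DID NOT RAISE {!r}".format(expected))


def raises_call(expected, func, *args, **kwargs):
    """Simplified stand-in for the callable form: raises(Exc, func, *args)."""
    with raises(expected):
        func(*args, **kwargs)
```

Both forms keep the checked code as real Python visible to linters and assertion rewriting, unlike the removed string form, which had to be evaluated from source text.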
@@ -283,7 +283,7 @@ class TestReRunTests(object):
                 global count, req
                 assert request != req
                 req = request
-                print ("fix count %s" % count)
+                print("fix count %s" % count)
                 count += 1
             def test_fix(fix):
                 pass
@@ -70,11 +70,11 @@ class TestMetafunc(object):
             pass

         metafunc = self.Metafunc(func)
-        pytest.raises(ValueError, "metafunc.addcall(id=None)")
+        pytest.raises(ValueError, metafunc.addcall, id=None)
         metafunc.addcall(id=1)
-        pytest.raises(ValueError, "metafunc.addcall(id=1)")
-        pytest.raises(ValueError, "metafunc.addcall(id='1')")
+        pytest.raises(ValueError, metafunc.addcall, id=1)
+        pytest.raises(ValueError, metafunc.addcall, id="1")
         metafunc.addcall(id=2)
         assert len(metafunc._calls) == 2
         assert metafunc._calls[0].id == "1"
@@ -108,7 +108,7 @@ class TestMetafunc(object):
         metafunc.addcall(funcargs={"x": 2})
         metafunc.addcall(funcargs={"x": 3})
-        pytest.raises(pytest.fail.Exception, "metafunc.addcall({'xyz': 0})")
+        pytest.raises(pytest.fail.Exception, metafunc.addcall, {"xyz": 0})
         assert len(metafunc._calls) == 2
         assert metafunc._calls[0].funcargs == {"x": 2}
         assert metafunc._calls[1].funcargs == {"x": 3}
@@ -474,9 +474,9 @@ class TestMetafunc(object):
         result = testdir.runpytest("--collect-only", SHOW_PYTEST_WARNINGS_ARG)
         result.stdout.fnmatch_lines(
             [
-                "<Module 'test_parametrize_ids_exception.py'>",
-                " <Function 'test_foo[a]'>",
-                " <Function 'test_foo[b]'>",
+                "<Module test_parametrize_ids_exception.py>",
+                " <Function test_foo[a]>",
+                " <Function test_foo[b]>",
                 "*test_parametrize_ids_exception.py:6: *parameter arg at position 0*",
                 "*test_parametrize_ids_exception.py:6: *parameter arg at position 1*",
             ]
@@ -4,21 +4,32 @@ import six

 import pytest
 from _pytest.outcomes import Failed
+from _pytest.warning_types import PytestDeprecationWarning


 class TestRaises(object):
     def test_raises(self):
         source = "int('qwe')"
-        excinfo = pytest.raises(ValueError, source)
+        with pytest.warns(PytestDeprecationWarning):
+            excinfo = pytest.raises(ValueError, source)
         code = excinfo.traceback[-1].frame.code
         s = str(code.fullsource)
         assert s == source

     def test_raises_exec(self):
-        pytest.raises(ValueError, "a,x = []")
+        with pytest.warns(PytestDeprecationWarning) as warninfo:
+            pytest.raises(ValueError, "a,x = []")
+        assert warninfo[0].filename == __file__
+
+    def test_raises_exec_correct_filename(self):
+        with pytest.warns(PytestDeprecationWarning):
+            excinfo = pytest.raises(ValueError, 'int("s")')
+        assert __file__ in excinfo.traceback[-1].path

     def test_raises_syntax_error(self):
-        pytest.raises(SyntaxError, "qwe qwe qwe")
+        with pytest.warns(PytestDeprecationWarning) as warninfo:
+            pytest.raises(SyntaxError, "qwe qwe qwe")
+        assert warninfo[0].filename == __file__

     def test_raises_function(self):
         pytest.raises(ValueError, int, "hello")
@@ -33,6 +44,23 @@ class TestRaises(object):
         except pytest.raises.Exception:
             pass

+    def test_raises_repr_inflight(self):
+        """Ensure repr() on an exception info inside a pytest.raises with block works (#4386)"""
+
+        class E(Exception):
+            pass
+
+        with pytest.raises(E) as excinfo:
+            # this test prints the inflight uninitialized object
+            # using repr and str as well as pprint to demonstrate
+            # it works
+            print(str(excinfo))
+            print(repr(excinfo))
+            import pprint
+
+            pprint.pprint(excinfo)
+            raise E()
+
     def test_raises_as_contextmanager(self, testdir):
         testdir.makepyfile(
             """
@@ -43,7 +71,7 @@ class TestRaises(object):
             with pytest.raises(ZeroDivisionError) as excinfo:
                 assert isinstance(excinfo, _pytest._code.ExceptionInfo)
                 1/0
-                print (excinfo)
+                print(excinfo)
             assert excinfo.type == ZeroDivisionError
             assert isinstance(excinfo.value, ZeroDivisionError)
@@ -6,6 +6,7 @@ from __future__ import print_function
 import sys
 import textwrap

+import attr
 import py
 import six
@@ -548,6 +549,115 @@ class TestAssert_reprcompare(object):
         assert msg

+class TestAssert_reprcompare_dataclass(object):
+    @pytest.mark.skipif(sys.version_info < (3, 7), reason="Dataclasses in Python3.7+")
+    def test_dataclasses(self, testdir):
+        p = testdir.copy_example("dataclasses/test_compare_dataclasses.py")
+        result = testdir.runpytest(p)
+        result.assert_outcomes(failed=1, passed=0)
+        result.stdout.fnmatch_lines(
+            [
+                "*Omitting 1 identical items, use -vv to show*",
+                "*Differing attributes:*",
+                "*field_b: 'b' != 'c'*",
+            ]
+        )
+
+    @pytest.mark.skipif(sys.version_info < (3, 7), reason="Dataclasses in Python3.7+")
+    def test_dataclasses_verbose(self, testdir):
+        p = testdir.copy_example("dataclasses/test_compare_dataclasses_verbose.py")
+        result = testdir.runpytest(p, "-vv")
+        result.assert_outcomes(failed=1, passed=0)
+        result.stdout.fnmatch_lines(
+            [
+                "*Matching attributes:*",
+                "*['field_a']*",
+                "*Differing attributes:*",
+                "*field_b: 'b' != 'c'*",
+            ]
+        )
+
+    @pytest.mark.skipif(sys.version_info < (3, 7), reason="Dataclasses in Python3.7+")
+    def test_dataclasses_with_attribute_comparison_off(self, testdir):
+        p = testdir.copy_example(
+            "dataclasses/test_compare_dataclasses_field_comparison_off.py"
+        )
+        result = testdir.runpytest(p, "-vv")
+        result.assert_outcomes(failed=0, passed=1)
+
+    @pytest.mark.skipif(sys.version_info < (3, 7), reason="Dataclasses in Python3.7+")
+    def test_comparing_two_different_data_classes(self, testdir):
+        p = testdir.copy_example(
+            "dataclasses/test_compare_two_different_dataclasses.py"
+        )
+        result = testdir.runpytest(p, "-vv")
+        result.assert_outcomes(failed=0, passed=1)
+
+
+class TestAssert_reprcompare_attrsclass(object):
+    def test_attrs(self):
+        @attr.s
+        class SimpleDataObject(object):
+            field_a = attr.ib()
+            field_b = attr.ib()
+
+        left = SimpleDataObject(1, "b")
+        right = SimpleDataObject(1, "c")
+
+        lines = callequal(left, right)
+        assert lines[1].startswith("Omitting 1 identical item")
+        assert "Matching attributes" not in lines
+        for line in lines[1:]:
+            assert "field_a" not in line
+
+    def test_attrs_verbose(self):
+        @attr.s
+        class SimpleDataObject(object):
+            field_a = attr.ib()
+            field_b = attr.ib()
+
+        left = SimpleDataObject(1, "b")
+        right = SimpleDataObject(1, "c")
+
+        lines = callequal(left, right, verbose=2)
+        assert lines[1].startswith("Matching attributes:")
+        assert "Omitting" not in lines[1]
+        assert lines[2] == "['field_a']"
+
+    def test_attrs_with_attribute_comparison_off(self):
+        @attr.s
+        class SimpleDataObject(object):
+            field_a = attr.ib()
+            field_b = attr.ib(cmp=False)
+
+        left = SimpleDataObject(1, "b")
+        right = SimpleDataObject(1, "b")
+
+        lines = callequal(left, right, verbose=2)
+        assert lines[1].startswith("Matching attributes:")
+        assert "Omitting" not in lines[1]
+        assert lines[2] == "['field_a']"
+        for line in lines[2:]:
+            assert "field_b" not in line
+
+    def test_comparing_two_different_attrs_classes(self):
+        @attr.s
+        class SimpleDataObjectOne(object):
+            field_a = attr.ib()
+            field_b = attr.ib()
+
+        @attr.s
+        class SimpleDataObjectTwo(object):
+            field_a = attr.ib()
+            field_b = attr.ib()
+
+        left = SimpleDataObjectOne(1, "b")
+        right = SimpleDataObjectTwo(1, "c")
+
+        lines = callequal(left, right)
+        assert lines is None
+
+
 class TestFormatExplanation(object):
     def test_special_chars_full(self, testdir):
         # Issue 453, for the bug this would raise IndexError
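The new reprcompare tests above assert that attrs- and dataclass-style objects are diffed attribute by attribute, reporting matching and differing fields. That diffing step can be sketched as follows (names are illustrative stand-ins, not pytest's internal API):

```python
class SimpleDataObject(object):
    """Plain stand-in for an attrs/dataclass-style field container."""

    def __init__(self, field_a, field_b):
        self.field_a = field_a
        self.field_b = field_b


def diff_attrs(left, right, fields):
    """Split field names into matching and differing, like the -vv output."""
    matching = [f for f in fields if getattr(left, f) == getattr(right, f)]
    differing = {
        f: (getattr(left, f), getattr(right, f))
        for f in fields
        if getattr(left, f) != getattr(right, f)
    }
    return matching, differing
```

In non-verbose mode the matching fields are omitted from the report ("Omitting 1 identical items"); with `-vv` they are listed explicitly.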
@@ -899,5 +899,29 @@ def test_gitignore(testdir):
     config = testdir.parseconfig()
     cache = Cache.for_config(config)
     cache.set("foo", "bar")
-    msg = "# created by pytest automatically, do not change\n*"
-    assert cache._cachedir.joinpath(".gitignore").read_text(encoding="UTF-8") == msg
+    msg = "# Created by pytest automatically.\n*"
+    gitignore_path = cache._cachedir.joinpath(".gitignore")
+    assert gitignore_path.read_text(encoding="UTF-8") == msg
+    # Does not overwrite existing/custom one.
+    gitignore_path.write_text(u"custom")
+    cache.set("something", "else")
+    assert gitignore_path.read_text(encoding="UTF-8") == "custom"
+
+
+def test_does_not_create_boilerplate_in_existing_dirs(testdir):
+    from _pytest.cacheprovider import Cache
+
+    testdir.makeini(
+        """
+        [pytest]
+        cache_dir = .
+        """
+    )
+    config = testdir.parseconfig()
+    cache = Cache.for_config(config)
+    cache.set("foo", "bar")
+
+    assert os.path.isdir("v")  # cache contents
+    assert not os.path.exists(".gitignore")
+    assert not os.path.exists("README.md")
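The cacheprovider hunk above tests the #4393 behavior: boilerplate files are only written into cache directories pytest itself creates, and an existing `.gitignore` is never overwritten. A sketch of that rule, under assumed helper names (`cache_set` and `ensure_supporting_files` are illustrative, not pytest's API):

```python
import os


def ensure_supporting_files(cachedir):
    """Write .gitignore boilerplate, never clobbering an existing file."""
    gitignore = os.path.join(cachedir, ".gitignore")
    if not os.path.exists(gitignore):
        with open(gitignore, "w") as f:
            f.write("# Created by pytest automatically.\n*\n")


def cache_set(cachedir, key, value):
    """Store a value; add boilerplate only when creating the directory."""
    if not os.path.isdir(cachedir):
        os.makedirs(cachedir)
        ensure_supporting_files(cachedir)  # fresh directory: safe to add files
    with open(os.path.join(cachedir, key), "w") as f:
        f.write(repr(value))
```

Guarding on directory creation means pointing `cache_dir` at an existing directory (such as the project root) never litters it with generated files.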
@@ -87,7 +87,7 @@ class TestCaptureManager(object):
         try:
             capman = CaptureManager("fd")
             capman.start_global_capturing()
-            pytest.raises(AssertionError, "capman.start_global_capturing()")
+            pytest.raises(AssertionError, capman.start_global_capturing)
             capman.stop_global_capturing()
         finally:
             capouter.stop_capturing()
@@ -107,8 +107,8 @@ def test_capturing_unicode(testdir, method):
         # taken from issue 227 from nosetests
         def test_unicode():
             import sys
-            print (sys.stdout)
-            print (%s)
+            print(sys.stdout)
+            print(%s)
         """
         % obj
     )
@@ -121,7 +121,7 @@ def test_capturing_bytes_in_utf8_encoding(testdir, method):
     testdir.makepyfile(
         """
         def test_unicode():
-            print ('b\\u00f6y')
+            print('b\\u00f6y')
         """
     )
     result = testdir.runpytest("--capture=%s" % method)
@@ -131,7 +131,7 @@ def test_capturing_bytes_in_utf8_encoding(testdir, method):
 def test_collect_capturing(testdir):
     p = testdir.makepyfile(
         """
-        print ("collect %s failure" % 13)
+        print("collect %s failure" % 13)
         import xyz42123
         """
     )
@@ -144,14 +144,14 @@ class TestPerTestCapturing(object):
         p = testdir.makepyfile(
             """
             def setup_module(mod):
-                print ("setup module")
+                print("setup module")
             def setup_function(function):
-                print ("setup " + function.__name__)
+                print("setup " + function.__name__)
             def test_func1():
-                print ("in func1")
+                print("in func1")
                 assert 0
             def test_func2():
-                print ("in func2")
+                print("in func2")
                 assert 0
             """
         )
@@ -172,14 +172,14 @@ class TestPerTestCapturing(object):
             """
             import sys
             def setup_module(func):
-                print ("module-setup")
+                print("module-setup")
             def setup_function(func):
-                print ("function-setup")
+                print("function-setup")
             def test_func():
-                print ("in function")
+                print("in function")
                 assert 0
             def teardown_function(func):
-                print ("in teardown")
+                print("in teardown")
             """
         )
         result = testdir.runpytest(p)
@@ -198,9 +198,9 @@ class TestPerTestCapturing(object):
         p = testdir.makepyfile(
             """
             def test_func1():
-                print ("in func1")
+                print("in func1")
             def test_func2():
-                print ("in func2")
+                print("in func2")
                 assert 0
             """
         )
@@ -213,12 +213,12 @@ class TestPerTestCapturing(object):
         p = testdir.makepyfile(
             """
             def setup_function(function):
-                print ("setup func1")
+                print("setup func1")
             def teardown_function(function):
-                print ("teardown func1")
+                print("teardown func1")
                 assert 0
             def test_func1():
-                print ("in func1")
+                print("in func1")
                 pass
             """
         )
@@ -238,7 +238,7 @@ class TestPerTestCapturing(object):
         p = testdir.makepyfile(
             """
             def teardown_module(mod):
-                print ("teardown module")
+                print("teardown module")
                 assert 0
             def test_func():
                 pass
@@ -259,10 +259,10 @@ class TestPerTestCapturing(object):
             """\
             import sys
             def test_capturing():
-                print (42)
+                print(42)
                 sys.stderr.write(str(23))
             def test_capturing_error():
-                print (1)
+                print(1)
                 sys.stderr.write(str(2))
                 raise ValueError
             """
@@ -392,7 +392,7 @@ class TestCaptureFixture(object):
         reprec = testdir.inline_runsource(
             """\
             def test_hello(capsys):
-                print (42)
+                print(42)
                 out, err = capsys.readouterr()
                 assert out.startswith("42")
             """,
@@ -460,7 +460,7 @@ class TestCaptureFixture(object):
         p = testdir.makepyfile(
             """\
             def test_hello(cap{}):
-                print ("xxx42xxx")
+                print("xxx42xxx")
                 assert 0
             """.format(
                 method
@@ -702,7 +702,7 @@ def test_capture_conftest_runtest_setup(testdir):
     testdir.makeconftest(
         """
         def pytest_runtest_setup():
-            print ("hello19")
+            print("hello19")
         """
     )
     testdir.makepyfile("def test_func(): pass")
@@ -737,7 +737,7 @@ def test_capture_early_option_parsing(testdir):
     testdir.makeconftest(
         """
         def pytest_runtest_setup():
-            print ("hello19")
+            print("hello19")
         """
     )
     testdir.makepyfile("def test_func(): pass")
@@ -798,10 +798,10 @@ class TestCaptureIO(object):
         f = capture.CaptureIO()
         if sys.version_info >= (3, 0):
             f.write("\u00f6")
-            pytest.raises(TypeError, "f.write(bytes('hello', 'UTF-8'))")
+            pytest.raises(TypeError, f.write, b"hello")
         else:
-            f.write(text_type("\u00f6", "UTF-8"))
-            f.write("hello")  # bytes
+            f.write(u"\u00f6")
+            f.write(b"hello")
         s = f.getvalue()
         f.close()
         assert isinstance(s, text_type)
@@ -1149,7 +1149,7 @@ class TestStdCapture(object):
         print("XXX which indicates an error in the underlying capturing")
         print("XXX mechanisms")
         with self.getcapture():
-            pytest.raises(IOError, "sys.stdin.read()")
+            pytest.raises(IOError, sys.stdin.read)


 class TestStdCaptureFD(TestStdCapture):
@@ -1302,14 +1302,14 @@ def test_capturing_and_logging_fundamentals(testdir, method):
             logging.warn("hello1")

             outerr = cap.readouterr()
-            print ("suspend, captured %%s" %%(outerr,))
+            print("suspend, captured %%s" %%(outerr,))
             logging.warn("hello2")

             cap.pop_outerr_to_orig()
             logging.warn("hello3")

             outerr = cap.readouterr()
-            print ("suspend2, captured %%s" %% (outerr,))
+            print("suspend2, captured %%s" %% (outerr,))
         """
         % (method,)
     )
@@ -21,20 +21,6 @@ class TestCollector(object):
         assert not issubclass(Collector, Item)
         assert not issubclass(Item, Collector)

-    def test_compat_attributes(self, testdir, recwarn):
-        modcol = testdir.getmodulecol(
-            """
-            def test_pass(): pass
-            def test_fail(): assert 0
-        """
-        )
-        recwarn.clear()
-        assert modcol.Module == pytest.Module
-        assert modcol.Class == pytest.Class
-        assert modcol.Item == pytest.Item
-        assert modcol.File == pytest.File
-        assert modcol.Function == pytest.Function
-
     def test_check_equality(self, testdir):
         modcol = testdir.getmodulecol(
             """
@@ -950,10 +936,10 @@ def test_collect_init_tests(testdir):
         [
             "collected 2 items",
             "<Package *",
-            " <Module '__init__.py'>",
-            " <Function 'test_init'>",
-            " <Module 'test_foo.py'>",
-            " <Function 'test_foo'>",
+            " <Module __init__.py>",
+            " <Function test_init>",
+            " <Module test_foo.py>",
+            " <Function test_foo>",
         ]
     )
     result = testdir.runpytest("./tests", "--collect-only")
@@ -961,10 +947,10 @@ def test_collect_init_tests(testdir):
         [
             "collected 2 items",
             "<Package *",
-            " <Module '__init__.py'>",
-            " <Function 'test_init'>",
-            " <Module 'test_foo.py'>",
-            " <Function 'test_foo'>",
+            " <Module __init__.py>",
+            " <Function test_init>",
+            " <Module test_foo.py>",
+            " <Function test_foo>",
         ]
     )
     # Ignores duplicates with "." and pkginit (#4310).
@@ -972,11 +958,11 @@ def test_collect_init_tests(testdir):
     result.stdout.fnmatch_lines(
         [
             "collected 2 items",
-            "<Package */tests'>",
-            " <Module '__init__.py'>",
-            " <Function 'test_init'>",
-            " <Module 'test_foo.py'>",
-            " <Function 'test_foo'>",
+            "<Package */tests>",
+            " <Module __init__.py>",
+            " <Function test_init>",
+            " <Module test_foo.py>",
+            " <Function test_foo>",
         ]
     )
     # Same as before, but different order.
@@ -984,21 +970,21 @@ def test_collect_init_tests(testdir):
     result.stdout.fnmatch_lines(
         [
             "collected 2 items",
-            "<Package */tests'>",
-            " <Module '__init__.py'>",
-            " <Function 'test_init'>",
-            " <Module 'test_foo.py'>",
-            " <Function 'test_foo'>",
+            "<Package */tests>",
+            " <Module __init__.py>",
+            " <Function test_init>",
+            " <Module test_foo.py>",
+            " <Function test_foo>",
         ]
     )
     result = testdir.runpytest("./tests/test_foo.py", "--collect-only")
     result.stdout.fnmatch_lines(
-        ["<Package */tests'>", " <Module 'test_foo.py'>", " <Function 'test_foo'>"]
+        ["<Package */tests>", " <Module test_foo.py>", " <Function test_foo>"]
     )
     assert "test_init" not in result.stdout.str()
     result = testdir.runpytest("./tests/__init__.py", "--collect-only")
     result.stdout.fnmatch_lines(
-        ["<Package */tests'>", " <Module '__init__.py'>", " <Function 'test_init'>"]
+        ["<Package */tests>", " <Module __init__.py>", " <Function test_init>"]
     )
     assert "test_foo" not in result.stdout.str()
@@ -194,7 +194,7 @@ class TestConfigAPI(object):
         config = testdir.parseconfig("--hello=this")
         for x in ("hello", "--hello", "-X"):
             assert config.getoption(x) == "this"
-        pytest.raises(ValueError, "config.getoption('qweqwe')")
+        pytest.raises(ValueError, config.getoption, "qweqwe")

     @pytest.mark.skipif("sys.version_info[0] < 3")
     def test_config_getoption_unicode(self, testdir):
@@ -211,7 +211,7 @@ class TestConfigAPI(object):
     def test_config_getvalueorskip(self, testdir):
         config = testdir.parseconfig()
-        pytest.raises(pytest.skip.Exception, "config.getvalueorskip('hello')")
+        pytest.raises(pytest.skip.Exception, config.getvalueorskip, "hello")
         verbose = config.getvalueorskip("verbose")
         assert verbose == config.option.verbose
@@ -723,7 +723,8 @@ def test_config_in_subdirectory_colon_command_line_issue2148(testdir):
 def test_notify_exception(testdir, capfd):
     config = testdir.parseconfig()
-    excinfo = pytest.raises(ValueError, "raise ValueError(1)")
+    with pytest.raises(ValueError) as excinfo:
+        raise ValueError(1)
     config.notify_exception(excinfo)
     out, err = capfd.readouterr()
     assert "ValueError" in err
@@ -371,7 +371,7 @@ class TestPython(object):
             import sys

             def test_fail():
-                print ("hello-stdout")
+                print("hello-stdout")
                 sys.stderr.write("hello-stderr\\n")
                 logging.info('info msg')
                 logging.warning('warning msg')
@@ -589,7 +589,7 @@ class TestPython(object):
             """
             # coding: latin1
             def test_hello():
-                print (%r)
+                print(%r)
                 assert 0
             """
             % value
@@ -190,7 +190,7 @@ def test_ini_markers(testdir):
         """
         def test_markers(pytestconfig):
             markers = pytestconfig.getini("markers")
-            print (markers)
+            print(markers)
             assert len(markers) >= 2
             assert markers[0].startswith("a1:")
             assert markers[1].startswith("a2:")
@@ -27,7 +27,7 @@ def test_setattr():
         x = 1

     monkeypatch = MonkeyPatch()
-    pytest.raises(AttributeError, "monkeypatch.setattr(A, 'notexists', 2)")
+    pytest.raises(AttributeError, monkeypatch.setattr, A, "notexists", 2)
     monkeypatch.setattr(A, "y", 2, raising=False)
     assert A.y == 2
     monkeypatch.undo()
@@ -99,7 +99,7 @@ def test_delattr():
     monkeypatch = MonkeyPatch()
     monkeypatch.delattr(A, "x")
-    pytest.raises(AttributeError, "monkeypatch.delattr(A, 'y')")
+    pytest.raises(AttributeError, monkeypatch.delattr, A, "y")
     monkeypatch.delattr(A, "y", raising=False)
     monkeypatch.setattr(A, "x", 5, raising=False)
     assert A.x == 5
@@ -156,7 +156,7 @@ def test_delitem():
     monkeypatch.delitem(d, "x")
     assert "x" not in d
     monkeypatch.delitem(d, "y", raising=False)
-    pytest.raises(KeyError, "monkeypatch.delitem(d, 'y')")
+    pytest.raises(KeyError, monkeypatch.delitem, d, "y")
     assert not d
     monkeypatch.setitem(d, "y", 1700)
     assert d["y"] == 1700
@@ -182,7 +182,7 @@ def test_delenv():
     name = "xyz1234"
     assert name not in os.environ
     monkeypatch = MonkeyPatch()
-    pytest.raises(KeyError, "monkeypatch.delenv(%r, raising=True)" % name)
+    pytest.raises(KeyError, monkeypatch.delenv, name, raising=True)
     monkeypatch.delenv(name, raising=False)
     monkeypatch.undo()
     os.environ[name] = "1"
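The monkeypatch hunks above convert string-based raise checks to the callable form of `pytest.raises`; the `raising=True/False` contract they exercise can be sketched like this (a simplified illustration, not `_pytest.monkeypatch` itself):

```python
class Target(object):
    """Illustrative target class for attribute deletion."""
    x = 1


def delattr_maybe(obj, name, raising=True):
    """Delete an attribute; with raising=False a missing name is a no-op."""
    if not hasattr(obj, name):
        if raising:
            raise AttributeError(name)
        return
    delattr(obj, name)
```

With the callable form, the check becomes `pytest.raises(AttributeError, delattr_maybe, Target, "y")` instead of evaluating a source string.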
@@ -3,7 +3,6 @@ from __future__ import division
 from __future__ import print_function

 import pytest
-from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG


 def setup_module(mod):
@@ -71,11 +70,11 @@ def test_nose_setup_func(testdir):
         @with_setup(my_setup, my_teardown)
         def test_hello():
-            print (values)
+            print(values)
             assert values == [1]

         def test_world():
-            print (values)
+            print(values)
             assert values == [1,2]
         """
@@ -95,11 +94,11 @@ def test_nose_setup_func_failure(testdir):
         @with_setup(my_setup, my_teardown)
         def test_hello():
-            print (values)
+            print(values)
             assert values == [1]

         def test_world():
-            print (values)
+            print(values)
             assert values == [1,2]
         """
@@ -147,11 +146,11 @@ def test_nose_setup_partial(testdir):
         my_teardown_partial = partial(my_teardown, 2)

         def test_hello():
-            print (values)
+            print(values)
             assert values == [1]

         def test_world():
-            print (values)
+            print(values)
             assert values == [1,2]

         test_hello.setup = my_setup_partial
@@ -162,73 +161,6 @@ def test_nose_setup_partial(testdir):
     result.stdout.fnmatch_lines(["*2 passed*"])

-def test_nose_test_generator_fixtures(testdir):
-    p = testdir.makepyfile(
-        """
-        # taken from nose-0.11.1 unit_tests/test_generator_fixtures.py
-        from nose.tools import eq_
-        called = []
-        def outer_setup():
-            called.append('outer_setup')
-        def outer_teardown():
-            called.append('outer_teardown')
-        def inner_setup():
-            called.append('inner_setup')
-        def inner_teardown():
-            called.append('inner_teardown')
-        def test_gen():
-            called[:] = []
-            for i in range(0, 5):
-                yield check, i
-        def check(i):
-            expect = ['outer_setup']
-            for x in range(0, i):
-                expect.append('inner_setup')
-                expect.append('inner_teardown')
-            expect.append('inner_setup')
-            eq_(called, expect)
-        test_gen.setup = outer_setup
-        test_gen.teardown = outer_teardown
-        check.setup = inner_setup
-        check.teardown = inner_teardown
-        class TestClass(object):
-            def setup(self):
-                print ("setup called in %s" % self)
-                self.called = ['setup']
-            def teardown(self):
-                print ("teardown called in %s" % self)
-                eq_(self.called, ['setup'])
-                self.called.append('teardown')
-            def test(self):
-                print ("test called in %s" % self)
-                for i in range(0, 5):
-                    yield self.check, i
-            def check(self, i):
-                print ("check called in %s" % self)
-                expect = ['setup']
-                #for x in range(0, i):
-                #    expect.append('setup')
-                #    expect.append('teardown')
-                #expect.append('setup')
-                eq_(self.called, expect)
-        """
-    )
-    result = testdir.runpytest(p, "-p", "nose", SHOW_PYTEST_WARNINGS_ARG)
-    result.stdout.fnmatch_lines(["*10 passed*"])
-
 def test_module_level_setup(testdir):
     testdir.makepyfile(
         """
@@ -100,12 +100,8 @@ class TestParser(object):
     def test_group_shortopt_lowercase(self, parser):
         group = parser.getgroup("hello")
-        pytest.raises(
-            ValueError,
-            """
-            group.addoption("-x", action="store_true")
-        """,
-        )
+        with pytest.raises(ValueError):
+            group.addoption("-x", action="store_true")
         assert len(group.options) == 0
         group._addoption("-x", action="store_true")
         assert len(group.options) == 1
@@ -8,7 +8,6 @@ import sys

 import _pytest._code
 import pytest
-from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG

 try:
     breakpoint
@@ -809,27 +808,6 @@ class TestTraceOption:
         assert "reading from stdin while output" not in rest
         TestPDB.flush(child)

-    def test_trace_against_yield_test(self, testdir):
-        p1 = testdir.makepyfile(
-            """
-            def is_equal(a, b):
-                assert a == b
-
-            def test_1():
-                yield is_equal, 1, 1
-            """
-        )
-        child = testdir.spawn_pytest(
-            "{} --trace {}".format(SHOW_PYTEST_WARNINGS_ARG, str(p1))
-        )
-        child.expect("is_equal")
-        child.expect("Pdb")
-        child.sendeof()
-        rest = child.read().decode("utf8")
-        assert "1 passed" in rest
-        assert "reading from stdin while output" not in rest
-        TestPDB.flush(child)
-
 def test_trace_after_runpytest(testdir):
     """Test that debugging's pytest_configure is re-entrant."""
@@ -196,7 +196,7 @@ class TestPytestPluginManager(object):
         assert pm.is_registered(mod)
         values = pm.get_plugins()
         assert mod in values
-        pytest.raises(ValueError, "pm.register(mod)")
+        pytest.raises(ValueError, pm.register, mod)
         pytest.raises(ValueError, lambda: pm.register(mod))
         # assert not pm.is_registered(mod2)
         assert pm.get_plugins() == values
@@ -284,8 +284,8 @@ class TestPytestPluginManager(object):
         result.stdout.fnmatch_lines(["*1 passed*"])

     def test_import_plugin_importname(self, testdir, pytestpm):
-        pytest.raises(ImportError, 'pytestpm.import_plugin("qweqwex.y")')
-        pytest.raises(ImportError, 'pytestpm.import_plugin("pytest_qweqwx.y")')
+        pytest.raises(ImportError, pytestpm.import_plugin, "qweqwex.y")
+        pytest.raises(ImportError, pytestpm.import_plugin, "pytest_qweqwx.y")

         testdir.syspathinsert()
         pluginname = "pytest_hello"
@@ -301,8 +301,8 @@ class TestPytestPluginManager(object):
         assert plugin2 is plugin1

     def test_import_plugin_dotted_name(self, testdir, pytestpm):
-        pytest.raises(ImportError, 'pytestpm.import_plugin("qweqwex.y")')
-        pytest.raises(ImportError, 'pytestpm.import_plugin("pytest_qweqwex.y")')
+        pytest.raises(ImportError, pytestpm.import_plugin, "qweqwex.y")
+        pytest.raises(ImportError, pytestpm.import_plugin, "pytest_qweqwex.y")

         testdir.syspathinsert()
         testdir.mkpydir("pkg").join("plug.py").write("x=3")
@@ -71,7 +71,7 @@ def test_make_hook_recorder(testdir):
     recorder.unregister()
     recorder.clear()
     recorder.hook.pytest_runtest_logreport(report=rep)
-    pytest.raises(ValueError, "recorder.getfailures()")
+    pytest.raises(ValueError, recorder.getfailures)


 def test_parseconfig(testdir):
@@ -174,7 +174,7 @@ def test_hookrecorder_basic(holder):
     call = rec.popcall("pytest_xyz")
     assert call.arg == 123
     assert call._name == "pytest_xyz"
-    pytest.raises(pytest.fail.Exception, "rec.popcall('abc')")
+    pytest.raises(pytest.fail.Exception, rec.popcall, "abc")
     pm.hook.pytest_xyz_noarg()
     call = rec.popcall("pytest_xyz_noarg")
     assert call._name == "pytest_xyz_noarg"

View File

@@ -7,6 +7,7 @@ import warnings

 import pytest
 from _pytest.recwarn import WarningsRecorder
+from _pytest.warning_types import PytestDeprecationWarning


 def test_recwarn_stacklevel(recwarn):

@@ -44,7 +45,7 @@ class TestWarningsRecorderChecker(object):
         rec.clear()
         assert len(rec.list) == 0
         assert values is rec.list
-        pytest.raises(AssertionError, "rec.pop()")
+        pytest.raises(AssertionError, rec.pop)

     @pytest.mark.issue(4243)
     def test_warn_stacklevel(self):

@@ -214,9 +215,17 @@ class TestWarns(object):
         source1 = "warnings.warn('w1', RuntimeWarning)"
         source2 = "warnings.warn('w2', RuntimeWarning)"
         source3 = "warnings.warn('w3', RuntimeWarning)"
-        pytest.warns(RuntimeWarning, source1)
-        pytest.raises(pytest.fail.Exception, lambda: pytest.warns(UserWarning, source2))
-        pytest.warns(RuntimeWarning, source3)
+        with pytest.warns(PytestDeprecationWarning) as warninfo:  # yo dawg
+            pytest.warns(RuntimeWarning, source1)
+            pytest.raises(
+                pytest.fail.Exception, lambda: pytest.warns(UserWarning, source2)
+            )
+            pytest.warns(RuntimeWarning, source3)
+        assert len(warninfo) == 3
+        for w in warninfo:
+            assert w.filename == __file__
+            msg, = w.message.args
+            assert msg.startswith("warns(..., 'code(as_a_string)') is deprecated")

     def test_function(self):
         pytest.warns(
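The rewritten hunk wraps the deprecated string-argument calls in `pytest.warns(PytestDeprecationWarning)` so the test asserts the deprecation itself. For ordinary use, the non-deprecated context-manager form looks like this — `emit` is a hypothetical function under test:

```python
import warnings

import pytest


def emit():
    # Hypothetical code under test that issues a warning.
    warnings.warn("w1", RuntimeWarning)


# Context-manager form: records matching warnings for later inspection.
with pytest.warns(RuntimeWarning) as record:
    emit()

assert len(record) == 1
assert record[0].message.args[0] == "w1"
```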

View File

@@ -151,7 +151,7 @@ class TestWithFunctionIntegration(object):
         try:
             raise ValueError
         except ValueError:
-            excinfo = _pytest._code.ExceptionInfo()
+            excinfo = _pytest._code.ExceptionInfo.from_current()
         reslog = ResultLog(None, py.io.TextIO())
         reslog.pytest_internalerror(excinfo.getrepr(style=style))
         entry = reslog.logfile.getvalue()
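The change above replaces the zero-argument `ExceptionInfo()` constructor with the explicit `from_current()` classmethod, which captures the exception currently being handled. A minimal sketch of the new pattern, assuming a pytest version where `from_current()` exists (it is introduced by this branch; note `_pytest._code` is internal API):

```python
import _pytest._code

try:
    raise ValueError("boom")
except ValueError:
    # Explicitly capture the in-flight exception instead of relying on
    # the implicit sys.exc_info() lookup of the old bare constructor.
    excinfo = _pytest._code.ExceptionInfo.from_current()

assert excinfo.type is ValueError
assert "boom" in excinfo.exconly()
```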

View File

@@ -487,13 +487,13 @@ def test_report_extra_parameters(reporttype):


 def test_callinfo():
-    ci = runner.CallInfo(lambda: 0, "123")
+    ci = runner.CallInfo.from_call(lambda: 0, "123")
     assert ci.when == "123"
     assert ci.result == 0
     assert "result" in repr(ci)
     assert repr(ci) == "<CallInfo when='123' result: 0>"

-    ci = runner.CallInfo(lambda: 0 / 0, "123")
+    ci = runner.CallInfo.from_call(lambda: 0 / 0, "123")
     assert ci.when == "123"
     assert not hasattr(ci, "result")
     assert repr(ci) == "<CallInfo when='123' exception: division by zero>"
@@ -501,16 +501,6 @@ def test_callinfo():
     assert "exc" in repr(ci)


-def test_callinfo_repr_while_running():
-    def repr_while_running():
-        f = sys._getframe().f_back
-        assert "func" in f.f_locals
-        assert repr(f.f_locals["self"]) == "<CallInfo when='when' result: '<NOTSET>'>"
-
-    ci = runner.CallInfo(repr_while_running, "when")
-    assert repr(ci) == "<CallInfo when='when' result: None>"
-
-
 # design question: do we want general hooks in python files?
 # then something like the following functional tests makes sense
@@ -561,20 +551,16 @@ def test_outcomeexception_passes_except_Exception():


 def test_pytest_exit():
-    try:
+    with pytest.raises(pytest.exit.Exception) as excinfo:
         pytest.exit("hello")
-    except pytest.exit.Exception:
-        excinfo = _pytest._code.ExceptionInfo()
-        assert excinfo.errisinstance(KeyboardInterrupt)
+    assert excinfo.errisinstance(KeyboardInterrupt)


 def test_pytest_fail():
-    try:
+    with pytest.raises(pytest.fail.Exception) as excinfo:
         pytest.fail("hello")
-    except pytest.fail.Exception:
-        excinfo = _pytest._code.ExceptionInfo()
-        s = excinfo.exconly(tryshort=True)
-        assert s.startswith("Failed")
+    s = excinfo.exconly(tryshort=True)
+    assert s.startswith("Failed")


 def test_pytest_exit_msg(testdir):
@@ -683,7 +669,7 @@ def test_exception_printing_skip():
     try:
         pytest.skip("hello")
     except pytest.skip.Exception:
-        excinfo = _pytest._code.ExceptionInfo()
+        excinfo = _pytest._code.ExceptionInfo.from_current()
         s = excinfo.exconly(tryshort=True)
         assert s.startswith("Skipped")
@@ -704,21 +690,17 @@ def test_importorskip(monkeypatch):
         # check that importorskip reports the actual call
         # in this test the test_runner.py file
         assert path.purebasename == "test_runner"
-        pytest.raises(SyntaxError, "pytest.importorskip('x y z')")
-        pytest.raises(SyntaxError, "pytest.importorskip('x=y')")
+        pytest.raises(SyntaxError, pytest.importorskip, "x y z")
+        pytest.raises(SyntaxError, pytest.importorskip, "x=y")
         mod = types.ModuleType("hello123")
         mod.__version__ = "1.3"
         monkeypatch.setitem(sys.modules, "hello123", mod)
-        pytest.raises(
-            pytest.skip.Exception,
-            """
-            pytest.importorskip("hello123", minversion="1.3.1")
-        """,
-        )
+        with pytest.raises(pytest.skip.Exception):
+            pytest.importorskip("hello123", minversion="1.3.1")
         mod2 = pytest.importorskip("hello123", minversion="1.3")
         assert mod2 == mod
     except pytest.skip.Exception:
-        print(_pytest._code.ExceptionInfo())
+        print(_pytest._code.ExceptionInfo.from_current())
         pytest.fail("spurious skip")
@@ -734,13 +716,10 @@ def test_importorskip_dev_module(monkeypatch):
         monkeypatch.setitem(sys.modules, "mockmodule", mod)
         mod2 = pytest.importorskip("mockmodule", minversion="0.12.0")
         assert mod2 == mod
-        pytest.raises(
-            pytest.skip.Exception,
-            """
-            pytest.importorskip('mockmodule1', minversion='0.14.0')""",
-        )
+        with pytest.raises(pytest.skip.Exception):
+            pytest.importorskip("mockmodule1", minversion="0.14.0")
     except pytest.skip.Exception:
-        print(_pytest._code.ExceptionInfo())
+        print(_pytest._code.ExceptionInfo.from_current())
         pytest.fail("spurious skip")
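The importorskip hunks above drop the triple-quoted string argument in favor of the `with pytest.raises(...)` block. A hedged sketch of the resulting pattern — the `hello123` module is the synthetic module registered by the test itself, and the version comparison behavior is assumed to match pytest's documented `minversion` semantics:

```python
import sys
import types

import pytest

# Register a fake module with a known version, as the test does.
mod = types.ModuleType("hello123")
mod.__version__ = "1.3"
sys.modules["hello123"] = mod

# Succeeds: the registered version satisfies the minimum.
mod2 = pytest.importorskip("hello123", minversion="1.3")
assert mod2 is mod

# The minversion check fails, so importorskip raises the skip outcome,
# which the context-manager form of pytest.raises asserts on directly.
with pytest.raises(pytest.skip.Exception):
    pytest.importorskip("hello123", minversion="1.3.1")
```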

Some files were not shown because too many files have changed in this diff.