Preparing release version 3.6.1

This commit is contained in:
parent c58b67c540
commit 66ec0a50b6

@@ -8,6 +8,50 @@

.. towncrier release notes start

Pytest 3.6.1 (2018-06-05)
=========================

Bug Fixes
---------

- Fixed a bug where stdout and stderr were logged twice by junitxml when a test
  was marked xfail. (`#3491 <https://github.com/pytest-dev/pytest/issues/3491>`_)

- Fix ``usefixtures`` mark applied to unittest tests by correctly instantiating
  ``FixtureInfo``. (`#3498 <https://github.com/pytest-dev/pytest/issues/3498>`_)

- Fix assertion rewriter compatibility with libraries that monkey patch
  ``file`` objects. (`#3503 <https://github.com/pytest-dev/pytest/issues/3503>`_)


Improved Documentation
----------------------

- Added a section on how to use fixtures as factories to the fixture
  documentation. (`#3461 <https://github.com/pytest-dev/pytest/issues/3461>`_)


Trivial/Internal Changes
------------------------

- Enable caching for pip/pre-commit in order to reduce build time on
  travis/appveyor. (`#3502 <https://github.com/pytest-dev/pytest/issues/3502>`_)

- Switch pytest to the ``src/`` layout; we have long suggested it as good
  practice, and now we implement it ourselves as well. (`#3513
  <https://github.com/pytest-dev/pytest/issues/3513>`_)

- Fix ``if`` conditions in tests to support 3.7.0b5, where docstring handling
  in the AST was reverted. (`#3530 <https://github.com/pytest-dev/pytest/issues/3530>`_)

- Remove some Python 2.5 compatibility code. (`#3629
  <https://github.com/pytest-dev/pytest/issues/3629>`_)


Pytest 3.6.0 (2018-05-23)
=========================

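For reference, the factory pattern that the new documentation section (#3461)
describes looks roughly like this (a sketch with hypothetical names, not the
exact example from the docs)::

    import pytest

    @pytest.fixture
    def make_customer_record():
        # The fixture returns a factory function instead of a value,
        # so a test can create as many records as it needs.
        def _make(name):
            return {"name": name, "orders": []}
        return _make

    def test_customer_records(make_customer_record):
        customer = make_customer_record("Lisa")
        assert customer["orders"] == []
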
@@ -1 +0,0 @@
Added a section on how to use fixtures as factories to the fixture documentation.

@@ -1 +0,0 @@
Fixed a bug where stdout and stderr were logged twice by junitxml when a test was marked xfail.

@@ -1 +0,0 @@
Fix ``usefixtures`` mark applied to unittest tests by correctly instantiating ``FixtureInfo``.

@@ -1 +0,0 @@
Enable caching for pip/pre-commit in order to reduce build time on travis/appveyor.

@@ -1 +0,0 @@
Fix assertion rewriter compatibility with libraries that monkey patch ``file`` objects.

@@ -1 +0,0 @@
Switch pytest to the src/ layout; we have long suggested it as good practice, and now we implement it ourselves as well.

@@ -1 +0,0 @@
Fix ``if`` conditions in tests to support 3.7.0b5, where docstring handling in the AST was reverted.

@@ -1 +0,0 @@
Remove some Python 2.5 compatibility code.

@@ -6,6 +6,7 @@ Release announcements
   :maxdepth: 2

   release-3.6.1
   release-3.6.0
   release-3.5.1
   release-3.5.0

@@ -0,0 +1,24 @@
pytest-3.6.1
=======================================

pytest 3.6.1 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Jeffrey Rackauckas
* Miro Hrončok
* Niklas Meinzer
* Oliver Bestwalter
* Ronny Pfannschmidt

Happy testing,
The pytest Development Team

@@ -29,17 +29,17 @@ you will see the return value of the function call::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_assert1.py F [100%]

================================= FAILURES =================================
______________________________ test_function _______________________________

def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()

test_assert1.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

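The module under test in this output, reconstructed from the traceback shown
above (a sketch; the return value of ``f`` follows from the ``where 3 = f()``
line)::

    # content of test_assert1.py
    def f():
        return 3

    def test_function():
        assert f() == 4
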
@@ -172,12 +172,12 @@ if you run this module::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_assert2.py F [100%]

================================= FAILURES =================================
___________________________ test_set_comparison ____________________________

def test_set_comparison():
set1 = set("1308")
set2 = set("8035")

@@ -188,7 +188,7 @@ if you run this module::
E Extra items in the right set:
E '5'
E Use -v to get the full diff

test_assert2.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

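The complete test behind this output, reconstructed from the two hunks above
(a sketch)::

    # content of test_assert2.py
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        assert set1 == set2
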
@@ -241,14 +241,14 @@ the conftest file::
F [100%]
================================= FAILURES =================================
_______________________________ test_compare _______________________________

def test_compare():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E assert Comparing Foo instances:
E vals: 1 != 2

test_foocompare.py:11: AssertionError
1 failed in 0.12 seconds

@@ -17,13 +17,13 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
$ pytest -q --fixtures
cache
Return a cache object that can persist state between testing sessions.

cache.get(key, default)
cache.set(key, value)

Keys must be a ``/`` separated value, where the first part is usually the
name of your plugin or application to avoid clashes with other cache users.

Values can be any object handled by the json stdlib module.
capsys
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make

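A short sketch of the ``cache`` fixture described above, mirroring the
``test_caching.py`` example that appears later in this commit (the key name
and cached value are illustrative)::

    import pytest

    @pytest.fixture
    def mydata(cache):
        # cache.get returns the default on a cold cache;
        # cache.set persists a JSON-serializable value across runs.
        val = cache.get("example/value", None)
        if val is None:
            val = 42  # stand-in for an expensive computation
            cache.set("example/value", val)
        return val

    def test_function(mydata):
        assert mydata == 42
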
@@ -49,9 +49,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
pytestconfig
Session-scoped fixture that returns the :class:`_pytest.config.Config` object.

Example::

def test_foo(pytestconfig):
if pytestconfig.getoption("verbose"):
...

@@ -61,9 +61,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
configured reporters, like JUnit XML.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.

Example::

def test_function(record_property):
record_property("example_key", 1)
record_xml_property

@@ -74,9 +74,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
automatically xml-encoded
caplog
Access and control log capturing.

Captured logs are available through the following methods::

* caplog.text -> string containing formatted log output
* caplog.records -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples

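A minimal sketch exercising the ``caplog`` accessors listed above (the logger
name and message are hypothetical)::

    import logging

    logger = logging.getLogger("myapp")

    def test_logs_warning(caplog):
        logger.warning("disk almost full")
        # formatted text, raw records, and (name, level, message) tuples
        assert "disk almost full" in caplog.text
        assert caplog.record_tuples == [
            ("myapp", logging.WARNING, "disk almost full")
        ]
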
@@ -84,7 +84,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch
The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::

monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)

@@ -93,14 +93,14 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch.delenv(name, value, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)

All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
recwarn
Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.

See http://docs.python.org/library/warnings.html for information
on warning categories.
tmpdir_factory

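A quick sketch of the ``monkeypatch`` helpers listed above (the environment
variable name is hypothetical)::

    import os

    def test_config_path(monkeypatch):
        # setenv is undone automatically once the test finishes
        monkeypatch.setenv("APP_CONFIG", "/tmp/test.cfg")
        assert os.environ["APP_CONFIG"] == "/tmp/test.cfg"

    def test_missing_config(monkeypatch):
        # raising=False avoids a KeyError when the target is absent
        monkeypatch.delenv("APP_CONFIG", raising=False)
        assert "APP_CONFIG" not in os.environ
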
@@ -111,9 +111,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.

.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html

no tests ran in 0.12 seconds

You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like::

@@ -49,26 +49,26 @@ If you run this for the first time you will see two failures::
.................F.......F........................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
_______________________________ test_num[25] _______________________________

i = 25

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
2 failed, 48 passed in 0.12 seconds

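The module driving these cache examples, reconstructed verbatim from the
tracebacks above::

    # content of test_50.py
    import pytest

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")
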
@@ -80,31 +80,31 @@ If you then run it with ``--lf``::
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items / 48 deselected
run-last-failure: rerun previous 2 failures

test_50.py FF [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
_______________________________ test_num[25] _______________________________

i = 25

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
================= 2 failed, 48 deselected in 0.12 seconds ==================

@@ -121,31 +121,31 @@ of ``FF`` and dots)::
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items
run-last-failure: rerun previous 2 failures first

test_50.py FF................................................ [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
_______________________________ test_num[25] _______________________________

i = 25

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
=================== 2 failed, 48 passed in 0.12 seconds ====================

@@ -198,13 +198,13 @@ of the sleep::
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

def test_function(mydata):
> assert mydata == 23
E assert 42 == 23

test_caching.py:14: AssertionError
1 failed in 0.12 seconds

@@ -215,13 +215,13 @@ the cache and this will be quick::
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

def test_function(mydata):
> assert mydata == 23
E assert 42 == 23

test_caching.py:14: AssertionError
1 failed in 0.12 seconds

@@ -246,7 +246,7 @@ You can always peek at the content of the cache using the
['test_caching.py::test_function']
example/value contains:
42

======================= no tests ran in 0.12 seconds =======================

Clearing Cache content

@@ -68,16 +68,16 @@ of the failing function and hide the other one::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .F [100%]

================================= FAILURES =================================
________________________________ test_func2 ________________________________

def test_func2():
> assert False
E assert False

test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>

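A sketch of the capture example behind this output, reconstructed from the
traceback and the captured setup line (the passing test body is inferred)::

    # content of test_module.py
    def setup_function(function):
        print("setting up %s" % function)

    def test_func1():
        assert True

    def test_func2():
        assert False
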
@@ -65,9 +65,9 @@ then you can just invoke ``pytest`` without command line options::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 item

mymodule.py . [100%]

========================= 1 passed in 0.12 seconds =========================

It is possible to use fixtures using the ``getfixture`` helper::

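As a sketch of what such a doctest looks like (using the built-in ``tmpdir``
fixture; the file name and content are illustrative)::

    # content of example.rst
    >>> tmp = getfixture('tmpdir')
    >>> p = tmp.join('hello.txt')
    >>> p.write('content')
    >>> p.read()
    'content'
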
@@ -35,9 +35,9 @@ You can then restrict a test run to only run tests marked with ``webtest``::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected

test_server.py::test_send_http PASSED [100%]

================== 1 passed, 3 deselected in 0.12 seconds ==================

Or the inverse, running all tests except the webtest ones::

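For orientation, the ``test_server.py`` module these selection runs operate
on, reconstructed from the node IDs in the output (the bodies are stubs)::

    # content of test_server.py
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass:
        def test_method(self):
            pass
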
@@ -48,11 +48,11 @@ Or the inverse, running all tests except the webtest ones::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected

test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]

================== 3 passed, 1 deselected in 0.12 seconds ==================

Selecting tests based on their node ID

@@ -68,9 +68,9 @@ tests based on their module, class, method, or function name::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item

test_server.py::TestClass::test_method PASSED [100%]

========================= 1 passed in 0.12 seconds =========================

You can also select on the class::

@@ -81,9 +81,9 @@ You can also select on the class::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item

test_server.py::TestClass::test_method PASSED [100%]

========================= 1 passed in 0.12 seconds =========================

Or select multiple nodes::

@@ -94,10 +94,10 @@ Or select multiple nodes::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items

test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]

========================= 2 passed in 0.12 seconds =========================

.. _node-id:

@@ -132,9 +132,9 @@ select tests based on their names::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected

test_server.py::test_send_http PASSED [100%]

================== 1 passed, 3 deselected in 0.12 seconds ==================

And you can also run all tests except the ones that match the keyword::

@@ -145,11 +145,11 @@ And you can also run all tests except the ones that match the keyword::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected

test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]

================== 3 passed, 1 deselected in 0.12 seconds ==================

Or to select "http" and "quick" tests::

@@ -160,10 +160,10 @@ Or to select "http" and "quick" tests::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 2 deselected

test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]

================== 2 passed, 2 deselected in 0.12 seconds ==================

.. note::

@@ -199,21 +199,21 @@ You can ask which markers exist for your test suite - the list includes our just

$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.


For an example on how to add and work with markers from a plugin, see
:ref:`adding a custom marker from a plugin`.

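A brief sketch exercising a few of the markers listed above (conditions and
reasons are illustrative)::

    import sys
    import pytest

    @pytest.mark.skip(reason="no way of currently testing this")
    def test_unsupported():
        pass

    @pytest.mark.skipif('sys.platform == "win32"')
    def test_posix_only():
        pass

    @pytest.mark.xfail(raises=NotImplementedError)
    def test_not_done_yet():
        raise NotImplementedError()
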
@@ -352,9 +352,9 @@ the test needs::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_someenv.py s [100%]

======================== 1 skipped in 0.12 seconds =========================

and here is one that specifies exactly the environment needed::

@@ -364,30 +364,30 @@ and here is one that specifies exactly the environment needed::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_someenv.py . [100%]

========================= 1 passed in 0.12 seconds =========================

The ``--markers`` option always gives you a list of available markers::

$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.


.. _`passing callables to custom markers`:

@@ -523,11 +523,11 @@ then you will see two tests skipped and two executed tests as expected::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_plat.py s.s. [100%]
========================= short test summary info ==========================
SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux

=================== 2 passed, 2 skipped in 0.12 seconds ====================

Note that if you specify a platform via the marker-command line option like this::

@@ -537,9 +537,9 @@ Note that if you specify a platform via the marker-command line option like this
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 3 deselected

test_plat.py . [100%]

================== 1 passed, 3 deselected in 0.12 seconds ==================

then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.

@@ -588,9 +588,9 @@ We can now use the ``-m option`` to select one set::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 2 deselected

test_module.py FF [100%]

================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple

@@ -609,9 +609,9 @@ or to select both "event" and "interface" tests::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 1 deselected

test_module.py FFF [100%]

================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple

@@ -30,9 +30,9 @@ now execute the test specification::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items

test_simple.yml F. [100%]

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed

@@ -63,10 +63,10 @@ consulted when reporting in ``verbose`` mode::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items

test_simple.yml::hello FAILED [ 50%]
test_simple.yml::ok PASSED [100%]

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed

@@ -87,5 +87,5 @@ interesting to just look at the collection tree::
<YamlFile 'test_simple.yml'>
<YamlItem 'hello'>
<YamlItem 'ok'>

======================= no tests ran in 0.12 seconds =======================

@@ -55,13 +55,13 @@ let's run the full monty::
....F [100%]
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________

param1 = 4

def test_compute(param1):
> assert param1 < 4
E assert 4 < 4

test_compute.py:3: AssertionError
1 failed, 4 passed in 0.12 seconds

@@ -151,7 +151,7 @@ objects, they are still using the default pytest representation::
<Function 'test_timedistance_v2[20011211-20011212-expected1]'>
<Function 'test_timedistance_v3[forward]'>
<Function 'test_timedistance_v3[backward]'>

======================= no tests ran in 0.12 seconds =======================

In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs

@@ -198,9 +198,9 @@ this is a fully self-contained example which you can run with::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_scenarios.py .... [100%]

========================= 4 passed in 0.12 seconds =========================

If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::

@@ -218,7 +218,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function 'test_demo2[basic]'>
<Function 'test_demo1[advanced]'>
<Function 'test_demo2[advanced]'>

======================= no tests ran in 0.12 seconds =======================

Note that we told ``metafunc.parametrize()`` that your scenario values

@@ -279,7 +279,7 @@ Let's first see how it looks like at collection time::
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
<Function 'test_db_initialized[d2]'>

======================= no tests ran in 0.12 seconds =======================

And then when we run the test::

@@ -288,15 +288,15 @@ And then when we run the test::
.F [100%]
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________

db = <conftest.DB2 object at 0xdeadbeef>

def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes

test_backends.py:6: Failed
1 failed, 1 passed in 0.12 seconds

@@ -339,7 +339,7 @@ The result of this test will be successful::
collected 1 item
<Module 'test_indirect_list.py'>
<Function 'test_indirect[a-b]'>

======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe

@@ -384,13 +384,13 @@ argument sets to use for each test function. Let's run it::
F.. [100%]
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________

self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2

def test_equals(self, a, b):
> assert a == b
E assert 1 == 2

test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.12 seconds

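A sketch of the parametrized class behind this output, reconstructed from the
failure shown (the two passing argument pairs are hypothetical, chosen so only
the first set fails)::

    # content of test_parametrize.py
    import pytest

    @pytest.mark.parametrize("a, b", [(1, 2), (2, 2), (3, 3)])
    class TestClass(object):
        def test_equals(self, a, b):
            assert a == b
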
@@ -462,11 +462,11 @@ If you run this with reporting for skips enabled::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'

=================== 1 passed, 1 skipped in 0.12 seconds ====================

You'll see that we don't have an ``opt2`` module and thus the second test run

@@ -133,7 +133,7 @@ then the test collection looks like this::
<Instance '()'>
<Function 'simple_check'>
<Function 'complex_check'>

======================= no tests ran in 0.12 seconds =======================

.. note::

@@ -180,7 +180,7 @@ You can always peek at the collection tree without running tests like this::
<Instance '()'>
<Function 'test_method'>
<Function 'test_anothermethod'>

======================= no tests ran in 0.12 seconds =======================

.. _customizing-test-collection:

@@ -243,5 +243,5 @@ file will be left out::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items

======================= no tests ran in 0.12 seconds =======================

@@ -14,111 +14,112 @@ get on the terminal - we are working on that)::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 42 items

failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%]

================================= FAILURES =================================
____________________________ test_generative[0] ____________________________

param1 = 3, param2 = 6

    def test_generative(param1, param2):
>       assert param1 * 2 < param2
E       assert (3 * 2) < 6

-failure_demo.py:16: AssertionError
+failure_demo.py:19: AssertionError
_________________________ TestFailing.test_simple __________________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_simple(self):

        def f():
            return 42

        def g():
            return 43

>       assert f() == g()
E       assert 42 == 43
E       + where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
E       + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()

-failure_demo.py:29: AssertionError
+failure_demo.py:37: AssertionError
____________________ TestFailing.test_simple_multiline _____________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_simple_multiline(self):
-        otherfunc_multi(
-            42,
->           6*9)
+>       otherfunc_multi(42, 6 * 9)

-failure_demo.py:34:
+failure_demo.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 42, b = 54

-    def otherfunc_multi(a,b):
->       assert (a ==
-            b)
+    def otherfunc_multi(a, b):
+>       assert a == b
E       assert 42 == 54

-failure_demo.py:12: AssertionError
+failure_demo.py:15: AssertionError
___________________________ TestFailing.test_not ___________________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_not(self):

        def f():
            return 42

>       assert not f()
E       assert not 42
E       + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()

-failure_demo.py:39: AssertionError
+failure_demo.py:47: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_text(self):
->       assert 'spam' == 'eggs'
+>       assert "spam" == "eggs"
E       AssertionError: assert 'spam' == 'eggs'
E       - spam
E       + eggs

-failure_demo.py:43: AssertionError
+failure_demo.py:53: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_similar_text(self):
->       assert 'foo 1 bar' == 'foo 2 bar'
+>       assert "foo 1 bar" == "foo 2 bar"
E       AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
E       - foo 1 bar
E       ? ^
E       + foo 2 bar
E       ? ^

-failure_demo.py:46: AssertionError
+failure_demo.py:56: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_multiline_text(self):
->       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
+>       assert "foo\nspam\nbar" == "foo\neggs\nbar"
E       AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E       foo
E       - spam
E       + eggs
E       bar

-failure_demo.py:49: AssertionError
+failure_demo.py:59: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_long_text(self):
-        a = '1'*100 + 'a' + '2'*100
-        b = '1'*100 + 'b' + '2'*100
+        a = "1" * 100 + "a" + "2" * 100
+        b = "1" * 100 + "b" + "2" * 100
>       assert a == b
E       AssertionError: assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E       Skipping 90 identical leading characters in diff, use -v to show
@@ -127,15 +128,15 @@ get on the terminal - we are working on that)::
E       ? ^
E       + 1111111111b222222222
E       ? ^

-failure_demo.py:54: AssertionError
+failure_demo.py:64: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_long_text_multiline(self):
-        a = '1\n'*100 + 'a' + '2\n'*100
-        b = '1\n'*100 + 'b' + '2\n'*100
+        a = "1\n" * 100 + "a" + "2\n" * 100
+        b = "1\n" * 100 + "b" + "2\n" * 100
>       assert a == b
E       AssertionError: assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E       Skipping 190 identical leading characters in diff, use -v to show
@@ -145,40 +146,40 @@ get on the terminal - we are working on that)::
E       1
E       1
E       1...
E
E       ...Full output truncated (7 lines hidden), use '-vv' to show

-failure_demo.py:59: AssertionError
+failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_list(self):
>       assert [0, 1, 2] == [0, 1, 3]
E       assert [0, 1, 2] == [0, 1, 3]
E       At index 2 diff: 2 != 3
E       Use -v to get the full diff

-failure_demo.py:62: AssertionError
+failure_demo.py:72: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_list_long(self):
-        a = [0]*100 + [1] + [3]*100
-        b = [0]*100 + [2] + [3]*100
+        a = [0] * 100 + [1] + [3] * 100
+        b = [0] * 100 + [2] + [3] * 100
>       assert a == b
E       assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E       At index 100 diff: 1 != 2
E       Use -v to get the full diff

-failure_demo.py:67: AssertionError
+failure_demo.py:77: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_dict(self):
->       assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
+>       assert {"a": 0, "b": 1, "c": 0} == {"a": 0, "b": 2, "d": 0}
E       AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E       Omitting 1 identical items, use -vv to show
E       Differing items:
@@ -187,16 +188,16 @@ get on the terminal - we are working on that)::
E       {'c': 0}
E       Right contains more items:
E       {'d': 0}...
E
E       ...Full output truncated (2 lines hidden), use '-vv' to show

-failure_demo.py:70: AssertionError
+failure_demo.py:80: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_set(self):
->       assert set([0, 10, 11, 12]) == set([0, 20, 21])
+>       assert {0, 10, 11, 12} == {0, 20, 21}
E       AssertionError: assert {0, 10, 11, 12} == {0, 20, 21}
E       Extra items in the left set:
E       10
@@ -205,37 +206,37 @@ get on the terminal - we are working on that)::
E       Extra items in the right set:
E       20
E       21...
E
E       ...Full output truncated (2 lines hidden), use '-vv' to show

-failure_demo.py:73: AssertionError
+failure_demo.py:83: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_longer_list(self):
->       assert [1,2] == [1,2,3]
+>       assert [1, 2] == [1, 2, 3]
E       assert [1, 2] == [1, 2, 3]
E       Right contains more items, first extra item: 3
E       Use -v to get the full diff

-failure_demo.py:76: AssertionError
+failure_demo.py:86: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_in_list(self):
>       assert 1 in [0, 2, 3, 4, 5]
E       assert 1 in [0, 2, 3, 4, 5]

-failure_demo.py:79: AssertionError
+failure_demo.py:89: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_multiline(self):
-        text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
->       assert 'foo' not in text
+        text = "some multiline\ntext\nwhich\nincludes foo\nand a\ntail"
+>       assert "foo" not in text
E       AssertionError: assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E       'foo' is contained here:
E       some multiline
@@ -244,239 +245,254 @@ get on the terminal - we are working on that)::
E       includes foo
E       ? +++
E       and a...
E
E       ...Full output truncated (2 lines hidden), use '-vv' to show

-failure_demo.py:83: AssertionError
+failure_demo.py:93: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single(self):
-        text = 'single foo line'
->       assert 'foo' not in text
+        text = "single foo line"
+>       assert "foo" not in text
E       AssertionError: assert 'foo' not in 'single foo line'
E       'foo' is contained here:
E       single foo line
E       ? +++

-failure_demo.py:87: AssertionError
+failure_demo.py:97: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single_long(self):
-        text = 'head ' * 50 + 'foo ' + 'tail ' * 20
->       assert 'foo' not in text
+        text = "head " * 50 + "foo " + "tail " * 20
+>       assert "foo" not in text
E       AssertionError: assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E       'foo' is contained here:
E       head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E       ? +++

-failure_demo.py:91: AssertionError
+failure_demo.py:101: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single_long_term(self):
-        text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
->       assert 'f'*70 not in text
+        text = "head " * 50 + "f" * 70 + "tail " * 20
+>       assert "f" * 70 not in text
E       AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E       'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E       head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E       ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-failure_demo.py:95: AssertionError
+failure_demo.py:105: AssertionError
______________________________ test_attribute ______________________________

    def test_attribute():

        class Foo(object):
            b = 1

        i = Foo()
>       assert i.b == 2
E       assert 1 == 2
E       + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b

-failure_demo.py:102: AssertionError
+failure_demo.py:114: AssertionError
_________________________ test_attribute_instance __________________________

    def test_attribute_instance():

        class Foo(object):
            b = 1

>       assert Foo().b == 2
E       AssertionError: assert 1 == 2
E       + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E       + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()

-failure_demo.py:108: AssertionError
+failure_demo.py:122: AssertionError
__________________________ test_attribute_failure __________________________

    def test_attribute_failure():

        class Foo(object):

            def _get_b(self):
-                raise Exception('Failed to get attrib')
+                raise Exception("Failed to get attrib")

            b = property(_get_b)

        i = Foo()
>       assert i.b == 2

-failure_demo.py:117:
+failure_demo.py:135:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>

    def _get_b(self):
->       raise Exception('Failed to get attrib')
+>       raise Exception("Failed to get attrib")
E       Exception: Failed to get attrib

-failure_demo.py:114: Exception
+failure_demo.py:130: Exception
_________________________ test_attribute_multiple __________________________

    def test_attribute_multiple():

        class Foo(object):
            b = 1

        class Bar(object):
            b = 2

>       assert Foo().b == Bar().b
E       AssertionError: assert 1 == 2
E       + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef>.b
E       + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
E       + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
E       + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()

-failure_demo.py:125: AssertionError
+failure_demo.py:146: AssertionError
__________________________ TestRaises.test_raises __________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raises(self):
-        s = 'qwe'
+        s = "qwe"  # NOQA
>       raises(TypeError, "int(s)")

-failure_demo.py:134:
+failure_demo.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>       int(s)
E       ValueError: invalid literal for int() with base 10: 'qwe'

-<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:615>:1: ValueError
+<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:634>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raises_doesnt(self):
>       raises(IOError, "int('3')")
E       Failed: DID NOT RAISE <class 'OSError'>

-failure_demo.py:137: Failed
+failure_demo.py:160: Failed
__________________________ TestRaises.test_raise ___________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raise(self):
>       raise ValueError("demo error")
E       ValueError: demo error

-failure_demo.py:140: ValueError
+failure_demo.py:163: ValueError
________________________ TestRaises.test_tupleerror ________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_tupleerror(self):
->       a,b = [1]
+>       a, b = [1]  # NOQA
E       ValueError: not enough values to unpack (expected 2, got 1)

-failure_demo.py:143: ValueError
+failure_demo.py:166: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
-        l = [1,2,3]
-        print ("l is %r" % l)
->       a,b = l.pop()
+        items = [1, 2, 3]
+        print("items is %r" % items)
+>       a, b = items.pop()
E       TypeError: 'int' object is not iterable

-failure_demo.py:148: TypeError
+failure_demo.py:171: TypeError
--------------------------- Captured stdout call ---------------------------
-l is [1, 2, 3]
+items is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_some_error(self):
->       if namenotexi:
+>       if namenotexi:  # NOQA
E       NameError: name 'namenotexi' is not defined

-failure_demo.py:151: NameError
+failure_demo.py:174: NameError
____________________ test_dynamic_compile_shows_nicely _____________________

    def test_dynamic_compile_shows_nicely():
        import imp
        import sys
-        src = 'def foo():\n assert 1 == 0\n'
-        name = 'abc-123'
+
+        src = "def foo():\n assert 1 == 0\n"
+        name = "abc-123"
        module = imp.new_module(name)
-        code = _pytest._code.compile(src, name, 'exec')
+        code = _pytest._code.compile(src, name, "exec")
        py.builtin.exec_(code, module.__dict__)
        sys.modules[name] = module
>       module.foo()

-failure_demo.py:168:
+failure_demo.py:192:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def foo():
>       assert 1 == 0
E       AssertionError

-<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:165>:2: AssertionError
+<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:189>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_complex_error(self):

        def f():
            return 44

        def g():
            return 43

>       somefunc(f(), g())

-failure_demo.py:178:
+failure_demo.py:205:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
-failure_demo.py:9: in somefunc
-    otherfunc(x,y)
+failure_demo.py:11: in somefunc
+    otherfunc(x, y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 44, b = 43

-    def otherfunc(a,b):
->       assert a==b
+    def otherfunc(a, b):
+>       assert a == b
E       assert 44 == 43

-failure_demo.py:6: AssertionError
+failure_demo.py:7: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_z1_unpack_error(self):
-        l = []
->       a,b = l
+        items = []
+>       a, b = items
E       ValueError: not enough values to unpack (expected 2, got 0)

-failure_demo.py:182: ValueError
+failure_demo.py:209: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_z2_type_error(self):
-        l = 3
->       a,b = l
+        items = 3
+>       a, b = items
E       TypeError: 'int' object is not iterable

-failure_demo.py:186: TypeError
+failure_demo.py:213: TypeError
______________________ TestMoreErrors.test_startswith ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_startswith(self):
        s = "123"
        g = "456"
@@ -484,108 +500,119 @@ get on the terminal - we are working on that)::
E       AssertionError: assert False
E       + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E       + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith

-failure_demo.py:191: AssertionError
+failure_demo.py:218: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_startswith_nested(self):

        def f():
            return "123"

        def g():
            return "456"

>       assert f().startswith(g())
E       AssertionError: assert False
E       + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E       + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E       + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E       + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()

-failure_demo.py:198: AssertionError
+failure_demo.py:228: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_global_func(self):
>       assert isinstance(globf(42), float)
E       assert False
E       + where False = isinstance(43, float)
E       + where 43 = globf(42)

-failure_demo.py:201: AssertionError
+failure_demo.py:231: AssertionError
_______________________ TestMoreErrors.test_instance _______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_instance(self):
-        self.x = 6*7
+        self.x = 6 * 7
>       assert self.x != 42
E       assert 42 != 42
E       + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x

-failure_demo.py:205: AssertionError
+failure_demo.py:235: AssertionError
_______________________ TestMoreErrors.test_compare ________________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_compare(self):
>       assert globf(10) < 5
E       assert 11 < 5
E       + where 11 = globf(10)

-failure_demo.py:208: AssertionError
+failure_demo.py:238: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_try_finally(self):
        x = 1
        try:
>           assert x == 0
E           assert 1 == 0

-failure_demo.py:213: AssertionError
+failure_demo.py:243: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_single_line(self):

        class A(object):
            a = 1

        b = 2
>       assert A.a == b, "A.a appears not to be b"
E       AssertionError: A.a appears not to be b
E       assert 1 == 2
E       + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a

-failure_demo.py:224: AssertionError
+failure_demo.py:256: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_multiline(self):

        class A(object):
            a = 1

        b = 2
->       assert A.a == b, "A.a appears not to be b\n" \
-            "or does not appear to be b\none of those"
+>       assert (
+            A.a == b
+        ), "A.a appears not to be b\n" "or does not appear to be b\none of those"
E       AssertionError: A.a appears not to be b
E       or does not appear to be b
E       one of those
E       assert 1 == 2
E       + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a

-failure_demo.py:230: AssertionError
+failure_demo.py:264: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_custom_repr(self):

        class JSON(object):
            a = 1

            def __repr__(self):
                return "This is JSON\n{\n 'foo': 'bar'\n}"

        a = JSON()
        b = 2
>       assert a.a == b, a
@@ -595,12 +622,12 @@ get on the terminal - we are working on that)::
E       }
E       assert 1 == 2
E       + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a

-failure_demo.py:240: AssertionError
+failure_demo.py:278: AssertionError
============================= warnings summary =============================
None
  Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
  Please use Metafunc.parametrize instead.

-- Docs: http://doc.pytest.org/en/latest/warnings.html
================== 42 failed, 1 warnings in 0.12 seconds ===================
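The warnings summary above points at ``Metafunc.parametrize``; a minimal sketch of a ``pytest_generate_tests`` hook migrated off the deprecated ``Metafunc.addcall``, with parameter names and values assumed from the ``test_generative`` failure shown earlier::

    # conftest.py -- sketch only; the parameter values are assumptions
    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            # previously: metafunc.addcall(funcargs=dict(param1=3, param2=6))
            metafunc.parametrize("param1,param2", [(3, 6)])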
@@ -49,17 +49,17 @@ Let's run this without supplying our new option::
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
-            print ("first")
+            print("first")
        elif cmdopt == "type2":
-            print ("second")
+            print("second")
>       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
@@ -71,17 +71,17 @@ And now with supplying a command line option::
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type2'

    def test_answer(cmdopt):
        if cmdopt == "type1":
-            print ("first")
+            print("first")
        elif cmdopt == "type2":
-            print ("second")
+            print("second")
>       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
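For reference, the ``cmdopt`` fixture exercised in the two runs above is typically provided by a ``conftest.py`` along these lines — a sketch, not the exact file from the repository::

    # conftest.py
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
        )


    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")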
@@ -124,7 +124,7 @@ directory with the above conftest.py::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======================= no tests ran in 0.12 seconds =======================

.. _`excontrolskip`:
@@ -182,11 +182,11 @@ and when running it will see a skipped "slow" test::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] test_module.py:8: need --runslow option to run

=================== 1 passed, 1 skipped in 0.12 seconds ====================

Or run it including the ``slow`` marked test::
@@ -196,9 +196,9 @@ Or run it including the ``slow`` marked test::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .. [100%]

========================= 2 passed in 0.12 seconds =========================
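The skip/run behaviour shown in the two runs above comes from a ``--runslow`` option combined with a collection hook; a minimal sketch of the ``conftest.py``::

    # conftest.py
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--runslow", action="store_true", default=False, help="run slow tests"
        )


    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            # --runslow given on the command line: do not skip slow tests
            return
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)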
Writing well integrated assertion helpers
@@ -236,12 +236,12 @@ Let's run our little function::
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________

    def test_something():
>       checkconfig(42)
E       Failed: not configured: 42

-test_checkconfig.py:8: Failed
+test_checkconfig.py:11: Failed
1 failed in 0.12 seconds

If you only want to hide certain exceptions, you can set ``__tracebackhide__``
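The helper whose hidden frame produces the clean ``Failed: not configured: 42`` output above looks roughly like this; ``__tracebackhide__`` keeps the helper itself out of the traceback (it can also be set to a callable receiving the ``ExceptionInfo`` if only certain exceptions should be hidden)::

    # test_checkconfig.py -- sketch of the assertion helper
    import pytest


    def checkconfig(x):
        __tracebackhide__ = True
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % x)


    def test_something():
        checkconfig(42)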
@@ -335,7 +335,7 @@ which will add the string to the test header accordingly::
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe
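Both header lines seen in these runs ("project deps: mylib-1.1" above, "did you?" below) come from the ``pytest_report_header`` hook; a combined sketch — the docs use two separate conftest files, so treat this merge as illustrative::

    # conftest.py
    def pytest_report_header(config):
        if config.getoption("verbose") > 0:
            return ["info1: did you know that ...", "did you?"]
        return "project deps: mylib-1.1"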
@@ -363,7 +363,7 @@ which will add info only when run with "--v"::
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items

======================= no tests ran in 0.12 seconds =======================

and nothing when run plainly::
@@ -373,7 +373,7 @@ and nothing when run plainly::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======================= no tests ran in 0.12 seconds =======================

profiling test duration
@@ -410,13 +410,13 @@ Now we can profile which test functions execute the slowest::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_some_are_slow.py ... [100%]

========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
-0.11s call test_some_are_slow.py::test_funcfast
+0.10s call test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.12 seconds =========================
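A test file producing durations like the above is presumably just three timed sleeps, run with ``pytest --durations=3``; a sketch matching the numbers shown::

    # test_some_are_slow.py
    import time


    def test_funcfast():
        time.sleep(0.1)


    def test_funcslow1():
        time.sleep(0.2)


    def test_funcslow2():
        time.sleep(0.3)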
incremental testing - test steps
@@ -482,19 +482,19 @@ If we run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_step.py .Fx. [100%]

================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

    def test_modification(self):
>       assert 0
E       assert 0

-test_step.py:9: AssertionError
+test_step.py:11: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)
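The ``xfail`` with reason ``previous test failed (test_modification)`` is driven by an ``incremental`` marker implemented in ``conftest.py``; a minimal sketch::

    # conftest.py
    import pytest


    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                # remember the failing test on its class for later tests
                parent = item.parent
                parent._previousfailed = item


    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)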
@@ -563,12 +563,12 @@ We can run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items

test_step.py .Fx. [ 57%]
a/test_db.py F [ 71%]
a/test_db2.py F [ 85%]
b/test_error.py E [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1
@@ -576,37 +576,37 @@ We can run this::
E       fixture 'db' not found
>       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
>       use 'pytest --fixtures [testpath]' for help on them.

$REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

    def test_modification(self):
>       assert 0
E       assert 0

-test_step.py:9: AssertionError
+test_step.py:11: AssertionError
_________________________________ test_a1 __________________________________

db = <conftest.DB object at 0xdeadbeef>

    def test_a1(db):
>       assert 0, db # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef>
E       assert 0

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

db = <conftest.DB object at 0xdeadbeef>

    def test_a2(db):
>       assert 0, db # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef>
E       assert 0

a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========
@@ -674,26 +674,26 @@ and run them::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py FF [100%]

================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmpdir = local('PYTEST_TMPDIR/test_fail10')

    def test_fail1(tmpdir):
>       assert 0
E       assert 0

test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
>       assert 0
E       assert 0

-test_module.py:4: AssertionError
+test_module.py:6: AssertionError
========================= 2 failed in 0.12 seconds =========================

you will have a "failures" file which contains the failing test ids::
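The "failures" file mentioned above is written from a ``pytest_runtest_makereport`` hookwrapper; a sketch — the ``tmpdir`` extra is what records the ``PYTEST_TMPDIR/test_fail10`` path for ``test_fail1``::

    # conftest.py
    import os

    import pytest


    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield
        rep = outcome.get_result()
        if rep.when == "call" and rep.failed:
            mode = "a" if os.path.exists("failures") else "w"
            with open("failures", mode) as f:
                # also record the tmpdir location when the test used that fixture
                if "tmpdir" in item.fixturenames:
                    extra = " (%s)" % item.funcargs["tmpdir"]
                else:
                    extra = ""
                f.write(rep.nodeid + extra + "\n")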
@@ -773,37 +773,37 @@ and run it::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F

================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________

    @pytest.fixture
    def other():
>       assert 0
E       assert 0

-test_module.py:6: AssertionError
+test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________

something = None

    def test_call_fails(something):
>       assert 0
E       assert 0

-test_module.py:12: AssertionError
+test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
>       assert 0
E       assert 0

-test_module.py:15: AssertionError
+test_module.py:19: AssertionError
==================== 2 failed, 1 error in 0.12 seconds =====================

You'll see that the fixture finalizers could use the precise reporting
|
|
|
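
A reconstruction of the ``test_module.py`` implied by the tracebacks above (line numbers are approximate and the body of the ``something`` fixture is an assumption)::

    # content of test_module.py (sketch)
    import pytest

    @pytest.fixture
    def other():
        assert 0  # setup fails, producing the ERROR section above

    @pytest.fixture
    def something():
        return None

    def test_setup_fails(something, other):
        pass

    def test_call_fails(something):
        assert 0

    def test_fail2():
        assert 0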

@@ -73,20 +73,20 @@ marked ``smtp`` fixture function. Running the test looks like this::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_smtpsimple.py F [100%]

================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
>       assert 0 # for demo purposes
E       assert 0

test_smtpsimple.py:11: AssertionError
========================= 1 failed in 0.12 seconds =========================
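
The hunk shows the docs' first fixture example; a minimal sketch of the module it implies (host, port and timeout are assumptions)::

    # content of test_smtpsimple.py (sketch)
    import smtplib
    import pytest

    @pytest.fixture
    def smtp():
        # the docs connect to a real SMTP server
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert 0  # for demo purposes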

@@ -209,32 +209,32 @@ inspect what is going on and can now run the tests::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py FF [100%]

================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert b"smtp.gmail.com" in msg
>       assert 0 # for demo purposes
E       assert 0

test_module.py:6: AssertionError
________________________________ test_noop _________________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_noop(smtp):
        response, msg = smtp.noop()
        assert response == 250
>       assert 0 # for demo purposes
E       assert 0

test_module.py:11: AssertionError
========================= 2 failed in 0.12 seconds =========================

@@ -337,7 +337,7 @@ Let's execute it::

$ pytest -s -q --tb=no
FFteardown smtp

2 failed in 0.12 seconds

We see that the ``smtp`` instance is finalized after the two
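
The ``teardown smtp`` line comes from the finalization code after a fixture's ``yield``; a minimal sketch, with the connection details assumed as before::

    @pytest.fixture(scope="module")
    def smtp():
        smtp = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
        yield smtp  # provide the fixture value to the tests
        print("teardown smtp")
        smtp.close()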

@@ -446,7 +446,7 @@ again, nothing much has changed::

$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)

2 failed in 0.12 seconds

Let's quickly create another test module that actually sets the

@@ -567,51 +567,51 @@ So let's just do another run::

FFFF [100%]
================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert b"smtp.gmail.com" in msg
>       assert 0 # for demo purposes
E       assert 0

test_module.py:6: AssertionError
________________________ test_noop[smtp.gmail.com] _________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_noop(smtp):
        response, msg = smtp.noop()
        assert response == 250
>       assert 0 # for demo purposes
E       assert 0

test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
>       assert b"smtp.gmail.com" in msg
E       AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'

test_module.py:5: AssertionError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
________________________ test_noop[mail.python.org] ________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

    def test_noop(smtp):
        response, msg = smtp.noop()
        assert response == 250
>       assert 0 # for demo purposes
E       assert 0

test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
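
The four test IDs and the ``finalizing`` prints point at the docs' parametrized, module-scoped ``smtp`` fixture; a minimal sketch (port and timeout are assumptions)::

    # content of conftest.py (sketch)
    import smtplib
    import pytest

    @pytest.fixture(scope="module",
                    params=["smtp.gmail.com", "mail.python.org"])
    def smtp(request):
        smtp = smtplib.SMTP(request.param, 587, timeout=5)
        yield smtp
        print("finalizing %s" % smtp)
        smtp.close()

Each parameter produces its own fixture instance, which is why the whole test module runs once per server.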

@@ -683,7 +683,7 @@ Running the above tests results in the following test IDs being used::

<Function 'test_noop[smtp.gmail.com]'>
<Function 'test_ehlo[mail.python.org]'>
<Function 'test_noop[mail.python.org]'>

======================= no tests ran in 0.12 seconds =======================

.. _`fixture-parametrize-marks`:

@@ -713,11 +713,11 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``::

cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 3 items

test_fixture_marks.py::test_data[0] PASSED [ 33%]
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]

=================== 2 passed, 1 skipped in 0.12 seconds ====================

.. _`interdependent fixtures`:
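
The skipped third parameter comes from attaching a mark inside the fixture's ``params`` list; a sketch of the example the hunk header describes::

    # content of test_fixture_marks.py (sketch)
    import pytest

    @pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
    def data_set(request):
        return request.param

    def test_data(data_set):
        pass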

@@ -756,10 +756,10 @@ Here we declare an ``app`` fixture which receives the previously defined

cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items

test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED [ 50%]
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED [100%]

========================= 2 passed in 0.12 seconds =========================

Due to the parametrization of ``smtp`` the test will run twice with two
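
A sketch of the ``app`` fixture the hunk header refers to, stacked on top of the parametrized ``smtp`` fixture (the ``App`` class is the docs' placeholder)::

    # content of test_appsetup.py (sketch)
    import pytest

    class App(object):
        def __init__(self, smtp):
            self.smtp = smtp

    @pytest.fixture(scope="module")
    def app(smtp):
        return App(smtp)

    def test_smtp_exists(app):
        assert app.smtp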

@@ -825,26 +825,26 @@ Let's run the tests in verbose mode and with looking at the print-output::

cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items

test_module.py::test_0[1] SETUP otherarg 1
RUN test0 with otherarg 1
PASSED TEARDOWN otherarg 1

test_module.py::test_0[2] SETUP otherarg 2
RUN test0 with otherarg 2
PASSED TEARDOWN otherarg 2

test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED
test_module.py::test_2[mod1-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED TEARDOWN otherarg 1

test_module.py::test_2[mod1-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED TEARDOWN otherarg 2

test_module.py::test_1[mod2] TEARDOWN modarg mod1
SETUP modarg mod2
RUN test1 with modarg mod2

@@ -852,13 +852,13 @@ Let's run the tests in verbose mode and with looking at the print-output::

test_module.py::test_2[mod2-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED TEARDOWN otherarg 1

test_module.py::test_2[mod2-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2


========================= 8 passed in 0.12 seconds =========================

You can see that the parametrized module-scoped ``modarg`` resource caused an
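
A reconstruction of the two differently-scoped, parametrized fixtures behind the ``SETUP``/``TEARDOWN`` lines above (print spacing is approximate)::

    # content of test_module.py (sketch)
    import pytest

    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        print("  SETUP modarg %s" % param)
        yield param
        print("  TEARDOWN modarg %s" % param)

    @pytest.fixture(scope="function", params=[1, 2])
    def otherarg(request):
        param = request.param
        print("  SETUP otherarg %s" % param)
        yield param
        print("  TEARDOWN otherarg %s" % param)

    def test_0(otherarg):
        print("  RUN test0 with otherarg %s" % otherarg)

    def test_1(modarg):
        print("  RUN test1 with modarg %s" % modarg)

    def test_2(otherarg, modarg):
        print("  RUN test2 with otherarg %s and modarg %s"
              % (otherarg, modarg))

The module-scoped ``modarg`` is kept alive across tests and torn down as late as possible, which explains the interleaved teardown order in the output.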

@@ -50,17 +50,17 @@ That’s it. You can now execute the test function::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_sample.py F [100%]

================================= FAILURES =================================
_______________________________ test_answer ________________________________

    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
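
The module under test here is fully recoverable from the traceback::

    # content of test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5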

@@ -117,15 +117,15 @@ Once you develop multiple tests, you may want to group them into a class. pytest

.F [100%]
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass object at 0xdeadbeef>

    def test_two(self):
        x = "hello"
>       assert hasattr(x, 'check')
E       AssertionError: assert False
E        +  where False = hasattr('hello', 'check')

test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds
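
A sketch of ``test_class.py``: ``test_two`` is taken from the traceback, while ``test_one`` is an assumption to account for the leading ``.`` in the progress output::

    # content of test_class.py (sketch)
    class TestClass(object):
        def test_one(self):
            x = "this"
            assert "h" in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, 'check')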

@@ -147,14 +147,14 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look

F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')

    def test_needsfiles(tmpdir):
        print (tmpdir)
>       assert 0
E       assert 0

test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0

@@ -29,18 +29,18 @@ To execute it::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_sample.py F [100%]

================================= FAILURES =================================
_______________________________ test_answer ________________________________

    def test_answer():
>       assert inc(3) == 5
E       assert 4 == 5
E        +  where 4 = inc(3)

-test_sample.py:5: AssertionError
+test_sample.py:6: AssertionError
========================= 1 failed in 0.12 seconds =========================

Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.

@@ -57,14 +57,14 @@ them in turn::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..F [100%]

================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________

test_input = '6*9', expected = 42

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),

@@ -74,7 +74,7 @@ them in turn::

>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.12 seconds ====================
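
Put together, the two hunks above show the classic parametrize example::

    # content of test_expectation.py (reconstructed from the tracebacks)
    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected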

@@ -106,9 +106,9 @@ Let's run this::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..x [100%]

=================== 2 passed, 1 xfailed in 0.12 seconds ====================

The one parameter set which caused a failure previously now
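
The ``x`` in the progress output comes from marking the failing parameter set as an expected failure; a sketch of that change::

    # the failing (6*9, 42) set is wrapped in pytest.param with an xfail mark
    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        pytest.param("6*9", 42, marks=pytest.mark.xfail),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected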

@@ -174,15 +174,15 @@ Let's also run with a stringinput that will lead to a failing test::

F [100%]
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________

stringinput = '!'

    def test_valid_string(stringinput):
>       assert stringinput.isalpha()
E       AssertionError: assert False
E        +  where False = <built-in method isalpha of str object at 0xdeadbeef>()
E        +    where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha

test_strings.py:3: AssertionError
1 failed in 0.12 seconds
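
The ``stringinput`` fixture argument is fed from a command line option via ``pytest_generate_tests``; a sketch of the ``conftest.py`` wiring (option name taken from the docs example)::

    # content of conftest.py (sketch)
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.getoption("stringinput"))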

@@ -334,12 +334,12 @@ Running it with the report-on-xfail option gives this output::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items

xfail_demo.py xxxxxxx [100%]
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
  reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
  condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4

@@ -349,7 +349,7 @@ Running it with the report-on-xfail option gives this output::

XFAIL xfail_demo.py::test_hello6
  reason: reason
XFAIL xfail_demo.py::test_hello7

======================== 7 xfailed in 0.12 seconds =========================

.. _`skip/xfail with parametrize`:
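
A few of the ``xfail`` variants behind the summary above (a sketch; the docs' ``xfail_demo.py`` contains seven such tests)::

    # excerpt of xfail_demo.py (sketch)
    import os
    import pytest

    xfail = pytest.mark.xfail

    @xfail
    def test_hello():
        assert 0

    @xfail(run=False)  # reported as "reason: [NOTRUN]"
    def test_hello2():
        assert 0

    @xfail("hasattr(os, 'sep')")  # condition shown in the summary
    def test_hello3():
        assert 0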

@@ -32,14 +32,14 @@ Running this would result in a passed test except for the last

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_tmpdir.py F [100%]

================================= FAILURES =================================
_____________________________ test_create_file _____________________________

tmpdir = local('PYTEST_TMPDIR/test_create_file0')

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")

@@ -47,7 +47,7 @@ Running this would result in a passed test except for the last

        assert len(tmpdir.listdir()) == 1
>       assert 0
E       assert 0

test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.12 seconds =========================
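
Reassembled from the two hunks above (the ``p.read()`` assertion sits in the elided middle of the example)::

    # content of test_tmpdir.py (sketch)
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0  # the deliberate final failure the prose mentions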

@@ -130,30 +130,30 @@ the ``self.db`` values in the traceback::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_unittest_db.py FF [100%]

================================= FAILURES =================================
___________________________ MyTest.test_method1 ____________________________

self = <test_unittest_db.MyTest testMethod=test_method1>

    def test_method1(self):
        assert hasattr(self, "db")
>       assert 0, self.db # fail for demo purposes
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E       assert 0

test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________

self = <test_unittest_db.MyTest testMethod=test_method2>

    def test_method2(self):
>       assert 0, self.db # fail for demo purposes
E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E       assert 0

test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.12 seconds =========================
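
A sketch of the fixture/unittest wiring implied by the tracebacks above, following the docs example of injecting a pytest fixture into a ``unittest.TestCase``::

    # content of conftest.py (sketch)
    import pytest

    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB(object):
            pass
        # attach a DB object to the class under test
        request.cls.db = DummyDB()

    # content of test_unittest_db.py (sketch)
    import unittest
    import pytest

    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")
            assert 0, self.db  # fail for demo purposes

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes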

@@ -502,7 +502,7 @@ hook was invoked::

$ python myinvoke.py
. [100%]*** test run reporting finishing


.. note::
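
A minimal sketch of the in-process invocation the hunk refers to, with a plugin object supplying the hook that prints the trailing message::

    # content of myinvoke.py (sketch)
    import pytest

    class MyPlugin(object):
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    pytest.main(["-qq"], plugins=[MyPlugin()])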

@@ -25,14 +25,14 @@ Running pytest now produces this output::

platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_show_warnings.py . [100%]

============================= warnings summary =============================
test_show_warnings.py::test_one
  $REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
    warnings.warn(UserWarning("api v1, should use functions from v2"))

-- Docs: http://doc.pytest.org/en/latest/warnings.html
=================== 1 passed, 1 warnings in 0.12 seconds ===================

@@ -45,17 +45,17 @@ them into errors::

F [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________

    def test_one():
>       assert api_v1() == 1

test_show_warnings.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def api_v1():
>       warnings.warn(UserWarning("api v1, should use functions from v2"))
E       UserWarning: api v1, should use functions from v2

test_show_warnings.py:4: UserWarning
1 failed in 0.12 seconds
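
The module behind both warnings hunks is recoverable from the tracebacks::

    # content of test_show_warnings.py
    import warnings

    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1

    def test_one():
        assert api_v1() == 1

With a plain run the warning only shows up in the summary; with an ``error``-level warning filter (pytest accepts Python-style ``-W`` filters) the same warning becomes the failure shown in the second hunk.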