Remove regendoc normalization for '=' and '_' headers

regendoc now uses the default width for non-tty terminals (80 columns), so the new progress indicator aligns correctly with the full-width section headers.
parent b533c2600a
commit e0d236c031
@@ -13,8 +13,6 @@ PAPEROPT_letter = -D latex_paper_size=letter
 ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

 REGENDOC_ARGS := \
-        --normalize "/={8,} (.*) ={8,}/======= \1 ========/" \
-        --normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \
         --normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \
         --normalize "@/tmp/pytest-of-.*/pytest-\d+@PYTEST_TMPDIR@" \
         --normalize "@pytest-(\d+)\\.[^ ,]+@pytest-\1.x.y@" \

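For context, the two dropped ``--normalize`` rules rewrote any wide ``=`` or ``_`` banner down to a fixed short form, which is why the doc hunks below replace the short banners with full-width ones. A minimal sketch of what such a rule did (the regex and replacement are taken verbatim from the removed lines; the snippet itself is illustrative, not part of the commit)::

    import re

    # The removed rule collapsed a wide "=" banner into a fixed-width one.
    rule = re.compile(r"={8,} (.*) ={8,}")
    line = "=========================== test session starts ============================"
    print(rule.sub(r"======= \1 ========", line))
    # prints: ======= test session starts ========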
@@ -25,15 +25,15 @@ to assert that your function returns a certain value. If this assertion fails
 you will see the return value of the function call::

     $ pytest test_assert1.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item

-    test_assert1.py F
+    test_assert1.py F                                                    [100%]

-    ======= FAILURES ========
-    _______ test_function ________
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________

         def test_function():
     >       assert f() == 4
@@ -41,7 +41,7 @@ you will see the return value of the function call::
     E        +  where 3 = f()

     test_assert1.py:5: AssertionError
-    ======= 1 failed in 0.12 seconds ========
+    ========================= 1 failed in 0.12 seconds =========================

 ``pytest`` has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary
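The failing test behind this output block is, reconstructed from the context lines above (``def test_function():``, ``assert f() == 4``, ``where 3 = f()``), roughly::

    # test_assert1.py -- sketch, not part of the diff
    def f():
        return 3

    def test_function():
        assert f() == 4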
@@ -168,15 +168,15 @@ when it encounters comparisons. For example::
 if you run this module::

     $ pytest test_assert2.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item

-    test_assert2.py F
+    test_assert2.py F                                                    [100%]

-    ======= FAILURES ========
-    _______ test_set_comparison ________
+    ================================= FAILURES =================================
+    ___________________________ test_set_comparison ____________________________

         def test_set_comparison():
             set1 = set("1308")
@@ -190,7 +190,7 @@ if you run this module::
     E         Use -v to get the full diff

     test_assert2.py:5: AssertionError
-    ======= 1 failed in 0.12 seconds ========
+    ========================= 1 failed in 0.12 seconds =========================

 Special comparisons are done for a number of cases:

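Likewise, a sketch of the set-comparison test driving this block; only ``set1 = set("1308")`` is visible in the hunk, the second set is assumed from the docs' example::

    # test_assert2.py -- sketch
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        assert set1 == set2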
@@ -238,9 +238,9 @@ you can run the test module and get the custom output defined in
 the conftest file::

     $ pytest -q test_foocompare.py
-    F
-    ======= FAILURES ========
-    _______ test_compare ________
+    F                                                                    [100%]
+    ================================= FAILURES =================================
+    _______________________________ test_compare _______________________________

         def test_compare():
             f1 = Foo(1)
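The custom output in this run comes from a ``pytest_assertrepr_compare`` hook; a sketch following the docs' ``Foo`` example (the hook body is assumed, only ``f1 = Foo(1)`` is visible in the hunk)::

    # conftest.py -- sketch
    from test_foocompare import Foo

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return ["Comparing Foo instances:",
                    "   vals: %s != %s" % (left.val, right.val)]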
@@ -91,11 +91,23 @@ You can ask for available builtin or project-custom
 capsys
     Enable capturing of writes to sys.stdout/sys.stderr and make
     captured output available via ``capsys.readouterr()`` method calls
-    which return a ``(out, err)`` tuple.
+    which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
+    objects.
+capsysbinary
+    Enable capturing of writes to sys.stdout/sys.stderr and make
+    captured output available via ``capsysbinary.readouterr()`` method calls
+    which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``bytes``
+    objects.
 capfd
     Enable capturing of writes to file descriptors 1 and 2 and make
     captured output available via ``capfd.readouterr()`` method calls
-    which return a ``(out, err)`` tuple.
+    which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
+    objects.
+capfdbinary
+    Enable capturing of writes to file descriptors 1 and 2 and make
+    captured output available via ``capfdbinary.readouterr()`` method calls
+    which return a ``(out, err)`` tuple. ``out`` and ``err`` will be
+    ``bytes`` objects.
 doctest_namespace
     Inject names into the doctest namespace.
 pytestconfig
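A minimal usage sketch of the text-capturing fixtures described above (standard pytest API; the test itself is illustrative)::

    def test_prints(capsys):
        print("hello")
        out, err = capsys.readouterr()  # (out, err) pair of text objects
        assert out == "hello\n"
        assert err == ""

``capfd`` is used the same way but captures at the file-descriptor level, so output written by subprocesses and C extensions is caught as well.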
@@ -104,6 +116,14 @@ You can ask for available builtin or project-custom
     Add extra xml properties to the tag for the calling test.
     The fixture is callable with ``(name, value)``, with value being automatically
     xml-encoded.
+caplog
+    Access and control log capturing.
+
+    Captured logs are available through the following methods::
+
+    * caplog.text() -> string containing formatted log output
+    * caplog.records() -> list of logging.LogRecord instances
+    * caplog.record_tuples() -> list of (logger_name, level, message) tuples
 monkeypatch
     The returned ``monkeypatch`` fixture provides these
     helper methods to modify objects, dictionaries or os.environ::
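A short sketch of the new ``caplog`` fixture in use. Note the entry above lists method-style accessors (``caplog.text()``); later pytest releases expose these as properties, which is what this sketch assumes::

    import logging

    def test_warning_logged(caplog):
        logging.getLogger(__name__).warning("something happened")
        assert "something happened" in caplog.text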
@@ -46,9 +46,9 @@ First, let's create 50 test invocation of which only 2 fail::
 If you run this for the first time you will see two failures::

     $ pytest -q
-    .................F.......F........................
-    ======= FAILURES ========
-    _______ test_num[17] ________
+    .................F.......F........................                   [100%]
+    ================================= FAILURES =================================
+    _______________________________ test_num[17] _______________________________

     i = 17

@@ -59,7 +59,7 @@ If you run this for the first time you will see two failures::
     E           Failed: bad luck

     test_50.py:6: Failed
-    _______ test_num[25] ________
+    _______________________________ test_num[25] _______________________________

     i = 25

@@ -75,16 +75,16 @@ If you then run it with ``--lf``::
 If you then run it with ``--lf``::

     $ pytest --lf
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 50 items
     run-last-failure: rerun previous 2 failures

-    test_50.py FF
+    test_50.py FF                                                        [100%]

-    ======= FAILURES ========
-    _______ test_num[17] ________
+    ================================= FAILURES =================================
+    _______________________________ test_num[17] _______________________________

     i = 17

@@ -95,7 +95,7 @@ If you then run it with ``--lf``::
     E           Failed: bad luck

     test_50.py:6: Failed
-    _______ test_num[25] ________
+    _______________________________ test_num[25] _______________________________

     i = 25

@@ -106,8 +106,8 @@ If you then run it with ``--lf``::
     E           Failed: bad luck

     test_50.py:6: Failed
-    ======= 48 tests deselected ========
-    ======= 2 failed, 48 deselected in 0.12 seconds ========
+    =========================== 48 tests deselected ============================
+    ================= 2 failed, 48 deselected in 0.12 seconds ==================

 You have run only the two failing test from the last run, while 48 tests have
 not been run ("deselected").
@@ -117,16 +117,16 @@ previous failures will be executed first (as can be seen from the series
 of ``FF`` and dots)::

     $ pytest --ff
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 50 items
     run-last-failure: rerun previous 2 failures first

-    test_50.py FF................................................
+    test_50.py FF................................................        [100%]

-    ======= FAILURES ========
-    _______ test_num[17] ________
+    ================================= FAILURES =================================
+    _______________________________ test_num[17] _______________________________

     i = 17

@@ -137,7 +137,7 @@ of ``FF`` and dots)::
     E           Failed: bad luck

     test_50.py:6: Failed
-    _______ test_num[25] ________
+    _______________________________ test_num[25] _______________________________

     i = 25

@@ -148,7 +148,7 @@ of ``FF`` and dots)::
     E           Failed: bad luck

     test_50.py:6: Failed
-    ======= 2 failed, 48 passed in 0.12 seconds ========
+    =================== 2 failed, 48 passed in 0.12 seconds ====================

 .. _`config.cache`:

@@ -182,9 +182,9 @@ If you run this command once, it will take a while because
 of the sleep::

     $ pytest -q
-    F
-    ======= FAILURES ========
-    _______ test_function ________
+    F                                                                    [100%]
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________

     mydata = 42

@@ -199,9 +199,9 @@ If you run it a second time the value will be retrieved from
 the cache and this will be quick::

     $ pytest -q
-    F
-    ======= FAILURES ========
-    _______ test_function ________
+    F                                                                    [100%]
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________

     mydata = 42

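Both runs exercise the docs' cross-session caching example; a sketch of the mechanism (``request.config.cache.get``/``set`` is the documented API, the sleep stands in for an expensive computation)::

    import time
    import pytest

    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            time.sleep(9 * 0.6)  # expensive work happens on the first run only
            val = 42
            request.config.cache.set("example/value", val)
        return val

    def test_function(mydata):
        assert mydata == 23  # fails on purpose so the cached value is displayed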
@@ -222,7 +222,7 @@ You can always peek at the content of the cache using the
 ``--cache-show`` command line option::

     $ py.test --cache-show
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     cachedir: $REGENDOC_TMPDIR/.cache
@@ -232,7 +232,7 @@ You can always peek at the content of the cache using the
     example/value contains:
       42

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 Clearing Cache content
 -------------------------------
@@ -63,15 +63,15 @@ and running this module will show you precisely the output
 of the failing function and hide the other one::

     $ pytest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    test_module.py .F
+    test_module.py .F                                                    [100%]

-    ======= FAILURES ========
-    _______ test_func2 ________
+    ================================= FAILURES =================================
+    ________________________________ test_func2 ________________________________

         def test_func2():
     >       assert False
@@ -80,7 +80,7 @@ of the failing function and hide the other one::
     test_module.py:9: AssertionError
     -------------------------- Captured stdout setup ---------------------------
     setting up <function test_func2 at 0xdeadbeef>
-    ======= 1 failed, 1 passed in 0.12 seconds ========
+    ==================== 1 failed, 1 passed in 0.12 seconds ====================

 Accessing captured output from a test function
 ---------------------------------------------------
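The module behind this capture example is, reconstructed from the context lines (the ``setting up`` text is printed by a ``setup_function`` hook and only shown for the failing test), roughly::

    # test_module.py -- sketch
    def setup_function(function):
        print("setting up %s" % function)

    def test_func1():
        assert True

    def test_func2():
        assert False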
@@ -61,14 +61,14 @@ and another like this::
 then you can just invoke ``pytest`` without command line options::

     $ pytest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 1 item

-    mymodule.py .
+    mymodule.py .                                                        [100%]

-    ======= 1 passed in 0.12 seconds ========
+    ========================= 1 passed in 0.12 seconds =========================

 It is possible to use fixtures using the ``getfixture`` helper::

@@ -30,32 +30,32 @@ You can "mark" a test function with custom metadata like this::
 You can then restrict a test run to only run tests marked with ``webtest``::

     $ pytest -v -m webtest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items

-    test_server.py::test_send_http PASSED
+    test_server.py::test_send_http PASSED                                [100%]

-    ======= 3 tests deselected ========
-    ======= 1 passed, 3 deselected in 0.12 seconds ========
+    ============================ 3 tests deselected ============================
+    ================== 1 passed, 3 deselected in 0.12 seconds ==================

 Or the inverse, running all tests except the webtest ones::

     $ pytest -v -m "not webtest"
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items

-    test_server.py::test_something_quick PASSED
-    test_server.py::test_another PASSED
-    test_server.py::TestClass::test_method PASSED
+    test_server.py::test_something_quick PASSED                          [ 33%]
+    test_server.py::test_another PASSED                                  [ 66%]
+    test_server.py::TestClass::test_method PASSED                        [100%]

-    ======= 1 tests deselected ========
-    ======= 3 passed, 1 deselected in 0.12 seconds ========
+    ============================ 1 tests deselected ============================
+    ================== 3 passed, 1 deselected in 0.12 seconds ==================

 Selecting tests based on their node ID
 --------------------------------------
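The module these selection runs operate on is, following the markers example in the docs, approximately::

    # test_server.py -- sketch
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass(object):
        def test_method(self):
            pass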
@@ -65,42 +65,42 @@ arguments to select only specified tests. This makes it easy to select
 tests based on their module, class, method, or function name::

     $ pytest -v test_server.py::TestClass::test_method
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 1 item

-    test_server.py::TestClass::test_method PASSED
+    test_server.py::TestClass::test_method PASSED                        [100%]

-    ======= 1 passed in 0.12 seconds ========
+    ========================= 1 passed in 0.12 seconds =========================

 You can also select on the class::

     $ pytest -v test_server.py::TestClass
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 1 item

-    test_server.py::TestClass::test_method PASSED
+    test_server.py::TestClass::test_method PASSED                        [100%]

-    ======= 1 passed in 0.12 seconds ========
+    ========================= 1 passed in 0.12 seconds =========================

 Or select multiple nodes::

     $ pytest -v test_server.py::TestClass test_server.py::test_send_http
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 2 items

-    test_server.py::TestClass::test_method PASSED
-    test_server.py::test_send_http PASSED
+    test_server.py::TestClass::test_method PASSED                        [ 50%]
+    test_server.py::test_send_http PASSED                                [100%]

-    ======= 2 passed in 0.12 seconds ========
+    ========================= 2 passed in 0.12 seconds =========================

 .. _node-id:

@@ -129,47 +129,47 @@ exact match on markers that ``-m`` provides. This makes it easy to
 select tests based on their names::

     $ pytest -v -k http  # running with the above defined example module
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items

-    test_server.py::test_send_http PASSED
+    test_server.py::test_send_http PASSED                                [100%]

-    ======= 3 tests deselected ========
-    ======= 1 passed, 3 deselected in 0.12 seconds ========
+    ============================ 3 tests deselected ============================
+    ================== 1 passed, 3 deselected in 0.12 seconds ==================

 And you can also run all tests except the ones that match the keyword::

     $ pytest -k "not send_http" -v
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items

-    test_server.py::test_something_quick PASSED
-    test_server.py::test_another PASSED
-    test_server.py::TestClass::test_method PASSED
+    test_server.py::test_something_quick PASSED                          [ 33%]
+    test_server.py::test_another PASSED                                  [ 66%]
+    test_server.py::TestClass::test_method PASSED                        [100%]

-    ======= 1 tests deselected ========
-    ======= 3 passed, 1 deselected in 0.12 seconds ========
+    ============================ 1 tests deselected ============================
+    ================== 3 passed, 1 deselected in 0.12 seconds ==================

 Or to select "http" and "quick" tests::

     $ pytest -k "http or quick" -v
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items

-    test_server.py::test_send_http PASSED
-    test_server.py::test_something_quick PASSED
+    test_server.py::test_send_http PASSED                                [ 50%]
+    test_server.py::test_something_quick PASSED                          [100%]

-    ======= 2 tests deselected ========
-    ======= 2 passed, 2 deselected in 0.12 seconds ========
+    ============================ 2 tests deselected ============================
+    ================== 2 passed, 2 deselected in 0.12 seconds ==================

 .. note::

@@ -354,26 +354,26 @@ and an example invocations specifying a different environment than what
 the test needs::

     $ pytest -E stage2
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item

-    test_someenv.py s
+    test_someenv.py s                                                    [100%]

-    ======= 1 skipped in 0.12 seconds ========
+    ======================== 1 skipped in 0.12 seconds =========================

 and here is one that specifies exactly the environment needed::

     $ pytest -E stage1
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item

-    test_someenv.py .
+    test_someenv.py .                                                    [100%]

-    ======= 1 passed in 0.12 seconds ========
+    ========================= 1 passed in 0.12 seconds =========================

 The ``--markers`` option always gives you a list of available markers::

@@ -432,7 +432,7 @@ The output is as follows::

     $ pytest -q -s
     Marker info name=my_marker args=(<function hello_world at 0xdeadbeef>,) kwars={}
-    .
+    .                                                                    [100%]
     1 passed in 0.12 seconds

 We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
@@ -477,7 +477,7 @@ Let's run this without capturing output and see what we get::
     glob args=('function',) kwargs={'x': 3}
     glob args=('class',) kwargs={'x': 2}
     glob args=('module',) kwargs={'x': 1}
-    .
+    .                                                                    [100%]
     1 passed in 0.12 seconds

 marking platform specific tests with pytest
@@ -530,29 +530,29 @@ Let's do a little test file to show how this looks like::
 then you will see two tests skipped and two executed tests as expected::

     $ pytest -rs # this option reports skip reasons
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_plat.py s.s.
-    ======= short test summary info ========
+    test_plat.py s.s.                                                    [100%]
+    ========================= short test summary info ==========================
     SKIP [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux

-    ======= 2 passed, 2 skipped in 0.12 seconds ========
+    =================== 2 passed, 2 skipped in 0.12 seconds ====================

 Note that if you specify a platform via the marker-command line option like this::

     $ pytest -m linux
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_plat.py .
+    test_plat.py .                                                       [100%]

-    ======= 3 tests deselected ========
-    ======= 1 passed, 3 deselected in 0.12 seconds ========
+    ============================ 3 tests deselected ============================
+    ================== 1 passed, 3 deselected in 0.12 seconds ==================

 then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.

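One way to implement the platform skipping shown here, sketched with the pytest 3.x ``get_marker`` API (this hook style was later replaced by ``iter_markers``, so treat the exact calls as version-dependent)::

    # conftest.py -- sketch
    import sys
    import pytest

    ALL = set("darwin linux win32".split())

    def pytest_runtest_setup(item):
        if isinstance(item, pytest.Function):
            plat = sys.platform
            # skip tests that carry a platform marker other than the current one
            if not item.get_marker(plat) and ALL.intersection(item.keywords):
                pytest.skip("cannot run on platform %s" % plat)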
@@ -596,47 +596,47 @@ We want to dynamically define two markers and can do it in a
 We can now use the ``-m option`` to select one set::

     $ pytest -m interface --tb=short
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_module.py FF
+    test_module.py FF                                                    [100%]

-    ======= FAILURES ========
-    _______ test_interface_simple ________
+    ================================= FAILURES =================================
+    __________________________ test_interface_simple ___________________________
     test_module.py:3: in test_interface_simple
         assert 0
     E   assert 0
-    _______ test_interface_complex ________
+    __________________________ test_interface_complex __________________________
     test_module.py:6: in test_interface_complex
         assert 0
     E   assert 0
-    ======= 2 tests deselected ========
-    ======= 2 failed, 2 deselected in 0.12 seconds ========
+    ============================ 2 tests deselected ============================
+    ================== 2 failed, 2 deselected in 0.12 seconds ==================

 or to select both "event" and "interface" tests::

     $ pytest -m "interface or event" --tb=short
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_module.py FFF
+    test_module.py FFF                                                   [100%]

-    ======= FAILURES ========
-    _______ test_interface_simple ________
+    ================================= FAILURES =================================
+    __________________________ test_interface_simple ___________________________
     test_module.py:3: in test_interface_simple
         assert 0
     E   assert 0
-    _______ test_interface_complex ________
+    __________________________ test_interface_complex __________________________
     test_module.py:6: in test_interface_complex
         assert 0
     E   assert 0
-    _______ test_event_simple ________
+    ____________________________ test_event_simple _____________________________
     test_module.py:9: in test_event_simple
         assert 0
     E   assert 0
-    ======= 1 tests deselected ========
-    ======= 3 failed, 1 deselected in 0.12 seconds ========
+    ============================ 1 tests deselected ============================
+    ================== 3 failed, 1 deselected in 0.12 seconds ==================

@@ -26,19 +26,19 @@ and if you installed `PyYAML`_ or a compatible YAML-parser you can
 now execute the test specification::

     nonpython $ pytest test_simple.yml
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     collected 2 items

-    test_simple.yml F.
+    test_simple.yml F.                                                   [100%]

-    ======= FAILURES ========
-    _______ usecase: hello ________
+    ================================= FAILURES =================================
+    ______________________________ usecase: hello ______________________________
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ======= 1 failed, 1 passed in 0.12 seconds ========
+    ==================== 1 failed, 1 passed in 0.12 seconds ====================

 .. regendoc:wipe

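The collection machinery behind ``test_simple.yml`` is a ``conftest.py`` along the lines of the docs' example; hook and constructor signatures shown here are the pytest 3.x ones and changed in later releases::

    # conftest.py -- sketch
    import pytest

    def pytest_collect_file(parent, path):
        if path.ext == ".yml" and path.basename.startswith("test"):
            return YamlFile(path, parent)

    class YamlFile(pytest.File):
        def collect(self):
            import yaml  # requires PyYAML, as noted above
            raw = yaml.safe_load(self.fspath.open())
            for name, spec in sorted(raw.items()):
                yield YamlItem(name, self, spec)

    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super(YamlItem, self).__init__(name, parent)
            self.spec = spec

        def runtest(self):
            for name, value in sorted(self.spec.items()):
                if name != value:  # the "test" checks that key equals value
                    raise YamlException(self, name, value)

        def repr_failure(self, excinfo):
            """called when self.runtest() raises an exception"""
            if isinstance(excinfo.value, YamlException):
                return "\n".join([
                    "usecase execution failed",
                    "   spec failed: %r: %r" % excinfo.value.args[1:3],
                    "   no further details known at this point.",
                ])

        def reportinfo(self):
            return self.fspath, 0, "usecase: %s" % self.name

    class YamlException(Exception):
        """custom exception for error reporting"""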
@@ -58,21 +58,21 @@ your own domain specific testing language this way.
 consulted when reporting in ``verbose`` mode::

     nonpython $ pytest -v
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     collecting ... collected 2 items

-    test_simple.yml::hello FAILED
-    test_simple.yml::ok PASSED
+    test_simple.yml::hello FAILED                                        [ 50%]
+    test_simple.yml::ok PASSED                                           [100%]

-    ======= FAILURES ========
-    _______ usecase: hello ________
+    ================================= FAILURES =================================
+    ______________________________ usecase: hello ______________________________
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ======= 1 failed, 1 passed in 0.12 seconds ========
+    ==================== 1 failed, 1 passed in 0.12 seconds ====================

 .. regendoc:wipe

@@ -80,7 +80,7 @@ While developing your custom test collection and execution it's also
 interesting to just look at the collection tree::

     nonpython $ pytest --collect-only
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     collected 2 items
@@ -88,4 +88,4 @@ interesting to just look at the collection tree::
       <YamlItem 'hello'>
       <YamlItem 'ok'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================
@@ -45,16 +45,16 @@ Now we add a test configuration like this::
 This means that we only run 2 tests if we do not pass ``--all``::

     $ pytest -q test_compute.py
-    ..
+    ..                                                                   [100%]
     2 passed in 0.12 seconds

 We run only two computations, so we see two dots.
 let's run the full monty::

     $ pytest -q --all
-    ....F
-    ======= FAILURES ========
-    _______ test_compute[4] ________
+    ....F                                                                [100%]
+    ================================= FAILURES =================================
+    _____________________________ test_compute[4] ______________________________

     param1 = 4

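A sketch of the option plumbing this run relies on (the ``--all`` flag and the ``param1 < 4`` assertion follow the docs' example; the last of the five parameters fails, producing the ``....F`` line)::

    # conftest.py -- sketch
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true", help="run all combinations")

    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            end = 5 if metafunc.config.getoption("all") else 2
            metafunc.parametrize("param1", range(end))

    # test_compute.py -- sketch
    def test_compute(param1):
        assert param1 < 4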
@@ -138,7 +138,7 @@ objects, they are still using the default pytest representation::


     $ pytest test_time.py --collect-only
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 8 items
@@ -152,7 +152,7 @@ objects, they are still using the default pytest representation::
       <Function 'test_timedistance_v3[forward]'>
       <Function 'test_timedistance_v3[backward]'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
 together with the actual data, instead of listing them separately.
@@ -194,20 +194,20 @@ only have to work a bit to construct the correct arguments for pytest's
 this is a fully self-contained example which you can run with::

     $ pytest test_scenarios.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_scenarios.py ....
+    test_scenarios.py ....                                               [100%]

-    ======= 4 passed in 0.12 seconds ========
+    ========================= 4 passed in 0.12 seconds =========================

 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::


     $ pytest --collect-only test_scenarios.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items
@@ -219,7 +219,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
       <Function 'test_demo1[advanced]'>
       <Function 'test_demo2[advanced]'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 Note that we told ``metafunc.parametrize()`` that your scenario values
 should be considered class-scoped. With pytest-2.3 this leads to a
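For reference, the scenario machinery these two runs exercise looks roughly like this, per the docs' ``test_scenarios.py`` example::

    # test_scenarios.py -- sketch
    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

    scenario1 = ("basic", {"attribute": "value"})
    scenario2 = ("advanced", {"attribute": "value2"})

    class TestSampleWithScenarios(object):
        scenarios = [scenario1, scenario2]

        def test_demo1(self, attribute):
            assert isinstance(attribute, str)

        def test_demo2(self, attribute):
            assert isinstance(attribute, str)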
@@ -272,7 +272,7 @@ creates a database object for the actual test invocations::
 Let's first see how it looks like at collection time::

     $ pytest test_backends.py --collect-only
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items
@@ -280,14 +280,14 @@ Let's first see how it looks like at collection time::
       <Function 'test_db_initialized[d1]'>
       <Function 'test_db_initialized[d2]'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 And then when we run the test::

     $ pytest -q test_backends.py
-    .F
-    ======= FAILURES ========
-    _______ test_db_initialized[d2] ________
+    .F                                                                   [100%]
+    ================================= FAILURES =================================
+    _________________________ test_db_initialized[d2] __________________________

     db = <conftest.DB2 object at 0xdeadbeef>

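The ``db`` fixture that parametrizes this run is, per the docs' example, roughly::

    # conftest.py -- sketch
    import pytest

    class DB1(object):
        "one database object"

    class DB2(object):
        "alternative database object"

    @pytest.fixture(params=["d1", "d2"])
    def db(request):
        if request.param == "d1":
            return DB1()
        return DB2()

    # test_backends.py -- sketch
    import pytest

    def test_db_initialized(db):
        # fails for the DB2 backend, matching the ".F" output above
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")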
@@ -333,14 +333,14 @@ will be passed to respective fixture function::
 The result of this test will be successful::

     $ pytest test_indirect_list.py --collect-only
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item
     <Module 'test_indirect_list.py'>
       <Function 'test_indirect[a-b]'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 .. regendoc:wipe

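The module collected here is, per the docs' indirect-list example (only ``x`` goes through its fixture, ``y`` receives the plain value)::

    # test_indirect_list.py -- sketch
    import pytest

    @pytest.fixture(scope="function")
    def x(request):
        return request.param * 3

    @pytest.fixture(scope="function")
    def y(request):
        return request.param * 2

    @pytest.mark.parametrize("x, y", [("a", "b")], indirect=["x"])
    def test_indirect(x, y):
        assert x == "aaa"
        assert y == "b"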
@@ -381,9 +381,9 @@ Our test generator looks up a class-level definition which specifies which
 argument sets to use for each test function. Let's run it::

     $ pytest -q
-    F..
-    ======= FAILURES ========
-    _______ TestClass.test_equals[1-2] ________
+    F..                                                                  [100%]
+    ================================= FAILURES =================================
+    ________________________ TestClass.test_equals[1-2] ________________________

     self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2

@@ -411,10 +411,11 @@ is to be run with different sets of arguments for its three arguments:
 Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::

     . $ pytest -rs -q multipython.py
-    sssssssssssssss.........sss.........sss.........
-    ======= short test summary info ========
-    SKIP [21] $REGENDOC_TMPDIR/CWD/multipython.py:24: 'python2.6' not found
-    27 passed, 21 skipped in 0.12 seconds
+    ssssssssssssssssssssssss...                                          [100%]
+    ========================= short test summary info ==========================
+    SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:24: 'python2.7' not found
+    SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:24: 'python3.4' not found
+    3 passed, 24 skipped in 0.12 seconds

 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -460,16 +461,16 @@ And finally a little test module::
 If you run this with reporting for skips enabled::

     $ pytest -rs test_module.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    test_module.py .s
-    ======= short test summary info ========
+    test_module.py .s                                                    [100%]
+    ========================= short test summary info ==========================
     SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'

-    ======= 1 passed, 1 skipped in 0.12 seconds ========
+    =================== 1 passed, 1 skipped in 0.12 seconds ====================

 You'll see that we don't have a ``opt2`` module and thus the second test run
 of our ``test_func1`` was skipped. A few notes:
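The skip reported above comes from ``pytest.importorskip``; a sketch of the fixture per the docs (the assertion in the test is illustrative only)::

    # conftest.py -- sketch
    import pytest

    @pytest.fixture(params=["opt1", "opt2"])
    def basemod(request):
        return pytest.importorskip(request.param)

    # test_module.py -- sketch
    def test_func1(basemod):
        assert basemod.func1() == 1  # illustrative; runs once per importable module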
@@ -116,7 +116,7 @@ that match ``*_check``. For example, if we have::
 then the test collection looks like this::

     $ pytest --collect-only
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 2 items
@@ -126,7 +126,7 @@ then the test collection looks like this::
       <Function 'simple_check'>
       <Function 'complex_check'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 .. note::

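These two items are collected because the project's ``pytest.ini`` remaps the ``python_files``/``python_classes``/``python_functions`` patterns so that ``*_check`` functions are picked up; a module matching the tree above would be roughly::

    # check_myapp.py -- sketch
    class CheckMyApp(object):
        def simple_check(self):
            pass

        def complex_check(self):
            pass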
@@ -162,7 +162,7 @@ Finding out what is collected
 You can always peek at the collection tree without running tests like this::

     . $ pytest --collect-only pythoncollection.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 3 items
@@ -173,7 +173,7 @@ You can always peek at the collection tree without running tests like this::
         <Function 'test_method'>
         <Function 'test_anothermethod'>

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 .. _customizing-test-collection:

@@ -231,9 +231,9 @@ If you run with a Python 3 interpreter both the one test and the ``setup.py``
 file will be left out::

     $ pytest --collect-only
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 0 items

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================
@@ -10,15 +10,15 @@ not showing the nice colors here in the HTML that you
 get on the terminal - we are working on that)::

     assertion $ pytest failure_demo.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR/assertion, inifile:
     collected 42 items

-    failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
+    failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF          [100%]

-    ======= FAILURES ========
-    _______ test_generative[0] ________
+    ================================= FAILURES =================================
+    ____________________________ test_generative[0] ____________________________

     param1 = 3, param2 = 6

@@ -27,7 +27,7 @@ get on the terminal - we are working on that)::
     E       assert (3 * 2) < 6

     failure_demo.py:16: AssertionError
-    _______ TestFailing.test_simple ________
+    _________________________ TestFailing.test_simple __________________________

     self = <failure_demo.TestFailing object at 0xdeadbeef>

@ -43,7 +43,7 @@ get on the terminal - we are working on that)::
|
||||||
E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()
|
E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()
|
||||||
|
|
||||||
failure_demo.py:29: AssertionError
|
failure_demo.py:29: AssertionError
|
||||||
_______ TestFailing.test_simple_multiline ________
|
____________________ TestFailing.test_simple_multiline _____________________
|
||||||
|
|
||||||
self = <failure_demo.TestFailing object at 0xdeadbeef>
|
self = <failure_demo.TestFailing object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -63,7 +63,7 @@ get on the terminal - we are working on that)::
|
||||||
E assert 42 == 54
|
E assert 42 == 54
|
||||||
|
|
||||||
failure_demo.py:12: AssertionError
|
failure_demo.py:12: AssertionError
|
||||||
_______ TestFailing.test_not ________
|
___________________________ TestFailing.test_not ___________________________
|
||||||
|
|
||||||
self = <failure_demo.TestFailing object at 0xdeadbeef>
|
self = <failure_demo.TestFailing object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -75,7 +75,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()
|
E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()
|
||||||
|
|
||||||
failure_demo.py:39: AssertionError
|
failure_demo.py:39: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_text ________
|
_________________ TestSpecialisedExplanations.test_eq_text _________________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -86,7 +86,7 @@ get on the terminal - we are working on that)::
|
||||||
E + eggs
|
E + eggs
|
||||||
|
|
||||||
failure_demo.py:43: AssertionError
|
failure_demo.py:43: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_similar_text ________
|
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -99,7 +99,7 @@ get on the terminal - we are working on that)::
|
||||||
E ? ^
|
E ? ^
|
||||||
|
|
||||||
failure_demo.py:46: AssertionError
|
failure_demo.py:46: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_multiline_text ________
|
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -112,7 +112,7 @@ get on the terminal - we are working on that)::
|
||||||
E bar
|
E bar
|
||||||
|
|
||||||
failure_demo.py:49: AssertionError
|
failure_demo.py:49: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_long_text ________
|
______________ TestSpecialisedExplanations.test_eq_long_text _______________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -129,7 +129,7 @@ get on the terminal - we are working on that)::
|
||||||
E ? ^
|
E ? ^
|
||||||
|
|
||||||
failure_demo.py:54: AssertionError
|
failure_demo.py:54: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_long_text_multiline ________
|
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -149,7 +149,7 @@ get on the terminal - we are working on that)::
|
||||||
E ...Full output truncated (7 lines hidden), use '-vv' to show
|
E ...Full output truncated (7 lines hidden), use '-vv' to show
|
||||||
|
|
||||||
failure_demo.py:59: AssertionError
|
failure_demo.py:59: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_list ________
|
_________________ TestSpecialisedExplanations.test_eq_list _________________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -160,7 +160,7 @@ get on the terminal - we are working on that)::
|
||||||
E Use -v to get the full diff
|
E Use -v to get the full diff
|
||||||
|
|
||||||
failure_demo.py:62: AssertionError
|
failure_demo.py:62: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_list_long ________
|
______________ TestSpecialisedExplanations.test_eq_list_long _______________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -173,7 +173,7 @@ get on the terminal - we are working on that)::
|
||||||
E Use -v to get the full diff
|
E Use -v to get the full diff
|
||||||
|
|
||||||
failure_demo.py:67: AssertionError
|
failure_demo.py:67: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_dict ________
|
_________________ TestSpecialisedExplanations.test_eq_dict _________________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -191,7 +191,7 @@ get on the terminal - we are working on that)::
|
||||||
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
||||||
|
|
||||||
failure_demo.py:70: AssertionError
|
failure_demo.py:70: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_set ________
|
_________________ TestSpecialisedExplanations.test_eq_set __________________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -209,7 +209,7 @@ get on the terminal - we are working on that)::
|
||||||
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
||||||
|
|
||||||
failure_demo.py:73: AssertionError
|
failure_demo.py:73: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_eq_longer_list ________
|
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -220,7 +220,7 @@ get on the terminal - we are working on that)::
|
||||||
E Use -v to get the full diff
|
E Use -v to get the full diff
|
||||||
|
|
||||||
failure_demo.py:76: AssertionError
|
failure_demo.py:76: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_in_list ________
|
_________________ TestSpecialisedExplanations.test_in_list _________________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -229,7 +229,7 @@ get on the terminal - we are working on that)::
|
||||||
E assert 1 in [0, 2, 3, 4, 5]
|
E assert 1 in [0, 2, 3, 4, 5]
|
||||||
|
|
||||||
failure_demo.py:79: AssertionError
|
failure_demo.py:79: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_not_in_text_multiline ________
|
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -248,7 +248,7 @@ get on the terminal - we are working on that)::
|
||||||
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
||||||
|
|
||||||
failure_demo.py:83: AssertionError
|
failure_demo.py:83: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_not_in_text_single ________
|
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -261,7 +261,7 @@ get on the terminal - we are working on that)::
|
||||||
E ? +++
|
E ? +++
|
||||||
|
|
||||||
failure_demo.py:87: AssertionError
|
failure_demo.py:87: AssertionError
|
||||||
_______ TestSpecialisedExplanations.test_not_in_text_single_long ________
|
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
|
||||||
|
|
||||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -287,7 +287,7 @@ get on the terminal - we are working on that)::
|
||||||
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||||
|
|
||||||
failure_demo.py:95: AssertionError
|
failure_demo.py:95: AssertionError
|
||||||
_______ test_attribute ________
|
______________________________ test_attribute ______________________________
|
||||||
|
|
||||||
def test_attribute():
|
def test_attribute():
|
||||||
class Foo(object):
|
class Foo(object):
|
||||||
|
@ -298,7 +298,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
|
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
|
||||||
|
|
||||||
failure_demo.py:102: AssertionError
|
failure_demo.py:102: AssertionError
|
||||||
_______ test_attribute_instance ________
|
_________________________ test_attribute_instance __________________________
|
||||||
|
|
||||||
def test_attribute_instance():
|
def test_attribute_instance():
|
||||||
class Foo(object):
|
class Foo(object):
|
||||||
|
@ -309,7 +309,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
|
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
|
||||||
|
|
||||||
failure_demo.py:108: AssertionError
|
failure_demo.py:108: AssertionError
|
||||||
_______ test_attribute_failure ________
|
__________________________ test_attribute_failure __________________________
|
||||||
|
|
||||||
def test_attribute_failure():
|
def test_attribute_failure():
|
||||||
class Foo(object):
|
class Foo(object):
|
||||||
|
@ -329,7 +329,7 @@ get on the terminal - we are working on that)::
|
||||||
E Exception: Failed to get attrib
|
E Exception: Failed to get attrib
|
||||||
|
|
||||||
failure_demo.py:114: Exception
|
failure_demo.py:114: Exception
|
||||||
_______ test_attribute_multiple ________
|
_________________________ test_attribute_multiple __________________________
|
||||||
|
|
||||||
def test_attribute_multiple():
|
def test_attribute_multiple():
|
||||||
class Foo(object):
|
class Foo(object):
|
||||||
|
@ -344,7 +344,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
|
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
|
||||||
|
|
||||||
failure_demo.py:125: AssertionError
|
failure_demo.py:125: AssertionError
|
||||||
_______ TestRaises.test_raises ________
|
__________________________ TestRaises.test_raises __________________________
|
||||||
|
|
||||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -359,7 +359,7 @@ get on the terminal - we are working on that)::
|
||||||
E ValueError: invalid literal for int() with base 10: 'qwe'
|
E ValueError: invalid literal for int() with base 10: 'qwe'
|
||||||
|
|
||||||
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:580>:1: ValueError
|
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:580>:1: ValueError
|
||||||
_______ TestRaises.test_raises_doesnt ________
|
______________________ TestRaises.test_raises_doesnt _______________________
|
||||||
|
|
||||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -368,7 +368,7 @@ get on the terminal - we are working on that)::
|
||||||
E Failed: DID NOT RAISE <class 'OSError'>
|
E Failed: DID NOT RAISE <class 'OSError'>
|
||||||
|
|
||||||
failure_demo.py:137: Failed
|
failure_demo.py:137: Failed
|
||||||
_______ TestRaises.test_raise ________
|
__________________________ TestRaises.test_raise ___________________________
|
||||||
|
|
||||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -377,7 +377,7 @@ get on the terminal - we are working on that)::
|
||||||
E ValueError: demo error
|
E ValueError: demo error
|
||||||
|
|
||||||
failure_demo.py:140: ValueError
|
failure_demo.py:140: ValueError
|
||||||
_______ TestRaises.test_tupleerror ________
|
________________________ TestRaises.test_tupleerror ________________________
|
||||||
|
|
||||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -399,7 +399,7 @@ get on the terminal - we are working on that)::
|
||||||
failure_demo.py:148: TypeError
|
failure_demo.py:148: TypeError
|
||||||
--------------------------- Captured stdout call ---------------------------
|
--------------------------- Captured stdout call ---------------------------
|
||||||
l is [1, 2, 3]
|
l is [1, 2, 3]
|
||||||
_______ TestRaises.test_some_error ________
|
________________________ TestRaises.test_some_error ________________________
|
||||||
|
|
||||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -408,7 +408,7 @@ get on the terminal - we are working on that)::
|
||||||
E NameError: name 'namenotexi' is not defined
|
E NameError: name 'namenotexi' is not defined
|
||||||
|
|
||||||
failure_demo.py:151: NameError
|
failure_demo.py:151: NameError
|
||||||
_______ test_dynamic_compile_shows_nicely ________
|
____________________ test_dynamic_compile_shows_nicely _____________________
|
||||||
|
|
||||||
def test_dynamic_compile_shows_nicely():
|
def test_dynamic_compile_shows_nicely():
|
||||||
src = 'def foo():\n assert 1 == 0\n'
|
src = 'def foo():\n assert 1 == 0\n'
|
||||||
|
@ -427,7 +427,7 @@ get on the terminal - we are working on that)::
|
||||||
E AssertionError
|
E AssertionError
|
||||||
|
|
||||||
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:163>:2: AssertionError
|
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:163>:2: AssertionError
|
||||||
_______ TestMoreErrors.test_complex_error ________
|
____________________ TestMoreErrors.test_complex_error _____________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -451,7 +451,7 @@ get on the terminal - we are working on that)::
|
||||||
E assert 44 == 43
|
E assert 44 == 43
|
||||||
|
|
||||||
failure_demo.py:6: AssertionError
|
failure_demo.py:6: AssertionError
|
||||||
_______ TestMoreErrors.test_z1_unpack_error ________
|
___________________ TestMoreErrors.test_z1_unpack_error ____________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -461,7 +461,7 @@ get on the terminal - we are working on that)::
|
||||||
E ValueError: not enough values to unpack (expected 2, got 0)
|
E ValueError: not enough values to unpack (expected 2, got 0)
|
||||||
|
|
||||||
failure_demo.py:180: ValueError
|
failure_demo.py:180: ValueError
|
||||||
_______ TestMoreErrors.test_z2_type_error ________
|
____________________ TestMoreErrors.test_z2_type_error _____________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -471,7 +471,7 @@ get on the terminal - we are working on that)::
|
||||||
E TypeError: 'int' object is not iterable
|
E TypeError: 'int' object is not iterable
|
||||||
|
|
||||||
failure_demo.py:184: TypeError
|
failure_demo.py:184: TypeError
|
||||||
_______ TestMoreErrors.test_startswith ________
|
______________________ TestMoreErrors.test_startswith ______________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -484,7 +484,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
|
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
|
||||||
|
|
||||||
failure_demo.py:189: AssertionError
|
failure_demo.py:189: AssertionError
|
||||||
_______ TestMoreErrors.test_startswith_nested ________
|
__________________ TestMoreErrors.test_startswith_nested ___________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -501,7 +501,7 @@ get on the terminal - we are working on that)::
|
||||||
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
|
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
|
||||||
|
|
||||||
failure_demo.py:196: AssertionError
|
failure_demo.py:196: AssertionError
|
||||||
_______ TestMoreErrors.test_global_func ________
|
_____________________ TestMoreErrors.test_global_func ______________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -512,7 +512,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 43 = globf(42)
|
E + where 43 = globf(42)
|
||||||
|
|
||||||
failure_demo.py:199: AssertionError
|
failure_demo.py:199: AssertionError
|
||||||
_______ TestMoreErrors.test_instance ________
|
_______________________ TestMoreErrors.test_instance _______________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -523,7 +523,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
|
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
|
||||||
|
|
||||||
failure_demo.py:203: AssertionError
|
failure_demo.py:203: AssertionError
|
||||||
_______ TestMoreErrors.test_compare ________
|
_______________________ TestMoreErrors.test_compare ________________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -533,7 +533,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 11 = globf(10)
|
E + where 11 = globf(10)
|
||||||
|
|
||||||
failure_demo.py:206: AssertionError
|
failure_demo.py:206: AssertionError
|
||||||
_______ TestMoreErrors.test_try_finally ________
|
_____________________ TestMoreErrors.test_try_finally ______________________
|
||||||
|
|
||||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -544,7 +544,7 @@ get on the terminal - we are working on that)::
|
||||||
E assert 1 == 0
|
E assert 1 == 0
|
||||||
|
|
||||||
failure_demo.py:211: AssertionError
|
failure_demo.py:211: AssertionError
|
||||||
_______ TestCustomAssertMsg.test_single_line ________
|
___________________ TestCustomAssertMsg.test_single_line ___________________
|
||||||
|
|
||||||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -558,7 +558,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
|
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
|
||||||
|
|
||||||
failure_demo.py:222: AssertionError
|
failure_demo.py:222: AssertionError
|
||||||
_______ TestCustomAssertMsg.test_multiline ________
|
____________________ TestCustomAssertMsg.test_multiline ____________________
|
||||||
|
|
||||||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -575,7 +575,7 @@ get on the terminal - we are working on that)::
|
||||||
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
|
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
|
||||||
|
|
||||||
failure_demo.py:228: AssertionError
|
failure_demo.py:228: AssertionError
|
||||||
_______ TestCustomAssertMsg.test_custom_repr ________
|
___________________ TestCustomAssertMsg.test_custom_repr ___________________
|
||||||
|
|
||||||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||||
|
|
||||||
|
@ -595,4 +595,10 @@ get on the terminal - we are working on that)::
|
||||||
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
|
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
|
||||||
|
|
||||||
failure_demo.py:238: AssertionError
|
failure_demo.py:238: AssertionError
|
||||||
======= 42 failed in 0.12 seconds ========
|
============================= warnings summary =============================
|
||||||
|
None
|
||||||
|
Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
|
||||||
|
Please use Metafunc.parametrize instead.
|
||||||
|
|
||||||
|
-- Docs: http://doc.pytest.org/en/latest/warnings.html
|
||||||
|
================== 42 failed, 1 warnings in 0.12 seconds ===================
|
||||||
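
The ``failure_demo.py`` module whose reports change above contains tests such as the following (an excerpt sketched from the tracebacks shown; not part of this diff)::

    # sketch of two failure_demo.py cases, inferred from the tracebacks above
    def test_generative(param1, param2):
        assert param1 * 2 < param2  # fails: assert (3 * 2) < 6


    class TestFailing(object):
        def test_simple(self):
            def f():
                return 42

            def g():
                return 43

            assert f() == g()  # fails: 42 == 43
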
@@ -41,9 +41,9 @@ provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`:
 Let's run this without supplying our new option::

     $ pytest -q test_sample.py
-    F
-    ======= FAILURES ========
-    _______ test_answer ________
+    F [100%]
+    ================================= FAILURES =================================
+    _______________________________ test_answer ________________________________

     cmdopt = 'type1'

@@ -63,9 +63,9 @@ Let's run this without supplying our new option::
 And now with supplying a command line option::

     $ pytest -q --cmdopt=type2
-    F
-    ======= FAILURES ========
-    _______ test_answer ________
+    F [100%]
+    ================================= FAILURES =================================
+    _______________________________ test_answer ________________________________

     cmdopt = 'type2'

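
For context, the ``conftest.py`` that provides ``cmdopt`` is, per the pytest docs, roughly (a sketch; not part of this diff)::

    # content of conftest.py (sketch)
    import pytest


    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
                         help="my option: type1 or type2")


    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")

The ``test_answer`` test above then receives ``cmdopt`` as a normal fixture argument.
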
@@ -112,12 +112,12 @@ of subprocesses close to your CPU. Running in an empty
 directory with the above conftest.py::

     $ pytest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 0 items

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 .. _`excontrolskip`:

@@ -166,28 +166,28 @@ We can now write a test module like this:
 and when running it will see a skipped "slow" test::

     $ pytest -rs # "-rs" means report details on the little 's'
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    test_module.py .s
-    ======= short test summary info ========
+    test_module.py .s [100%]
+    ========================= short test summary info ==========================
     SKIP [1] test_module.py:8: need --runslow option to run

-    ======= 1 passed, 1 skipped in 0.12 seconds ========
+    =================== 1 passed, 1 skipped in 0.12 seconds ====================

 Or run it including the ``slow`` marked test::

     $ pytest --runslow
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    test_module.py ..
+    test_module.py .. [100%]

-    ======= 2 passed in 0.12 seconds ========
+    ========================= 2 passed in 0.12 seconds =========================

 Writing well integrated assertion helpers
 --------------------------------------------------
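
The "need --runslow option to run" skip shown above is produced by a ``conftest.py`` along these lines (a sketch of the documented pattern; not part of this diff)::

    # content of conftest.py (sketch)
    import pytest


    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true", default=False,
                         help="run slow tests")


    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            # --runslow given on the command line: do not skip slow tests
            return
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

Marking a test with ``@pytest.mark.slow`` then opts it into this skip logic.
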
@@ -218,9 +218,9 @@ unless the ``--full-trace`` command line option is specified.
 Let's run our little function::

     $ pytest -q test_checkconfig.py
-    F
-    ======= FAILURES ========
-    _______ test_something ________
+    F [100%]
+    ================================= FAILURES =================================
+    ______________________________ test_something ______________________________

     def test_something():
     > checkconfig(42)
@@ -305,13 +305,13 @@ It's easy to present extra information in a ``pytest`` run:
 which will add the string to the test header accordingly::

     $ pytest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     project deps: mylib-1.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 0 items

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 .. regendoc:wipe

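
The extra ``project deps: mylib-1.1`` header line above comes from a ``pytest_report_header`` hook, roughly (a sketch; not part of this diff)::

    # content of conftest.py (sketch)
    def pytest_report_header(config):
        return "project deps: mylib-1.1"
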
@@ -330,7 +330,7 @@ display more information if applicable:
 which will add info only when run with "--v"::

     $ pytest -v
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     info1: did you know that ...
@@ -338,17 +338,17 @@ which will add info only when run with "--v"::
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 0 items

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 and nothing when run plainly::

     $ pytest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 0 items

-    ======= no tests ran in 0.12 seconds ========
+    ======================= no tests ran in 0.12 seconds =======================

 profiling test duration
 --------------------------
@@ -377,18 +377,18 @@ out which tests are the slowest. Let's make an artificial test suite:
 Now we can profile which test functions execute the slowest::

     $ pytest --durations=3
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 3 items

-    test_some_are_slow.py ...
+    test_some_are_slow.py ... [100%]

-    ======= slowest 3 test durations ========
+    ========================= slowest 3 test durations =========================
     0.30s call test_some_are_slow.py::test_funcslow2
     0.20s call test_some_are_slow.py::test_funcslow1
     0.10s call test_some_are_slow.py::test_funcfast
-    ======= 3 passed in 0.12 seconds ========
+    ========================= 3 passed in 0.12 seconds =========================

 incremental testing - test steps
 ---------------------------------------------------
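
For context, the artificial suite profiled by ``--durations=3`` in the hunk above is, in the docs, just three sleeping tests (a sketch; not part of this diff)::

    # content of test_some_are_slow.py (sketch)
    import time


    def test_funcfast():
        time.sleep(0.1)


    def test_funcslow1():
        time.sleep(0.2)


    def test_funcslow2():
        time.sleep(0.3)
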
@@ -443,18 +443,18 @@ tests in a class. Here is a test module example:
 If we run this::

     $ pytest -rx
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_step.py .Fx.
-    ======= short test summary info ========
+    test_step.py .Fx. [100%]
+    ========================= short test summary info ==========================
     XFAIL test_step.py::TestUserHandling::()::test_deletion
     reason: previous test failed (test_modification)

-    ======= FAILURES ========
-    _______ TestUserHandling.test_modification ________
+    ================================= FAILURES =================================
+    ____________________ TestUserHandling.test_modification ____________________

     self = <test_step.TestUserHandling object at 0xdeadbeef>

@@ -463,7 +463,7 @@ If we run this::
     E assert 0

     test_step.py:9: AssertionError
-    ======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ========
+    ============== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ===============

 We'll see that ``test_deletion`` was not executed because ``test_modification``
 failed. It is reported as an "expected failure".
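
The ``xfail`` of ``test_deletion`` above is driven by the documented "incremental" hooks in ``conftest.py``, approximately (a sketch; not part of this diff)::

    # content of conftest.py (sketch)
    import pytest


    def pytest_runtest_makereport(item, call):
        # remember the first failing test of an "incremental" class
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                item.parent._previousfailed = item


    def pytest_runtest_setup(item):
        # xfail later steps once an earlier step has failed
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)
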
@@ -522,27 +522,27 @@ the ``db`` fixture:
 We can run this::

     $ pytest
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 7 items

-    test_step.py .Fx.
-    a/test_db.py F
-    a/test_db2.py F
-    b/test_error.py E
+    test_step.py .Fx. [ 57%]
+    a/test_db.py F [ 71%]
+    a/test_db2.py F [ 85%]
+    b/test_error.py E [100%]

-    ======= ERRORS ========
-    _______ ERROR at setup of test_root ________
+    ================================== ERRORS ==================================
+    _______________________ ERROR at setup of test_root ________________________
     file $REGENDOC_TMPDIR/b/test_error.py, line 1
     def test_root(db): # no db here, will error out
     E fixture 'db' not found
-    > available fixtures: cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
+    > available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
     > use 'pytest --fixtures [testpath]' for help on them.

     $REGENDOC_TMPDIR/b/test_error.py:1
-    ======= FAILURES ========
-    _______ TestUserHandling.test_modification ________
+    ================================= FAILURES =================================
+    ____________________ TestUserHandling.test_modification ____________________

     self = <test_step.TestUserHandling object at 0xdeadbeef>

@@ -551,7 +551,7 @@ We can run this::
     E assert 0

     test_step.py:9: AssertionError
-    _______ test_a1 ________
+    _________________________________ test_a1 __________________________________

     db = <conftest.DB object at 0xdeadbeef>

@@ -561,7 +561,7 @@ We can run this::
     E assert 0

     a/test_db.py:2: AssertionError
-    _______ test_a2 ________
+    _________________________________ test_a2 __________________________________

     db = <conftest.DB object at 0xdeadbeef>

@@ -571,7 +571,7 @@ We can run this::
     E assert 0

     a/test_db2.py:2: AssertionError
-    ======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ========
+    ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========

 The two test modules in the ``a`` directory see the same ``db`` fixture instance
 while the one test in the sister-directory ``b`` doesn't see it. We could of course
@@ -630,15 +630,15 @@ if you then have failing tests:
 and run them::

     $ pytest test_module.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    test_module.py FF
+    test_module.py FF [100%]

-    ======= FAILURES ========
-    _______ test_fail1 ________
+    ================================= FAILURES =================================
+    ________________________________ test_fail1 ________________________________

     tmpdir = local('PYTEST_TMPDIR/test_fail10')

@@ -647,14 +647,14 @@ and run them::
     E assert 0

     test_module.py:2: AssertionError
-    _______ test_fail2 ________
+    ________________________________ test_fail2 ________________________________

     def test_fail2():
     > assert 0
     E assert 0

     test_module.py:4: AssertionError
-    ======= 2 failed in 0.12 seconds ========
+    ========================= 2 failed in 0.12 seconds =========================

 you will have a "failures" file which contains the failing test ids::

@@ -724,17 +724,17 @@ if you then have failing tests:
 and run it::

     $ pytest -s test_module.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 3 items

     test_module.py Esetting up a test failed! test_module.py::test_setup_fails
     Fexecuting test failed test_module.py::test_call_fails
-    F
+    F [100%]

-    ======= ERRORS ========
-    _______ ERROR at setup of test_setup_fails ________
+    ================================== ERRORS ==================================
+    ____________________ ERROR at setup of test_setup_fails ____________________

     @pytest.fixture
     def other():
@@ -742,8 +742,8 @@ and run it::
     E assert 0

     test_module.py:6: AssertionError
-    ======= FAILURES ========
-    _______ test_call_fails ________
+    ================================= FAILURES =================================
+    _____________________________ test_call_fails ______________________________

     something = None

@@ -752,14 +752,14 @@ and run it::
     E assert 0

     test_module.py:12: AssertionError
-    _______ test_fail2 ________
+    ________________________________ test_fail2 ________________________________

     def test_fail2():
     > assert 0
     E assert 0

     test_module.py:15: AssertionError
-    ======= 2 failed, 1 error in 0.12 seconds ========
+    ==================== 2 failed, 1 error in 0.12 seconds =====================

 You'll see that the fixture finalizers could use the precise reporting
 information.
@@ -68,5 +68,5 @@ If you run this without output capturing::
     .test_method1 called
     .test other
     .test_unit1 method called
-    .
+    . [100%]
     4 passed in 0.12 seconds
@@ -69,15 +69,15 @@ will discover and call the :py:func:`@pytest.fixture <_pytest.python.fixture>`
 marked ``smtp`` fixture function. Running the test looks like this::

     $ pytest test_smtpsimple.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 item

-    test_smtpsimple.py F
+    test_smtpsimple.py F [100%]

-    ======= FAILURES ========
-    _______ test_ehlo ________
+    ================================= FAILURES =================================
+    ________________________________ test_ehlo _________________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

@@ -88,7 +88,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
     E assert 0

     test_smtpsimple.py:11: AssertionError
-    ======= 1 failed in 0.12 seconds ========
+    ========================= 1 failed in 0.12 seconds =========================

 In the failure traceback we see that the test function was called with a
 ``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
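
For context, ``test_smtpsimple.py`` is roughly the following (a sketch; the trailing ``assert 0`` is deliberate, to produce the traceback shown; not part of this diff)::

    # content of test_smtpsimple.py (sketch)
    import smtplib

    import pytest


    @pytest.fixture
    def smtp():
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)


    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert 0  # for demo purposes
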
@@ -205,15 +205,15 @@ We deliberately insert failing ``assert 0`` statements in order to
 inspect what is going on and can now run the tests::

     $ pytest test_module.py
-    ======= test session starts ========
+    =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    test_module.py FF
+    test_module.py FF [100%]

-    ======= FAILURES ========
-    _______ test_ehlo ________
+    ================================= FAILURES =================================
+    ________________________________ test_ehlo _________________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

@@ -225,7 +225,7 @@ inspect what is going on and can now run the tests::
     E assert 0

     test_module.py:6: AssertionError
-    _______ test_noop ________
+    ________________________________ test_noop _________________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

@@ -236,7 +236,7 @@ inspect what is going on and can now run the tests::
     E assert 0

     test_module.py:11: AssertionError
-    ======= 2 failed in 0.12 seconds ========
+    ========================= 2 failed in 0.12 seconds =========================

 You see the two ``assert 0`` failing and more importantly you can also see
 that the same (module-scoped) ``smtp`` object was passed into the two
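
The module scope mentioned above comes from declaring the fixture with ``scope="module"``, roughly (a sketch; not part of this diff)::

    # content of conftest.py (sketch)
    import smtplib

    import pytest


    @pytest.fixture(scope="module")
    def smtp():
        # one SMTP connection shared by all tests in a module
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
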
@@ -286,7 +286,7 @@ tests.
 Let's execute it::

     $ pytest -s -q --tb=no
-    FFteardown smtp
+    FF [100%]teardown smtp

     2 failed in 0.12 seconds

@@ -391,7 +391,7 @@ We use the ``request.module`` attribute to optionally obtain an
 again, nothing much has changed::

     $ pytest -s -q --tb=no
-    FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
+    FF [100%]finalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)

     2 failed in 0.12 seconds

@@ -408,9 +408,9 @@ server URL in its module namespace::
 Running it::

     $ pytest -qq --tb=short test_anothersmtp.py
-    F
-    ======= FAILURES ========
-    _______ test_showhelo ________
+    F [100%]
+    ================================= FAILURES =================================
+    ______________________________ test_showhelo _______________________________
     test_anothersmtp.py:5: in test_showhelo
     assert 0, smtp.helo()
     E AssertionError: (250, b'mail.python.org')
@@ -457,9 +457,9 @@ a value via ``request.param``. No test function code needs to change.
 So let's just do another run::

     $ pytest -q test_module.py
-    FFFF
-    ======= FAILURES ========
-    _______ test_ehlo[smtp.gmail.com] ________
+    FFFF [100%]
+    ================================= FAILURES =================================
+    ________________________ test_ehlo[smtp.gmail.com] _________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

@@ -471,7 +471,7 @@ So let's just do another run::
     E assert 0

     test_module.py:6: AssertionError
-    _______ test_noop[smtp.gmail.com] ________
+    ________________________ test_noop[smtp.gmail.com] _________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

@@ -482,7 +482,7 @@ So let's just do another run::
     E assert 0

     test_module.py:11: AssertionError
-    _______ test_ehlo[mail.python.org] ________
+    ________________________ test_ehlo[mail.python.org] ________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

@@ -495,7 +495,7 @@ So let's just do another run::
     test_module.py:5: AssertionError
     -------------------------- Captured stdout setup ---------------------------
     finalizing <smtplib.SMTP object at 0xdeadbeef>
-    _______ test_noop[mail.python.org] ________
+    ________________________ test_noop[mail.python.org] ________________________

     smtp = <smtplib.SMTP object at 0xdeadbeef>

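
The four tests above come from parametrizing the fixture itself; a sketch of that fixture follows (the "finalizing" print matches the captured stdout above, but details such as the yield-style teardown are assumptions; not part of this diff)::

    # content of conftest.py (sketch)
    import smtplib

    import pytest


    @pytest.fixture(scope="module",
                    params=["smtp.gmail.com", "mail.python.org"])
    def smtp(request):
        # each param produces a separate fixture instance and test run
        smtp = smtplib.SMTP(request.param, 587, timeout=5)
        yield smtp
        print("finalizing %s" % smtp)
        smtp.close()
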
|
@ -559,7 +559,7 @@ return ``None`` then pytest's auto-generated ID will be used.
|
||||||
Running the above tests results in the following test IDs being used::
|
Running the above tests results in the following test IDs being used::
|
||||||
|
|
||||||
$ pytest --collect-only
|
$ pytest --collect-only
|
||||||
======= test session starts ========
|
=========================== test session starts ============================
|
||||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
||||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||||
collected 10 items
|
collected 10 items
|
||||||
|
@ -577,7 +577,7 @@ Running the above tests results in the following test IDs being used::
|
||||||
<Function 'test_ehlo[mail.python.org]'>
|
<Function 'test_ehlo[mail.python.org]'>
|
||||||
<Function 'test_noop[mail.python.org]'>
|
<Function 'test_noop[mail.python.org]'>
|
||||||
|
|
||||||
======= no tests ran in 0.12 seconds ========
|
======================= no tests ran in 0.12 seconds =======================
|
||||||
|
|
||||||
.. _`interdependent fixtures`:
|
.. _`interdependent fixtures`:
|
||||||
|
|
||||||
|
@ -610,16 +610,16 @@ Here we declare an ``app`` fixture which receives the previously defined
|
||||||
``smtp`` fixture and instantiates an ``App`` object with it. Let's run it::
|
``smtp`` fixture and instantiates an ``App`` object with it. Let's run it::
|
||||||
|
|
||||||
$ pytest -v test_appsetup.py
|
$ pytest -v test_appsetup.py
|
||||||
======= test session starts ========
|
=========================== test session starts ============================
|
||||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||||
cachedir: .cache
|
cachedir: .cache
|
||||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||||
collecting ... collected 2 items
|
collecting ... collected 2 items
|
||||||
|
|
||||||
test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED
|
test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED [ 50%]
|
||||||
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED
|
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED [100%]
|
||||||
|
|
||||||
======= 2 passed in 0.12 seconds ========
|
========================= 2 passed in 0.12 seconds =========================
|
||||||
|
|
||||||
Due to the parametrization of ``smtp`` the test will run twice with two
|
Due to the parametrization of ``smtp`` the test will run twice with two
|
||||||
different ``App`` instances and respective smtp servers. There is no
|
different ``App`` instances and respective smtp servers. There is no
|
||||||
|
@ -679,7 +679,7 @@ to show the setup/teardown flow::
|
||||||
Let's run the tests in verbose mode and with looking at the print-output::
|
Let's run the tests in verbose mode and with looking at the print-output::
|
||||||
|
|
||||||
$ pytest -v -s test_module.py
|
$ pytest -v -s test_module.py
|
||||||
======= test session starts ========
|
=========================== test session starts ============================
|
||||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||||
cachedir: .cache
|
cachedir: .cache
|
||||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||||
|
@ -687,38 +687,38 @@ Let's run the tests in verbose mode and with looking at the print-output::
|
||||||
|
|
||||||
test_module.py::test_0[1] SETUP otherarg 1
|
test_module.py::test_0[1] SETUP otherarg 1
|
||||||
RUN test0 with otherarg 1
|
RUN test0 with otherarg 1
|
||||||
PASSED TEARDOWN otherarg 1
|
PASSED [ 12%] TEARDOWN otherarg 1
|
||||||
|
|
||||||
test_module.py::test_0[2] SETUP otherarg 2
|
test_module.py::test_0[2] SETUP otherarg 2
|
||||||
RUN test0 with otherarg 2
|
RUN test0 with otherarg 2
|
||||||
PASSED TEARDOWN otherarg 2
|
PASSED [ 25%] TEARDOWN otherarg 2
|
||||||
|
|
||||||
test_module.py::test_1[mod1] SETUP modarg mod1
|
test_module.py::test_1[mod1] SETUP modarg mod1
|
||||||
RUN test1 with modarg mod1
|
RUN test1 with modarg mod1
|
||||||
PASSED
|
PASSED [ 37%]
|
||||||
test_module.py::test_2[1-mod1] SETUP otherarg 1
|
test_module.py::test_2[1-mod1] SETUP otherarg 1
|
||||||
RUN test2 with otherarg 1 and modarg mod1
|
RUN test2 with otherarg 1 and modarg mod1
|
||||||
PASSED TEARDOWN otherarg 1
|
PASSED [ 50%] TEARDOWN otherarg 1
|
||||||
|
|
||||||
test_module.py::test_2[2-mod1] SETUP otherarg 2
|
test_module.py::test_2[2-mod1] SETUP otherarg 2
|
||||||
RUN test2 with otherarg 2 and modarg mod1
|
RUN test2 with otherarg 2 and modarg mod1
|
||||||
PASSED TEARDOWN otherarg 2
|
PASSED [ 62%] TEARDOWN otherarg 2
|
||||||
|
|
||||||
test_module.py::test_1[mod2] TEARDOWN modarg mod1
|
test_module.py::test_1[mod2] TEARDOWN modarg mod1
|
||||||
SETUP modarg mod2
|
SETUP modarg mod2
|
||||||
RUN test1 with modarg mod2
|
RUN test1 with modarg mod2
|
||||||
PASSED
|
PASSED [ 75%]
|
||||||
test_module.py::test_2[1-mod2] SETUP otherarg 1
|
test_module.py::test_2[1-mod2] SETUP otherarg 1
|
||||||
RUN test2 with otherarg 1 and modarg mod2
|
RUN test2 with otherarg 1 and modarg mod2
|
||||||
PASSED TEARDOWN otherarg 1
|
PASSED [ 87%] TEARDOWN otherarg 1
|
||||||
|
|
||||||
test_module.py::test_2[2-mod2] SETUP otherarg 2
|
test_module.py::test_2[2-mod2] SETUP otherarg 2
|
||||||
RUN test2 with otherarg 2 and modarg mod2
|
RUN test2 with otherarg 2 and modarg mod2
|
||||||
PASSED TEARDOWN otherarg 2
|
PASSED [100%] TEARDOWN otherarg 2
|
||||||
TEARDOWN modarg mod2
|
TEARDOWN modarg mod2
|
||||||
|
|
||||||
|
|
||||||
======= 8 passed in 0.12 seconds ========
|
========================= 8 passed in 0.12 seconds =========================
|
||||||
|
|
||||||
You can see that the parametrized module-scoped ``modarg`` resource caused an
ordering of test execution that led to the fewest possible "active" resources.
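
The module that produces the session above is not part of this hunk; a sketch consistent with the printed SETUP/TEARDOWN and RUN lines (fixture and test names taken from the output) would be::

    import pytest

    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        print("  SETUP modarg %s" % param)
        yield param
        print("  TEARDOWN modarg %s" % param)

    @pytest.fixture(scope="function", params=[1, 2])
    def otherarg(request):
        param = request.param
        print("  SETUP otherarg %s" % param)
        yield param
        print("  TEARDOWN otherarg %s" % param)

    def test_0(otherarg):
        print("  RUN test0 with otherarg %s" % otherarg)

    def test_1(modarg):
        print("  RUN test1 with modarg %s" % modarg)

    def test_2(otherarg, modarg):
        print("  RUN test2 with otherarg %s and modarg %s" % (otherarg, modarg))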

@@ -781,7 +781,7 @@ you specified a "cleandir" function argument to each of them. Let's run it
to verify our fixture is activated and the tests pass::

$ pytest -q
-..
+.. [100%]
2 passed in 0.12 seconds

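A minimal ``cleandir`` fixture matching this run might look like the following (a sketch; the fixture body and the ``test_cwd_starts_empty`` test are illustrative, not part of this hunk)::

    import os
    import tempfile

    import pytest

    @pytest.fixture()
    def cleandir():
        # each test using this fixture starts in a fresh, empty directory
        newpath = tempfile.mkdtemp()
        os.chdir(newpath)

    @pytest.mark.usefixtures("cleandir")
    def test_cwd_starts_empty():
        assert os.listdir(os.getcwd()) == []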
You can specify multiple fixtures like this:

@@ -862,7 +862,7 @@ class-level ``usefixtures`` decorator.
If we run it, we get two passing tests::

$ pytest -q
-..
+.. [100%]
2 passed in 0.12 seconds

Here is how autouse fixtures work in other scopes:

@@ -44,23 +44,23 @@ Let's create a first test file with a simple test function::
That's it. You can execute the test function now::

$ pytest
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

-test_sample.py F
+test_sample.py F [100%]

-======= FAILURES ========
-_______ test_answer ________
+================================= FAILURES =================================
+_______________________________ test_answer ________________________________

def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)

test_sample.py:5: AssertionError
-======= 1 failed in 0.12 seconds ========
+========================= 1 failed in 0.12 seconds =========================

We got a failure report because our little ``func(3)`` call did not return ``5``.

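The module being run here is the usual getting-started sample; a sketch consistent with the traceback above::

    # content of test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5  # fails: func(3) == 4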
@@ -99,7 +99,7 @@ use the ``raises`` helper::
Running it, this time in "quiet" reporting mode::

$ pytest -q test_sysexit.py
-.
+. [100%]
1 passed in 0.12 seconds

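The ``raises`` example exercised by this quiet run is along these lines (a sketch consistent with the output; the function names are the ones the docs use)::

    # content of test_sysexit.py
    import pytest

    def f():
        raise SystemExit

    def test_mytest():
        with pytest.raises(SystemExit):
            f()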
Grouping multiple tests in a class

@@ -124,18 +124,18 @@ There is no need to subclass anything. We can simply
run the module by passing its filename::

$ pytest -q test_class.py
-.F
-======= FAILURES ========
-_______ TestClass.test_two ________
+.F [100%]
+================================= FAILURES =================================
+____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass object at 0xdeadbeef>

def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E AssertionError: assert False
E + where False = hasattr('hello', 'check')

test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds

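The class being collected looks roughly like this (a sketch matching the ``TestClass.test_two`` traceback above)::

    # content of test_class.py
    class TestClass(object):
        def test_one(self):
            x = "this"
            assert 'h' in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, 'check')  # fails: str has no 'check' attribute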
@@ -161,17 +161,17 @@ We list the name ``tmpdir`` in the test function signature and
before performing the test function call. Let's just run it::

$ pytest -q test_tmpdir.py
-F
-======= FAILURES ========
-_______ test_needsfiles ________
+F [100%]
+================================= FAILURES =================================
+_____________________________ test_needsfiles ______________________________

tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')

def test_needsfiles(tmpdir):
print (tmpdir)
> assert 0
E assert 0

test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
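
The test producing this output is tiny; a sketch consistent with the captured stdout::

    # content of test_tmpdir.py
    def test_needsfiles(tmpdir):
        print(tmpdir)  # pytest injects a unique temporary directory
        assert 0       # deliberate failure so the captured output is shown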

@@ -24,23 +24,23 @@ An example of a simple test:
To execute it::

$ pytest
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

-test_sample.py F
+test_sample.py F [100%]

-======= FAILURES ========
-_______ test_answer ________
+================================= FAILURES =================================
+_______________________________ test_answer ________________________________

def test_answer():
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = inc(3)

test_sample.py:5: AssertionError
-======= 1 failed in 0.12 seconds ========
+========================= 1 failed in 0.12 seconds =========================

Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.
See :ref:`Getting Started <getstarted>` for more examples.
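
For completeness, the front-page sample behind this report (a sketch; only ``inc`` and ``test_answer`` appear in the traceback)::

    # content of test_sample.py
    def inc(x):
        return x + 1

    def test_answer():
        assert inc(3) == 5  # fails: inc(3) == 4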

@@ -53,15 +53,15 @@ tuples so that the ``test_eval`` function will run three times using
them in turn::

$ pytest
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

-test_expectation.py ..F
+test_expectation.py ..F [100%]

-======= FAILURES ========
-_______ test_eval[6*9-42] ________
+================================= FAILURES =================================
+____________________________ test_eval[6*9-42] _____________________________

test_input = '6*9', expected = 42

@@ -76,7 +76,7 @@ them in turn::
E + where 54 = eval('6*9')

test_expectation.py:8: AssertionError
-======= 1 failed, 2 passed in 0.12 seconds ========
+==================== 1 failed, 2 passed in 0.12 seconds ====================

As designed in this example, only one pair of input/output values fails
the simple test function. And as usual with test function arguments,
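
The parametrized test driving this session is along these lines (a sketch consistent with the ``test_eval[6*9-42]`` failure above)::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected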
@@ -102,14 +102,14 @@ for example with the builtin ``mark.xfail``::
Let's run this::

$ pytest
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

-test_expectation.py ..x
+test_expectation.py ..x [100%]

-======= 2 passed, 1 xfailed in 0.12 seconds ========
+=================== 2 passed, 1 xfailed in 0.12 seconds ====================

The one parameter set which caused a failure previously now
shows up as an "xfailed (expected to fail)" test.
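
Marking the failing parameter set as an expected failure might look like this (a sketch; ``pytest.param`` with ``marks`` is one valid spelling in pytest 3.1 and later)::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        pytest.param("6*9", 42, marks=pytest.mark.xfail),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected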
@@ -165,15 +165,15 @@ command line option and the parametrization of our test function::
If we now pass two stringinput values, our test will run twice::

$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py
-..
+.. [100%]
2 passed in 0.12 seconds

Let's also run with a stringinput that will lead to a failing test::

$ pytest -q --stringinput="!" test_strings.py
-F
-======= FAILURES ========
-_______ test_valid_string[!] ________
+F [100%]
+================================= FAILURES =================================
+___________________________ test_valid_string[!] ___________________________

stringinput = '!'

@@ -193,9 +193,9 @@ If you don't specify a stringinput it will be skipped because
list::

$ pytest -q -rs test_strings.py
-s
-======= short test summary info ========
-SKIP [1] test_strings.py:2: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
+s [100%]
+========================= short test summary info ==========================
+SKIP [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
1 skipped in 0.12 seconds

Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
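
The ``--stringinput`` machinery behind these runs is a command line option plus a ``pytest_generate_tests`` hook; a sketch consistent with the skip message above (names match the output, but the exact files are not part of this hunk)::

    # content of conftest.py
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.getoption("stringinput"))

    # content of test_strings.py
    def test_valid_string(stringinput):
        assert stringinput.isalpha()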

@@ -331,13 +331,13 @@ Here is a simple test file with the several usages:
Running it with the report-on-xfail option gives this output::

example $ pytest -rx xfail_demo.py
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items

-xfail_demo.py xxxxxxx
-======= short test summary info ========
+xfail_demo.py xxxxxxx [100%]
+========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]

@@ -351,7 +351,7 @@ Running it with the report-on-xfail option gives this output::
reason: reason
XFAIL xfail_demo.py::test_hello7

-======= 7 xfailed in 0.12 seconds ========
+======================== 7 xfailed in 0.12 seconds =========================

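The seven xfailing tests in ``xfail_demo.py`` exercise different ``xfail`` variants; an abbreviated sketch showing three of them (the full file is not part of this hunk)::

    import pytest
    xfail = pytest.mark.xfail

    @xfail
    def test_hello():
        assert 0

    @xfail(run=False)        # reported with "reason: [NOTRUN]"
    def test_hello2():
        assert 0

    @xfail(reason="reason")
    def test_hello6():
        assert 0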
.. _`skip/xfail with parametrize`:


@@ -28,15 +28,15 @@ Running this would result in a passed test except for the last
``assert 0`` line which we use to look at values::

$ pytest test_tmpdir.py
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

-test_tmpdir.py F
+test_tmpdir.py F [100%]

-======= FAILURES ========
-_______ test_create_file ________
+================================= FAILURES =================================
+_____________________________ test_create_file _____________________________

tmpdir = local('PYTEST_TMPDIR/test_create_file0')

@@ -49,7 +49,7 @@ Running this would result in a passed test except for the last
E assert 0

test_tmpdir.py:7: AssertionError
-======= 1 failed in 0.12 seconds ========
+========================= 1 failed in 0.12 seconds =========================

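The ``tmpdir`` test shown failing above is roughly the following (a sketch; the final ``assert 0`` exists only to display the values in the traceback)::

    # content of test_tmpdir.py
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0  # deliberate failure to show the traceback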
The 'tmpdir_factory' fixture
----------------------------

@@ -126,15 +126,15 @@ Due to the deliberately failing assert statements, we can take a look at
the ``self.db`` values in the traceback::

$ pytest test_unittest_db.py
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

-test_unittest_db.py FF
+test_unittest_db.py FF [100%]

-======= FAILURES ========
-_______ MyTest.test_method1 ________
+================================= FAILURES =================================
+___________________________ MyTest.test_method1 ____________________________

self = <test_unittest_db.MyTest testMethod=test_method1>

@@ -145,7 +145,7 @@ the ``self.db`` values in the traceback::
E assert 0

test_unittest_db.py:9: AssertionError
-_______ MyTest.test_method2 ________
+___________________________ MyTest.test_method2 ____________________________

self = <test_unittest_db.MyTest testMethod=test_method2>

@@ -155,7 +155,7 @@ the ``self.db`` values in the traceback::
E assert 0

test_unittest_db.py:12: AssertionError
-======= 2 failed in 0.12 seconds ========
+========================= 2 failed in 0.12 seconds =========================

This default pytest traceback shows that the two test methods
share the same ``self.db`` instance which was our intention
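
A class-scoped fixture attached to the ``unittest.TestCase`` subclass produces exactly this sharing; a sketch consistent with the tracebacks above (``DummyDB`` is illustrative)::

    # content of conftest.py
    import pytest

    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB(object):
            pass
        # attach one shared instance to the class of the requesting tests
        request.cls.db = DummyDB()

    # content of test_unittest_db.py
    import unittest
    import pytest

    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")
            assert 0, self.db  # fail for demo purposes

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes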
@@ -203,7 +203,7 @@ on the class like in the previous example.
Running this test module ...::

$ pytest -q test_unittest_cleandir.py
-.
+. [100%]
1 passed in 0.12 seconds

... gives us one passed test because the ``initdir`` fixture function
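
An autouse fixture defined on the ``TestCase`` itself can do this per-test setup; a sketch of the ``initdir`` idea (file contents assumed, not part of this hunk)::

    # content of test_unittest_cleandir.py
    import unittest
    import pytest

    class MyTest(unittest.TestCase):
        @pytest.fixture(autouse=True)
        def initdir(self, tmpdir):
            tmpdir.chdir()  # change to the pytest-provided temporary directory
            tmpdir.join("samplefile.ini").write("# testdata")

        def test_method(self):
            with open("samplefile.ini") as f:
                s = f.read()
            assert "testdata" in s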

@@ -21,20 +21,20 @@ and displays them at the end of the session::
Running pytest now produces this output::

$ pytest test_show_warnings.py
-======= test session starts ========
+=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

-test_show_warnings.py .
+test_show_warnings.py . [100%]

-======= warnings summary ========
+============================= warnings summary =============================
test_show_warnings.py::test_one
$REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
warnings.warn(UserWarning("api v1, should use functions from v2"))

-- Docs: http://doc.pytest.org/en/latest/warnings.html
-======= 1 passed, 1 warnings in 0.12 seconds ========
+=================== 1 passed, 1 warnings in 0.12 seconds ===================

Pytest by default catches all warnings except for ``DeprecationWarning`` and ``PendingDeprecationWarning``.

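The warning-raising module exercised in both runs is along these lines (a sketch consistent with the recorded ``UserWarning`` above)::

    # content of test_show_warnings.py
    import warnings

    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1

    def test_one():
        assert api_v1() == 1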
@@ -42,9 +42,9 @@ The ``-W`` flag can be passed to control which warnings will be displayed or even turn
them into errors::

$ pytest -q test_show_warnings.py -W error::UserWarning
-F
-======= FAILURES ========
-_______ test_one ________
+F [100%]
+================================= FAILURES =================================
+_________________________________ test_one _________________________________

def test_one():
> assert api_v1() == 1