Merge pull request #9115 from RonnyPfannschmidt/fix-regendoc

fix #8818 - run regendoc without tox cachedir
Ronny Pfannschmidt 2021-10-04 19:11:45 +02:00 committed by GitHub
commit 028eb6fab6
25 changed files with 298 additions and 369 deletions


@@ -0,0 +1 @@
+Ensure ``regendoc`` opts out of ``TOX_ENV`` cachedir selection to ensure independent example test runs.


@@ -34,6 +34,10 @@ REGENDOC_ARGS := \
 regen: REGENDOC_FILES:=*.rst */*.rst
 regen:
-	PYTHONDONTWRITEBYTECODE=1 PYTEST_ADDOPTS="-pno:hypothesis -Wignore::pytest.PytestUnknownMarkWarning" COLUMNS=76 regendoc --update ${REGENDOC_FILES} ${REGENDOC_ARGS}
+	# need to reset cachedir to the non-tox default
+	PYTHONDONTWRITEBYTECODE=1 \
+	PYTEST_ADDOPTS="-pno:hypothesis -p no:hypothesispytest -Wignore::pytest.PytestUnknownMarkWarning -o cache_dir=.pytest_cache" \
+	COLUMNS=76 \
+	regendoc --update ${REGENDOC_FILES} ${REGENDOC_ARGS}
 .PHONY: regen
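
For context, what the added flags amount to when driven from Python directly (an illustrative sketch, not part of this commit; ``pytest.main`` accepts the same arguments as the CLI)::

    import pytest

    # Disable the hypothesis plugin, silence unknown-mark warnings, and pin
    # the cache directory back to the default so a tox-provided cache_dir
    # (the subject of #8818) cannot leak into the regenerated docs.
    exit_code = pytest.main([
        "-p", "no:hypothesis",
        "-W", "ignore::pytest.PytestUnknownMarkWarning",
        "-o", "cache_dir=.pytest_cache",
    ])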


@@ -16,8 +16,13 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
 .. code-block:: pytest

-    $ pytest -q --fixtures
-    cache
+    $ pytest --fixtures -v
+    =========================== test session starts ============================
+    platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
+    collected 0 items
+
+    cache -- ../../../..$PYTHON_SITE/_pytest/cacheprovider.py:520
         Return a cache object that can persist state between testing sessions.

         cache.get(key, default)
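
As a quick illustration of the ``cache`` fixture listed above (a sketch, not from this diff; the key name is made up and values must be JSON-serializable)::

    def test_cache_roundtrip(cache):
        previous = cache.get("example/value", None)  # None on the very first run
        cache.set("example/value", 42)               # persisted under .pytest_cache
        assert previous in (None, 42)
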
@@ -28,40 +33,41 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
         Values can be any object handled by the json stdlib module.

-    capsys
+    capsys -- ../../../..$PYTHON_SITE/_pytest/capture.py:903
         Enable text capturing of writes to ``sys.stdout`` and ``sys.stderr``.

         The captured output is made available via ``capsys.readouterr()`` method
         calls, which return a ``(out, err)`` namedtuple.

         ``out`` and ``err`` will be ``text`` objects.

-    capsysbinary
+    capsysbinary -- ../../../..$PYTHON_SITE/_pytest/capture.py:920
         Enable bytes capturing of writes to ``sys.stdout`` and ``sys.stderr``.

         The captured output is made available via ``capsysbinary.readouterr()``
         method calls, which return a ``(out, err)`` namedtuple.

         ``out`` and ``err`` will be ``bytes`` objects.

-    capfd
+    capfd -- ../../../..$PYTHON_SITE/_pytest/capture.py:937
         Enable text capturing of writes to file descriptors ``1`` and ``2``.

         The captured output is made available via ``capfd.readouterr()`` method
         calls, which return a ``(out, err)`` namedtuple.

         ``out`` and ``err`` will be ``text`` objects.

-    capfdbinary
+    capfdbinary -- ../../../..$PYTHON_SITE/_pytest/capture.py:954
         Enable bytes capturing of writes to file descriptors ``1`` and ``2``.

         The captured output is made available via ``capfd.readouterr()`` method
         calls, which return a ``(out, err)`` namedtuple.

         ``out`` and ``err`` will be ``byte`` objects.

-    doctest_namespace [session scope]
+    doctest_namespace [session scope] -- ../../../..$PYTHON_SITE/_pytest/doctest.py:728
         Fixture that returns a :py:class:`dict` that will be injected into the
         namespace of doctests.

-    pytestconfig [session scope]
-        Session-scoped fixture that returns the :class:`pytest.Config` object.
+    pytestconfig [session scope] -- ../../../..$PYTHON_SITE/_pytest/fixtures.py:1372
+        Session-scoped fixture that returns the session's :class:`pytest.Config`
+        object.

         Example::
@@ -69,7 +75,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
             if pytestconfig.getoption("verbose") > 0:
                 ...

-    record_property
+    record_property -- ../../../..$PYTHON_SITE/_pytest/junitxml.py:282
         Add extra properties to the calling test.

         User properties become part of the test report and are available to the
@@ -83,13 +89,13 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
             def test_function(record_property):
                 record_property("example_key", 1)

-    record_xml_attribute
+    record_xml_attribute -- ../../../..$PYTHON_SITE/_pytest/junitxml.py:305
         Add extra xml attributes to the tag for the calling test.

         The fixture is callable with ``name, value``. The value is
         automatically XML-encoded.

-    record_testsuite_property [session scope]
+    record_testsuite_property [session scope] -- ../../../..$PYTHON_SITE/_pytest/junitxml.py:343
         Record a new ``<property>`` tag as child of the root ``<testsuite>``.

         This is suitable to writing global information regarding the entire test
@@ -111,7 +117,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
         `pytest-xdist <https://github.com/pytest-dev/pytest-xdist>`__ plugin. See issue
         `#7767 <https://github.com/pytest-dev/pytest/issues/7767>`__ for details.

-    caplog
+    caplog -- ../../../..$PYTHON_SITE/_pytest/logging.py:491
         Access and control log capturing.

         Captured logs are available through the following properties/methods::
@@ -122,7 +128,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
         * caplog.record_tuples -> list of (logger_name, level, message) tuples
         * caplog.clear() -> clear captured records and formatted log output string

-    monkeypatch
+    monkeypatch -- ../../../..$PYTHON_SITE/_pytest/monkeypatch.py:29
         A convenient fixture for monkey-patching.

         The fixture provides these methods to modify objects, dictionaries or
@@ -141,19 +147,19 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
         fixture has finished. The ``raising`` parameter determines if a KeyError
         or AttributeError will be raised if the set/deletion operation has no target.

-    recwarn
+    recwarn -- ../../../..$PYTHON_SITE/_pytest/recwarn.py:29
         Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.

         See https://docs.python.org/library/how-to/capture-warnings.html for information
         on warning categories.

-    tmpdir_factory [session scope]
+    tmpdir_factory [session scope] -- ../../../..$PYTHON_SITE/_pytest/tmpdir.py:210
         Return a :class:`pytest.TempdirFactory` instance for the test session.

-    tmp_path_factory [session scope]
+    tmp_path_factory [session scope] -- ../../../..$PYTHON_SITE/_pytest/tmpdir.py:217
         Return a :class:`pytest.TempPathFactory` instance for the test session.

-    tmpdir
+    tmpdir -- ../../../..$PYTHON_SITE/_pytest/tmpdir.py:232
         Return a temporary directory path object which is unique to each test
         function invocation, created as a sub directory of the base temporary
         directory.
@@ -163,11 +169,11 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
         ``--basetemp`` is used then it is cleared each session. See :ref:`base
         temporary directory`.

-        The returned object is a `py.path.local`_ path object.
+        The returned object is a `legacy_path`_ object.

-        .. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html
+        .. _legacy_path: https://py.readthedocs.io/en/latest/path.html

-    tmp_path
+    tmp_path -- ../../../..$PYTHON_SITE/_pytest/tmpdir.py:250
         Return a temporary directory path object which is unique to each test
         function invocation, created as a sub directory of the base temporary
         directory.
@@ -180,7 +186,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
         The returned object is a :class:`pathlib.Path` object.

-    ========================== no tests ran in 0.12s ==========================
+    ========================== no tests ran in 0.12s ===========================

 You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
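
A short usage sketch touching a few of the fixtures listed above (illustrative only; the environment variable and file names are made up)::

    def test_builtin_fixtures(capsys, monkeypatch, tmp_path):
        monkeypatch.setenv("EXAMPLE_VAR", "1")  # undone automatically at teardown
        print("hello")
        out, err = capsys.readouterr()          # text (out, err) namedtuple
        assert out == "hello\n" and err == ""
        f = tmp_path / "data.txt"               # unique pathlib.Path per test
        f.write_text("content")
        assert f.read_text() == "content"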


@@ -46,8 +46,8 @@ You can then restrict a test run to only run tests marked with ``webtest``:
     $ pytest -v -m webtest
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 4 items / 3 deselected / 1 selected

     test_server.py::test_send_http PASSED                                [100%]
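
For orientation, the module behind these runs looks roughly like this (a hedged sketch; the docs' actual ``test_server.py`` defines four tests)::

    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass
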
@@ -61,8 +61,8 @@ Or the inverse, running all tests except the webtest ones:
     $ pytest -v -m "not webtest"
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 4 items / 1 deselected / 3 selected

     test_server.py::test_something_quick PASSED                          [ 33%]
@@ -83,8 +83,8 @@ tests based on their module, class, method, or function name:
     $ pytest -v test_server.py::TestClass::test_method
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 1 item

     test_server.py::TestClass::test_method PASSED                        [100%]
@@ -98,8 +98,8 @@ You can also select on the class:
     $ pytest -v test_server.py::TestClass
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 1 item

     test_server.py::TestClass::test_method PASSED                        [100%]
@@ -113,8 +113,8 @@ Or select multiple nodes:
     $ pytest -v test_server.py::TestClass test_server.py::test_send_http
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 2 items

     test_server.py::TestClass::test_method PASSED                        [ 50%]
@@ -157,8 +157,8 @@ The expression matching is now case-insensitive.
     $ pytest -v -k http # running with the above defined example module
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 4 items / 3 deselected / 1 selected

     test_server.py::test_send_http PASSED                                [100%]
@@ -172,8 +172,8 @@ And you can also run all tests except the ones that match the keyword:
     $ pytest -k "not send_http" -v
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 4 items / 1 deselected / 3 selected

     test_server.py::test_something_quick PASSED                          [ 33%]
@@ -189,8 +189,8 @@ Or to select "http" and "quick" tests:
     $ pytest -k "http or quick" -v
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 4 items / 2 deselected / 2 selected

     test_server.py::test_send_http PASSED                                [ 50%]
@@ -244,7 +244,7 @@ You can ask which markers exist for your test suite - the list includes our just
     @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/stable/how-to/parametrize.html for more info and examples.

-    @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/stable/how-to/fixtures.html#usefixtures
+    @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/stable/explanation/fixtures.html#usefixtures

     @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@@ -398,8 +398,8 @@ the test needs:
     $ pytest -E stage2
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 1 item

     test_someenv.py s                                                    [100%]
@@ -413,8 +413,8 @@ and here is one that specifies exactly the environment needed:
     $ pytest -E stage1
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 1 item

     test_someenv.py .                                                    [100%]
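
For context, the ``conftest.py`` these ``-E`` runs assume looks roughly like the docs' example (reconstructed sketch, not part of this diff)::

    import pytest

    def pytest_addoption(parser):
        parser.addoption(
            "-E",
            action="store",
            metavar="NAME",
            help="only run tests matching the environment NAME.",
        )

    def pytest_configure(config):
        # register an additional marker
        config.addinivalue_line(
            "markers", "env(name): mark test to run only on named environment"
        )

    def pytest_runtest_setup(item):
        envnames = [mark.args[0] for mark in item.iter_markers(name="env")]
        if envnames:
            if item.config.getoption("-E") not in envnames:
                pytest.skip("test requires env in {!r}".format(envnames))
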
@@ -488,7 +488,7 @@ The output is as follows:
 .. code-block:: pytest

     $ pytest -q -s
-    Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
+    Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef0001>,), kwargs={})
     .
     1 passed in 0.12s
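
A mark like the one printed above can be produced and inspected roughly as follows (a hedged sketch; the docs' full example reads the mark from a conftest hook instead)::

    import pytest

    def hello_world(*args, **kwargs):
        pass

    @pytest.mark.my_marker(hello_world)
    def test_with_callable_mark(request):
        marker = request.node.get_closest_marker("my_marker")
        print(marker)  # prints a Mark(name='my_marker', args=(...), kwargs={}) line
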
@@ -606,8 +606,8 @@ then you will see two tests skipped and two executed tests as expected:
     $ pytest -rs # this option reports skip reasons
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 4 items

     test_plat.py s.s.                                                    [100%]
@@ -623,8 +623,8 @@ Note that if you specify a platform via the marker-command line option like this:
     $ pytest -m linux
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 4 items / 3 deselected / 1 selected

     test_plat.py .                                                       [100%]
@@ -687,8 +687,8 @@ We can now use the ``-m option`` to select one set:
     $ pytest -m interface --tb=short
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 4 items / 2 deselected / 2 selected

     test_module.py FF                                                    [100%]
@@ -714,8 +714,8 @@ or to select both "event" and "interface" tests:
     $ pytest -m "interface or event" --tb=short
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 4 items / 1 deselected / 3 selected

     test_module.py FFF                                                   [100%]


@@ -30,8 +30,8 @@ now execute the test specification:
     nonpython $ pytest test_simple.yaml
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR/nonpython
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project/nonpython
     collected 2 items

     test_simple.yaml F.                                                  [100%]
@@ -67,8 +67,8 @@ consulted when reporting in ``verbose`` mode:
     nonpython $ pytest -v
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR/nonpython
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project/nonpython
     collecting ... collected 2 items

     test_simple.yaml::hello FAILED                                       [ 50%]
@@ -93,8 +93,8 @@ interesting to just look at the collection tree:
     nonpython $ pytest --collect-only
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR/nonpython
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project/nonpython
     collected 2 items

     <Package nonpython>
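
The ``conftest.py`` behind these YAML runs is, heavily abridged, along these lines (a hedged sketch; the complete example lives in pytest's ``doc/en/example/nonpython/`` directory)::

    import pytest

    class YamlFile(pytest.File):
        def collect(self):
            # the real example parses the YAML and yields one item per entry
            return iter(())

    def pytest_collect_file(parent, path):
        if path.ext == ".yaml" and path.basename.startswith("test"):
            return YamlFile.from_parent(parent, fspath=path)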


@@ -161,8 +161,8 @@ objects, they are still using the default pytest representation:
     $ pytest test_time.py --collect-only
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 8 items

     <Module test_time.py>
@@ -226,8 +226,8 @@ this is a fully self-contained example which you can run with:
     $ pytest test_scenarios.py
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 4 items

     test_scenarios.py ....                                               [100%]
@@ -241,8 +241,8 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
     $ pytest --collect-only test_scenarios.py
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 4 items

     <Module test_scenarios.py>
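
The scenario mechanism these runs exercise is a ``pytest_generate_tests`` hook that parametrizes from class-level ``scenarios`` lists (abridged from the docs' example)::

    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
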
@@ -320,8 +320,8 @@ Let's first see how it looks like at collection time:
     $ pytest test_backends.py --collect-only
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 2 items

     <Module test_backends.py>
@@ -339,7 +339,7 @@ And then when we run the test:
     ================================= FAILURES =================================
     _________________________ test_db_initialized[d2] __________________________

-    db = <conftest.DB2 object at 0xdeadbeef>
+    db = <conftest.DB2 object at 0xdeadbeef0001>

         def test_db_initialized(db):
             # a dummy test
@@ -419,8 +419,8 @@ The result of this test will be successful:
     $ pytest -v test_indirect_list.py
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 1 item

     test_indirect_list.py::test_indirect[a-b] PASSED                     [100%]
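
Reconstructed for context, ``test_indirect_list.py`` routes only ``x`` through its fixture while ``y`` stays literal (per the docs; a sketch, not part of this diff)::

    import pytest

    @pytest.fixture(scope="function")
    def x(request):
        return request.param * 3

    @pytest.fixture(scope="function")
    def y(request):
        return request.param * 2

    @pytest.mark.parametrize("x, y", [("a", "b")], indirect=["x"])
    def test_indirect(x, y):
        assert x == "aaa"
        assert y == "b"
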
@@ -478,7 +478,7 @@ argument sets to use for each test function. Let's run it:
     ================================= FAILURES =================================
     ________________________ TestClass.test_equals[1-2] ________________________

-    self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2
+    self = <test_parametrize.TestClass object at 0xdeadbeef0002>, a = 1, b = 2

         def test_equals(self, a, b):
     >       assert a == b
@@ -508,12 +508,8 @@ Running it results in some skips if we don't have all the python interpreters in
 .. code-block:: pytest

     . $ pytest -rs -q multipython.py
-    sssssssssssssssssssssssssss                                          [100%]
-    ========================= short test summary info ==========================
-    SKIPPED [9] multipython.py:29: 'python3.5' not found
-    SKIPPED [9] multipython.py:29: 'python3.6' not found
-    SKIPPED [9] multipython.py:29: 'python3.7' not found
-    27 skipped in 0.12s
+    ...........................                                          [100%]
+    27 passed in 0.12s

 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -574,8 +570,8 @@ If you run this with reporting for skips enabled:
     $ pytest -rs test_module.py
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collected 2 items

     test_module.py .s                                                    [100%]
@@ -636,8 +632,8 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
     $ pytest -v -m basic
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project
     collecting ... collected 24 items / 21 deselected / 3 selected

     test_pytest_param_example.py::test_eval[1+7-8] PASSED                [ 33%]
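
The ``basic``-marked cases selected above come from ``pytest.param`` entries carrying marks, roughly like this (a hedged sketch of the docs' ``test_pytest_param_example.py``; values abridged)::

    import pytest

    @pytest.mark.parametrize(
        "test_input,expected",
        [
            pytest.param("1+7", 8, marks=pytest.mark.basic),
            pytest.param("2+4", 6, marks=pytest.mark.basic),
            pytest.param("6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail]),
        ],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected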


@@ -148,8 +148,8 @@ The test collection would look like this:
     $ pytest --collect-only
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project, configfile: pytest.ini
     collected 2 items

     <Module check_myapp.py>
@@ -210,8 +210,8 @@ You can always peek at the collection tree without running tests like this:
     . $ pytest --collect-only pythoncollection.py
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project, configfile: pytest.ini
     collected 3 items

     <Module CWD/pythoncollection.py>
@@ -292,8 +292,8 @@ file will be left out:
     $ pytest --collect-only
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project, configfile: pytest.ini
     collected 0 items

     ======================= no tests collected in 0.12s ========================
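
Runs like this one end with "no tests collected" because a ``conftest.py`` excludes files from collection, for example (an illustrative sketch; the file names are made up)::

    import sys

    collect_ignore = ["setup.py"]            # always keep setup.py out of collection
    if sys.version_info[0] > 2:
        collect_ignore_glob = ["*_py2.py"]   # and Python-2-only modules on Python 3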


@@ -10,8 +10,8 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     assertion $ pytest failure_demo.py
     =========================== test session starts ============================
     platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-    cachedir: $PYTHON_PREFIX/.pytest_cache
-    rootdir: $REGENDOC_TMPDIR/assertion
+    cachedir: .pytest_cache
+    rootdir: /home/sweet/project/assertion
     collected 44 items

     failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF        [100%]
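
For orientation, every test in ``failure_demo.py`` fails on purpose to showcase pytest's assertion reporting; a representative reconstruction of one of them (a hypothetical minimal stand-in, not a verbatim excerpt)::

    def otherfunc(a, b):
        assert a == b

    def test_simple_failure():
        otherfunc(42, 43)
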
@@ -29,7 +29,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:19: AssertionError
     _________________________ TestFailing.test_simple __________________________

-    self = <failure_demo.TestFailing object at 0xdeadbeef>
+    self = <failure_demo.TestFailing object at 0xdeadbeef0001>

         def test_simple(self):
             def f():
@@ -40,13 +40,13 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     >       assert f() == g()
     E       assert 42 == 43
-    E        +  where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
-    E        +  and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()
+    E        +  where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef0002>()
+    E        +  and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef0003>()

     failure_demo.py:30: AssertionError
     ____________________ TestFailing.test_simple_multiline _____________________

-    self = <failure_demo.TestFailing object at 0xdeadbeef>
+    self = <failure_demo.TestFailing object at 0xdeadbeef0004>

         def test_simple_multiline(self):
     >       otherfunc_multi(42, 6 * 9)
@@ -63,7 +63,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:14: AssertionError
     ___________________________ TestFailing.test_not ___________________________

-    self = <failure_demo.TestFailing object at 0xdeadbeef>
+    self = <failure_demo.TestFailing object at 0xdeadbeef0005>

         def test_not(self):
             def f():
@@ -71,12 +71,12 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     >       assert not f()
     E       assert not 42
-    E        +  where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()
+    E        +  where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef0006>()

     failure_demo.py:39: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_text _________________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0007>

         def test_eq_text(self):
     >       assert "spam" == "eggs"
@@ -87,7 +87,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:44: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_similar_text _____________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0008>

         def test_eq_similar_text(self):
     >       assert "foo 1 bar" == "foo 2 bar"
@@ -100,7 +100,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:47: AssertionError
     ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0009>

         def test_eq_multiline_text(self):
     >       assert "foo\nspam\nbar" == "foo\neggs\nbar"
@@ -113,7 +113,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:50: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_long_text _______________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef000a>

         def test_eq_long_text(self):
             a = "1" * 100 + "a" + "2" * 100
@@ -130,7 +130,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:55: AssertionError
     _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef000b>

         def test_eq_long_text_multiline(self):
             a = "1\n" * 100 + "a" + "2\n" * 100
@@ -150,7 +150,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:60: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_list _________________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef000c>

         def test_eq_list(self):
     >       assert [0, 1, 2] == [0, 1, 3]
@@ -161,7 +161,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:63: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_list_long _______________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef000d>

         def test_eq_list_long(self):
             a = [0] * 100 + [1] + [3] * 100
@@ -174,7 +174,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:68: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_dict _________________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef000e>

         def test_eq_dict(self):
     >       assert {"a": 0, "b": 1, "c": 0} == {"a": 0, "b": 2, "d": 0}
@@ -192,7 +192,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:71: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_set __________________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef000f>

         def test_eq_set(self):
     >       assert {0, 10, 11, 12} == {0, 20, 21}
@@ -210,7 +210,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:74: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_longer_list ______________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0010>

         def test_eq_longer_list(self):
     >       assert [1, 2] == [1, 2, 3]
@@ -221,7 +221,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:77: AssertionError
     _________________ TestSpecialisedExplanations.test_in_list _________________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0011>

         def test_in_list(self):
     >       assert 1 in [0, 2, 3, 4, 5]
@@ -230,7 +230,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:80: AssertionError
     __________ TestSpecialisedExplanations.test_not_in_text_multiline __________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0012>

         def test_not_in_text_multiline(self):
             text = "some multiline\ntext\nwhich\nincludes foo\nand a\ntail"
@@ -249,7 +249,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:84: AssertionError
     ___________ TestSpecialisedExplanations.test_not_in_text_single ____________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0013>

         def test_not_in_text_single(self):
             text = "single foo line"
@@ -262,7 +262,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:88: AssertionError
     _________ TestSpecialisedExplanations.test_not_in_text_single_long _________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0014>

         def test_not_in_text_single_long(self):
             text = "head " * 50 + "foo " + "tail " * 20
@@ -275,7 +275,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:92: AssertionError
     ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0015>

         def test_not_in_text_single_long_term(self):
             text = "head " * 50 + "f" * 70 + "tail " * 20
@@ -288,7 +288,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:96: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_dataclass _______________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0016>

         def test_eq_dataclass(self):
             from dataclasses import dataclass
@@ -315,7 +315,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:108: AssertionError
     ________________ TestSpecialisedExplanations.test_eq_attrs _________________

-    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
+    self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef0017>

         def test_eq_attrs(self):
             import attr
@@ -349,7 +349,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
             i = Foo()
     >       assert i.b == 2
     E       assert 1 == 2
-    E        +  where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
+    E        +  where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef0018>.b

     failure_demo.py:128: AssertionError
     _________________________ test_attribute_instance __________________________
@@ -360,8 +360,8 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     >       assert Foo().b == 2
     E       AssertionError: assert 1 == 2
-    E        +  where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
-    E        +  where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
+    E        +  where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef0019>.b
+    E        +  where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef0019> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()

     failure_demo.py:135: AssertionError
     __________________________ test_attribute_failure __________________________
@@ -379,7 +379,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:146:
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

-    self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>
+    self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef001a>

         def _get_b(self):
     >       raise Exception("Failed to get attrib")
@@ -397,15 +397,15 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     >       assert Foo().b == Bar().b
     E       AssertionError: assert 1 == 2
-    E        +  where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef>.b
-    E        +  where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
-    E        +  and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
-    E        +  where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
+    E        +  where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef001b>.b
+    E        +  where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef001b> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
+    E        +  and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef001c>.b
+    E        +  where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef001c> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()

     failure_demo.py:156: AssertionError
     __________________________ TestRaises.test_raises __________________________

-    self = <failure_demo.TestRaises object at 0xdeadbeef>
+    self = <failure_demo.TestRaises object at 0xdeadbeef001d>

         def test_raises(self):
             s = "qwe"
@@ -415,7 +415,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:166: ValueError
     ______________________ TestRaises.test_raises_doesnt _______________________

-    self = <failure_demo.TestRaises object at 0xdeadbeef>
+    self = <failure_demo.TestRaises object at 0xdeadbeef001e>

         def test_raises_doesnt(self):
     >       raises(OSError, int, "3")
@@ -424,7 +424,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:169: Failed
     __________________________ TestRaises.test_raise ___________________________

-    self = <failure_demo.TestRaises object at 0xdeadbeef>
+    self = <failure_demo.TestRaises object at 0xdeadbeef001f>

         def test_raise(self):
     >       raise ValueError("demo error")
@@ -433,7 +433,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:172: ValueError
     ________________________ TestRaises.test_tupleerror ________________________

-    self = <failure_demo.TestRaises object at 0xdeadbeef>
+    self = <failure_demo.TestRaises object at 0xdeadbeef0020>

         def test_tupleerror(self):
     >       a, b = [1]  # NOQA
@@ -442,7 +442,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:175: ValueError
     ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

-    self = <failure_demo.TestRaises object at 0xdeadbeef>
+    self = <failure_demo.TestRaises object at 0xdeadbeef0021>

         def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
             items = [1, 2, 3]
@@ -455,7 +455,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     items is [1, 2, 3]
     ________________________ TestRaises.test_some_error ________________________

-    self = <failure_demo.TestRaises object at 0xdeadbeef>
+    self = <failure_demo.TestRaises object at 0xdeadbeef0022>

         def test_some_error(self):
     >       if namenotexi:  # NOQA
@@ -486,7 +486,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     abc-123:2: AssertionError
     ____________________ TestMoreErrors.test_complex_error _____________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef0023>

         def test_complex_error(self):
             def f():
@@ -512,7 +512,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:6: AssertionError
     ___________________ TestMoreErrors.test_z1_unpack_error ____________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef0024>

         def test_z1_unpack_error(self):
             items = []
@@ -522,7 +522,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:217: ValueError
     ____________________ TestMoreErrors.test_z2_type_error _____________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef0025>

         def test_z2_type_error(self):
             items = 3
@@ -532,20 +532,20 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:221: TypeError
     ______________________ TestMoreErrors.test_startswith ______________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef0026>

         def test_startswith(self):
             s = "123"
             g = "456"
     >       assert s.startswith(g)
     E       AssertionError: assert False
-    E        +  where False = <built-in method startswith of str object at 0xdeadbeef>('456')
-    E        +  where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
+    E        +  where False = <built-in method startswith of str object at 0xdeadbeef0027>('456')
+    E        +  where <built-in method startswith of str object at 0xdeadbeef0027> = '123'.startswith

     failure_demo.py:226: AssertionError
     __________________ TestMoreErrors.test_startswith_nested ___________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef0028>

         def test_startswith_nested(self):
             def f():
@@ -556,15 +556,15 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     >       assert f().startswith(g())
     E       AssertionError: assert False
-    E        +  where False = <built-in method startswith of str object at 0xdeadbeef>('456')
-    E        +  where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
-    E        +  where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
-    E        +  and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
+    E        +  where False = <built-in method startswith of str object at 0xdeadbeef0027>('456')
+    E        +  where <built-in method startswith of str object at 0xdeadbeef0027> = '123'.startswith
+    E        +  where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef0029>()
+    E        +  and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef002a>()

     failure_demo.py:235: AssertionError
     _____________________ TestMoreErrors.test_global_func ______________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef002b>

         def test_global_func(self):
     >       assert isinstance(globf(42), float)
@@ -575,18 +575,18 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:238: AssertionError
     _______________________ TestMoreErrors.test_instance _______________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef002c>

         def test_instance(self):
             self.x = 6 * 7
     >       assert self.x != 42
     E       assert 42 != 42
-    E        +  where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
+    E        +  where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef002c>.x

     failure_demo.py:242: AssertionError
     _______________________ TestMoreErrors.test_compare ________________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef002d>

         def test_compare(self):
     >       assert globf(10) < 5
@@ -596,7 +596,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:245: AssertionError
     _____________________ TestMoreErrors.test_try_finally ______________________

-    self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
+    self = <failure_demo.TestMoreErrors object at 0xdeadbeef002e>

         def test_try_finally(self):
             x = 1
@@ -607,7 +607,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     failure_demo.py:250: AssertionError
     ___________________ TestCustomAssertMsg.test_single_line ___________________

-    self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
+    self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef002f>

         def test_single_line(self):
             class A:
@@ -622,7 +622,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:261: AssertionError failure_demo.py:261: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________ ____________________ TestCustomAssertMsg.test_multiline ____________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef> self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef0030>
def test_multiline(self): def test_multiline(self):
class A: class A:
@ -641,7 +641,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:268: AssertionError failure_demo.py:268: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________ ___________________ TestCustomAssertMsg.test_custom_repr ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef> self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef0031>
def test_custom_repr(self): def test_custom_repr(self):
class JSON: class JSON:


@ -166,6 +166,7 @@ Now we'll get feedback on a bad argument:
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...] ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: argument --cmdopt: invalid choice: 'type3' (choose from 'type1', 'type2') pytest: error: argument --cmdopt: invalid choice: 'type3' (choose from 'type1', 'type2')
If you need to provide more detailed error messages, you can use the If you need to provide more detailed error messages, you can use the
``type`` parameter and raise ``pytest.UsageError``: ``type`` parameter and raise ``pytest.UsageError``:
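An editor's sketch of that pattern; the option name ``--cmdopt`` and its valid choices are taken from the error message above, while the validator body is illustrative:

.. code-block:: python

    # conftest.py -- sketch: reject bad values with a friendly message
    import pytest


    def type_checker(value):
        # raising pytest.UsageError aborts the run with our own text
        if value not in ("type1", "type2"):
            raise pytest.UsageError("--cmdopt must be 'type1' or 'type2'")
        return value


    def pytest_addoption(parser):
        parser.addoption("--cmdopt", type=type_checker, default="type1")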
@ -232,8 +233,8 @@ directory with the above conftest.py:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 0 items collected 0 items
========================== no tests ran in 0.12s =========================== ========================== no tests ran in 0.12s ===========================
@ -297,8 +298,8 @@ and when running it will see a skipped "slow" test:
$ pytest -rs # "-rs" means report details on the little 's' $ pytest -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
test_module.py .s [100%] test_module.py .s [100%]
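The ``conftest.py`` driving this behaviour sits outside this hunk; a sketch consistent with the ``--runslow`` flag and the skipped ``slow`` test:

.. code-block:: python

    # conftest.py -- skip tests marked "slow" unless --runslow is given
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--runslow", action="store_true", default=False, help="run slow tests"
        )


    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            return  # --runslow given: do not skip anything
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)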
@ -314,8 +315,8 @@ Or run it including the ``slow`` marked test:
$ pytest --runslow $ pytest --runslow
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
test_module.py .. [100%] test_module.py .. [100%]
@ -458,9 +459,9 @@ which will add the string to the test header accordingly:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
project deps: mylib-1.1 project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 0 items collected 0 items
========================== no tests ran in 0.12s =========================== ========================== no tests ran in 0.12s ===========================
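The extra ``project deps`` line comes from the ``pytest_report_header`` hook; a minimal sketch:

.. code-block:: python

    # conftest.py -- add one line to the session header
    def pytest_report_header(config):
        return "project deps: mylib-1.1"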
@ -487,10 +488,10 @@ which will add info only when run with "-v":
$ pytest -v $ pytest -v
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
info1: did you know that ... info1: did you know that ...
did you? did you?
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collecting ... collected 0 items collecting ... collected 0 items
========================== no tests ran in 0.12s =========================== ========================== no tests ran in 0.12s ===========================
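The same hook can consult the configured verbosity; a sketch matching the two header lines that appear only under ``-v`` (returning a list yields one header line per element):

.. code-block:: python

    # conftest.py -- extra header lines only for verbose runs
    def pytest_report_header(config):
        if config.getoption("verbose") > 0:
            return ["info1: did you know that ...", "did you?"]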
@ -502,8 +503,8 @@ and nothing when run plainly:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 0 items collected 0 items
========================== no tests ran in 0.12s =========================== ========================== no tests ran in 0.12s ===========================
@ -542,8 +543,8 @@ Now we can profile which test functions execute the slowest:
$ pytest --durations=3 $ pytest --durations=3
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 3 items collected 3 items
test_some_are_slow.py ... [100%] test_some_are_slow.py ... [100%]
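The module itself is not shown in this hunk; a plausible sketch with three tests of different speed (the sleep times are assumptions):

.. code-block:: python

    # test_some_are_slow.py -- three tests with distinct runtimes
    import time


    def test_funcfast():
        time.sleep(0.1)


    def test_funcslow1():
        time.sleep(0.2)


    def test_funcslow2():
        time.sleep(0.3)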
@ -648,8 +649,8 @@ If we run this:
$ pytest -rx $ pytest -rx
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 4 items collected 4 items
test_step.py .Fx. [100%] test_step.py .Fx. [100%]
@ -657,7 +658,7 @@ If we run this:
================================= FAILURES ================================= ================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________ ____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling object at 0xdeadbeef> self = <test_step.TestUserHandling object at 0xdeadbeef0001>
def test_modification(self): def test_modification(self):
> assert 0 > assert 0
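The ``x`` following this failure comes from an ``incremental`` marker handled in ``conftest.py``; an abridged sketch of that pattern, along the lines of the incremental-testing example in these docs:

.. code-block:: python

    # conftest.py -- once one step of a marked class fails, xfail the rest
    import pytest


    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords and call.excinfo is not None:
            # remember the first failing step on the containing class
            item.parent._previousfailed = item


    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail(f"previous test failed ({previousfailed.name})")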
@ -732,8 +733,8 @@ We can run this:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 7 items collected 7 items
test_step.py .Fx. [ 57%] test_step.py .Fx. [ 57%]
@ -743,17 +744,17 @@ We can run this:
================================== ERRORS ================================== ================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________ _______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1 file /home/sweet/project/b/test_error.py, line 1
def test_root(db): # no db here, will error out def test_root(db): # no db here, will error out
E fixture 'db' not found E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory > available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them. > use 'pytest --fixtures [testpath]' for help on them.
$REGENDOC_TMPDIR/b/test_error.py:1 /home/sweet/project/b/test_error.py:1
================================= FAILURES ================================= ================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________ ____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling object at 0xdeadbeef> self = <test_step.TestUserHandling object at 0xdeadbeef0002>
def test_modification(self): def test_modification(self):
> assert 0 > assert 0
@ -762,21 +763,21 @@ We can run this:
test_step.py:11: AssertionError test_step.py:11: AssertionError
_________________________________ test_a1 __________________________________ _________________________________ test_a1 __________________________________
db = <conftest.DB object at 0xdeadbeef> db = <conftest.DB object at 0xdeadbeef0003>
def test_a1(db): def test_a1(db):
> assert 0, db # to show value > assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef> E AssertionError: <conftest.DB object at 0xdeadbeef0003>
E assert 0 E assert 0
a/test_db.py:2: AssertionError a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________ _________________________________ test_a2 __________________________________
db = <conftest.DB object at 0xdeadbeef> db = <conftest.DB object at 0xdeadbeef0003>
def test_a2(db): def test_a2(db):
> assert 0, db # to show value > assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef> E AssertionError: <conftest.DB object at 0xdeadbeef0003>
E assert 0 E assert 0
a/test_db2.py:2: AssertionError a/test_db2.py:2: AssertionError
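The layout implied here: the ``db`` fixture lives in ``a/conftest.py`` and is therefore invisible to ``b/test_error.py``. A sketch:

.. code-block:: python

    # a/conftest.py -- only tests at or below a/ can request "db"
    import pytest


    class DB:
        pass


    @pytest.fixture(scope="session")
    def db():
        return DB()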
@ -851,8 +852,8 @@ and run them:
$ pytest test_module.py $ pytest test_module.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
test_module.py FF [100%] test_module.py FF [100%]
@ -860,7 +861,7 @@ and run them:
================================= FAILURES ================================= ================================= FAILURES =================================
________________________________ test_fail1 ________________________________ ________________________________ test_fail1 ________________________________
tmp_path = Path('PYTEST_TMPDIR/test_fail10') tmp_path = PosixPath('PYTEST_TMPDIR/test_fail10')
def test_fail1(tmp_path): def test_fail1(tmp_path):
> assert 0 > assert 0
@ -958,8 +959,8 @@ and run it:
$ pytest -s test_module.py $ pytest -s test_module.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 3 items collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails test_module.py Esetting up a test failed! test_module.py::test_setup_fails


@ -22,7 +22,7 @@ Install ``pytest``
.. code-block:: bash .. code-block:: bash
$ pytest --version $ pytest --version
pytest 6.2.5 pytest 6.3.0.dev685+g581b021aa.d20210922
.. _`simpletest`: .. _`simpletest`:
@ -48,8 +48,8 @@ The test
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 1 item collected 1 item
test_sample.py F [100%] test_sample.py F [100%]
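The file behind the single ``F`` is pytest's canonical first example; a sketch consistent with it:

.. code-block:: python

    # test_sample.py -- one deliberately failing assertion
    def inc(x):
        return x + 1


    def test_answer():
        assert inc(3) == 5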
@ -138,7 +138,7 @@ Once you develop multiple tests, you may want to group them into a class. pytest
================================= FAILURES ================================= ================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________ ____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass object at 0xdeadbeef> self = <test_class.TestClass object at 0xdeadbeef0001>
def test_two(self): def test_two(self):
x = "hello" x = "hello"
@ -186,17 +186,17 @@ This is outlined below:
================================= FAILURES ================================= ================================= FAILURES =================================
______________________ TestClassDemoInstance.test_two ______________________ ______________________ TestClassDemoInstance.test_two ______________________
self = <test_class_demo.TestClassDemoInstance object at 0xdeadbeef> self = <test_class_demo.TestClassDemoInstance object at 0xdeadbeef0002>
def test_two(self): def test_two(self):
> assert self.value == 1 > assert self.value == 1
E assert 0 == 1 E assert 0 == 1
E + where 0 = <test_class_demo.TestClassDemoInstance object at 0xdeadbeef>.value E + where 0 = <test_class_demo.TestClassDemoInstance object at 0xdeadbeef0002>.value
test_class_demo.py:9: AssertionError test_class_demo.py:9: AssertionError
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_class_demo.py::TestClassDemoInstance::test_two - assert 0 == 1 FAILED test_class_demo.py::TestClassDemoInstance::test_two - assert 0 == 1
1 failed, 1 passed in 0.04s 1 failed, 1 passed in 0.12s
Note that attributes added at class level are *class attributes*, so they will be shared between tests. Note that attributes added at class level are *class attributes*, so they will be shared between tests.
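A small illustration of that caveat (hypothetical class; the second test passes only because the first one mutated the shared list):

.. code-block:: python

    class TestSharedState:
        shared = []  # class attribute: a single list shared by all tests

        def test_append(self):
            self.shared.append(1)
            assert self.shared == [1]

        def test_sees_previous_append(self):
            # coupled to the test above -- usually not what you want
            assert self.shared == [1]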
@ -221,14 +221,14 @@ List the name ``tmp_path`` in the test function signature and ``pytest`` will lo
================================= FAILURES ================================= ================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________ _____________________________ test_needsfiles ______________________________
tmp_path = Path('PYTEST_TMPDIR/test_needsfiles0') tmp_path = PosixPath('PYTEST_TMPDIR/test_needsfiles0')
def test_needsfiles(tmp_path): def test_needsfiles(tmp_path):
print(tmp_path) print(tmp_path)
> assert 0 > assert 0
E assert 0 E assert 0
test_tmpdir.py:3: AssertionError test_tmp_path.py:3: AssertionError
--------------------------- Captured stdout call --------------------------- --------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0 PYTEST_TMPDIR/test_needsfiles0
========================= short test summary info ========================== ========================= short test summary info ==========================


@ -30,8 +30,8 @@ you will see the return value of the function call:
$ pytest test_assert1.py $ pytest test_assert1.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 1 item collected 1 item
test_assert1.py F [100%] test_assert1.py F [100%]
@ -185,8 +185,8 @@ if you run this module:
$ pytest test_assert2.py $ pytest test_assert2.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 1 item collected 1 item
test_assert2.py F [100%] test_assert2.py F [100%]
@ -205,7 +205,7 @@ if you run this module:
E '5' E '5'
E Use -v to get the full diff E Use -v to get the full diff
test_assert2.py:6: AssertionError test_assert2.py:4: AssertionError
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'... FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
============================ 1 failed in 0.12s ============================= ============================ 1 failed in 0.12s =============================
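A sketch of ``test_assert2.py`` consistent with the extra ``'5'`` reported for the second set:

.. code-block:: python

    # test_assert2.py -- each set holds one element the other lacks
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        assert set1 == set2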


@ -87,8 +87,8 @@ If you then run it with ``--lf``:
$ pytest --lf $ pytest --lf
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
run-last-failure: rerun previous 2 failures run-last-failure: rerun previous 2 failures
@ -134,8 +134,8 @@ of ``FF`` and dots):
$ pytest --ff $ pytest --ff
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 50 items collected 50 items
run-last-failure: rerun previous 2 failures first run-last-failure: rerun previous 2 failures first
@ -278,72 +278,14 @@ You can always peek at the content of the cache using the
$ pytest --cache-show $ pytest --cache-show
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: /home/sweet/project/.pytest_cache
--------------------------- cache values for '*' --------------------------- --------------------------- cache values for '*' ---------------------------
cache/lastfailed contains: cache/lastfailed contains:
{'test_50.py::test_num[17]': True, {'test_caching.py::test_function': True}
'test_50.py::test_num[25]': True,
'test_assert1.py::test_function': True,
'test_assert2.py::test_set_comparison': True,
'test_caching.py::test_function': True,
'test_foocompare.py::test_compare': True}
cache/nodeids contains: cache/nodeids contains:
['test_50.py::test_num[0]', ['test_caching.py::test_function']
'test_50.py::test_num[10]',
'test_50.py::test_num[11]',
'test_50.py::test_num[12]',
'test_50.py::test_num[13]',
'test_50.py::test_num[14]',
'test_50.py::test_num[15]',
'test_50.py::test_num[16]',
'test_50.py::test_num[17]',
'test_50.py::test_num[18]',
'test_50.py::test_num[19]',
'test_50.py::test_num[1]',
'test_50.py::test_num[20]',
'test_50.py::test_num[21]',
'test_50.py::test_num[22]',
'test_50.py::test_num[23]',
'test_50.py::test_num[24]',
'test_50.py::test_num[25]',
'test_50.py::test_num[26]',
'test_50.py::test_num[27]',
'test_50.py::test_num[28]',
'test_50.py::test_num[29]',
'test_50.py::test_num[2]',
'test_50.py::test_num[30]',
'test_50.py::test_num[31]',
'test_50.py::test_num[32]',
'test_50.py::test_num[33]',
'test_50.py::test_num[34]',
'test_50.py::test_num[35]',
'test_50.py::test_num[36]',
'test_50.py::test_num[37]',
'test_50.py::test_num[38]',
'test_50.py::test_num[39]',
'test_50.py::test_num[3]',
'test_50.py::test_num[40]',
'test_50.py::test_num[41]',
'test_50.py::test_num[42]',
'test_50.py::test_num[43]',
'test_50.py::test_num[44]',
'test_50.py::test_num[45]',
'test_50.py::test_num[46]',
'test_50.py::test_num[47]',
'test_50.py::test_num[48]',
'test_50.py::test_num[49]',
'test_50.py::test_num[4]',
'test_50.py::test_num[5]',
'test_50.py::test_num[6]',
'test_50.py::test_num[7]',
'test_50.py::test_num[8]',
'test_50.py::test_num[9]',
'test_assert1.py::test_function',
'test_assert2.py::test_set_comparison',
'test_caching.py::test_function',
'test_foocompare.py::test_compare']
cache/stepwise contains: cache/stepwise contains:
[] []
example/value contains: example/value contains:
@ -359,9 +301,9 @@ filtering:
$ pytest --cache-show example/* $ pytest --cache-show example/*
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: /home/sweet/project/.pytest_cache
----------------------- cache values for 'example/*' ----------------------- ----------------------- cache values for 'example/*' -----------------------
example/value contains: example/value contains:
42 42
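The ``example/value`` entry is written through the ``config.cache`` API; a sketch of a fixture that would produce it (the ``mydata`` name is illustrative):

.. code-block:: python

    # test_caching.py -- compute once, reuse across test sessions
    import pytest


    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            val = 42  # stands in for an expensive computation
            request.config.cache.set("example/value", val)
        return val


    def test_function(mydata):
        assert mydata == 23  # fails, so it also shows up in cache/lastfailed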


@ -84,8 +84,8 @@ of the failing function and hide the other one:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
test_module.py .F [100%] test_module.py .F [100%]
@ -99,7 +99,7 @@ of the failing function and hide the other one:
test_module.py:12: AssertionError test_module.py:12: AssertionError
-------------------------- Captured stdout setup --------------------------- -------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef> setting up <function test_func2 at 0xdeadbeef0001>
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_module.py::test_func2 - assert False FAILED test_module.py::test_func2 - assert False
======================= 1 failed, 1 passed in 0.12s ======================== ======================= 1 failed, 1 passed in 0.12s ========================
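A sketch of ``test_module.py`` consistent with the captured setup output:

.. code-block:: python

    # test_module.py -- output printed during setup is captured separately
    def setup_function(function):
        print(f"setting up {function}")


    def test_func1():
        assert True


    def test_func2():
        assert False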


@ -29,15 +29,15 @@ Running pytest now produces this output:
$ pytest test_show_warnings.py $ pytest test_show_warnings.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 1 item collected 1 item
test_show_warnings.py . [100%] test_show_warnings.py . [100%]
============================= warnings summary ============================= ============================= warnings summary =============================
test_show_warnings.py::test_one test_show_warnings.py::test_one
$REGENDOC_TMPDIR/test_show_warnings.py:5: UserWarning: api v1, should use functions from v2 /home/sweet/project/test_show_warnings.py:5: UserWarning: api v1, should use functions from v2
warnings.warn(UserWarning("api v1, should use functions from v2")) warnings.warn(UserWarning("api v1, should use functions from v2"))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
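A sketch of ``test_show_warnings.py`` consistent with the summary above (line 5 issues the ``UserWarning``):

.. code-block:: python

    # test_show_warnings.py -- the warning is recorded, the test still passes
    import warnings


    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1


    def test_one():
        assert api_v1() == 1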
@ -393,7 +393,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
============================= warnings summary ============================= ============================= warnings summary =============================
test_pytest_warnings.py:1 test_pytest_warnings.py:1
$REGENDOC_TMPDIR/test_pytest_warnings.py:1: PytestCollectionWarning: cannot collect test class 'Test' because it has a __init__ constructor (from: test_pytest_warnings.py) /home/sweet/project/test_pytest_warnings.py:1: PytestCollectionWarning: cannot collect test class 'Test' because it has a __init__ constructor (from: test_pytest_warnings.py)
class Test: class Test:
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html


@ -31,8 +31,8 @@ then you can just invoke ``pytest`` directly:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 1 item collected 1 item
test_example.txt . [100%] test_example.txt . [100%]
@ -60,8 +60,8 @@ and functions, including from test modules:
$ pytest --doctest-modules $ pytest --doctest-modules
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
mymodule.py . [ 50%] mymodule.py . [ 50%]
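With ``--doctest-modules``, docstring examples inside ``mymodule.py`` are collected as tests; a sketch:

.. code-block:: python

    # mymodule.py -- the >>> example below runs under --doctest-modules
    def something():
        """a doctest in a docstring

        >>> something()
        42
        """
        return 42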


@ -433,8 +433,8 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
$ pytest test_module.py $ pytest test_module.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 2 items collected 2 items
test_module.py FF [100%] test_module.py FF [100%]
@ -442,7 +442,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
================================= FAILURES ================================= ================================= FAILURES =================================
________________________________ test_ehlo _________________________________ ________________________________ test_ehlo _________________________________
smtp_connection = <smtplib.SMTP object at 0xdeadbeef> smtp_connection = <smtplib.SMTP object at 0xdeadbeef0001>
def test_ehlo(smtp_connection): def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo() response, msg = smtp_connection.ehlo()
@ -454,7 +454,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
test_module.py:7: AssertionError test_module.py:7: AssertionError
________________________________ test_noop _________________________________ ________________________________ test_noop _________________________________
smtp_connection = <smtplib.SMTP object at 0xdeadbeef> smtp_connection = <smtplib.SMTP object at 0xdeadbeef0001>
def test_noop(smtp_connection): def test_noop(smtp_connection):
response, msg = smtp_connection.noop() response, msg = smtp_connection.noop()
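Both tests received the same module-scoped object (note the identical address); a sketch of the fixture, including the finalizing print seen in the later runs:

.. code-block:: python

    # conftest.py -- one SMTP connection shared by the whole test module
    import smtplib

    import pytest


    @pytest.fixture(scope="module")
    def smtp_connection():
        connection = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
        yield connection  # everything after the yield is teardown
        print(f"finalizing {connection} (smtp.gmail.com)")
        connection.close()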
@ -1050,7 +1050,7 @@ again, nothing much has changed:
.. code-block:: pytest .. code-block:: pytest
$ pytest -s -q --tb=no test_module.py $ pytest -s -q --tb=no test_module.py
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com) FFfinalizing <smtplib.SMTP object at 0xdeadbeef0002> (smtp.gmail.com)
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_module.py::test_ehlo - assert 0 FAILED test_module.py::test_ehlo - assert 0
@ -1083,7 +1083,7 @@ Running it:
E AssertionError: (250, b'mail.python.org') E AssertionError: (250, b'mail.python.org')
E assert 0 E assert 0
------------------------- Captured stdout teardown ------------------------- ------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org) finalizing <smtplib.SMTP object at 0xdeadbeef0003> (mail.python.org)
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_anothersmtp.py::test_showhelo - AssertionError: (250, b'mail.... FAILED test_anothersmtp.py::test_showhelo - AssertionError: (250, b'mail....
@ -1218,7 +1218,7 @@ So let's just do another run:
================================= FAILURES ================================= ================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________ ________________________ test_ehlo[smtp.gmail.com] _________________________
smtp_connection = <smtplib.SMTP object at 0xdeadbeef> smtp_connection = <smtplib.SMTP object at 0xdeadbeef0004>
def test_ehlo(smtp_connection): def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo() response, msg = smtp_connection.ehlo()
@ -1230,7 +1230,7 @@ So let's just do another run:
test_module.py:7: AssertionError test_module.py:7: AssertionError
________________________ test_noop[smtp.gmail.com] _________________________ ________________________ test_noop[smtp.gmail.com] _________________________
smtp_connection = <smtplib.SMTP object at 0xdeadbeef> smtp_connection = <smtplib.SMTP object at 0xdeadbeef0004>
def test_noop(smtp_connection): def test_noop(smtp_connection):
response, msg = smtp_connection.noop() response, msg = smtp_connection.noop()
@ -1241,7 +1241,7 @@ So let's just do another run:
test_module.py:13: AssertionError test_module.py:13: AssertionError
________________________ test_ehlo[mail.python.org] ________________________ ________________________ test_ehlo[mail.python.org] ________________________
smtp_connection = <smtplib.SMTP object at 0xdeadbeef> smtp_connection = <smtplib.SMTP object at 0xdeadbeef0005>
def test_ehlo(smtp_connection): def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo() response, msg = smtp_connection.ehlo()
@ -1251,10 +1251,10 @@ So let's just do another run:
test_module.py:6: AssertionError test_module.py:6: AssertionError
-------------------------- Captured stdout setup --------------------------- -------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> finalizing <smtplib.SMTP object at 0xdeadbeef0004>
________________________ test_noop[mail.python.org] ________________________ ________________________ test_noop[mail.python.org] ________________________
smtp_connection = <smtplib.SMTP object at 0xdeadbeef> smtp_connection = <smtplib.SMTP object at 0xdeadbeef0005>
def test_noop(smtp_connection): def test_noop(smtp_connection):
response, msg = smtp_connection.noop() response, msg = smtp_connection.noop()
@ -1264,7 +1264,7 @@ So let's just do another run:
test_module.py:13: AssertionError test_module.py:13: AssertionError
------------------------- Captured stdout teardown ------------------------- ------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> finalizing <smtplib.SMTP object at 0xdeadbeef0005>
========================= short test summary info ========================== ========================= short test summary info ==========================
FAILED test_module.py::test_ehlo[smtp.gmail.com] - assert 0 FAILED test_module.py::test_ehlo[smtp.gmail.com] - assert 0
FAILED test_module.py::test_noop[smtp.gmail.com] - assert 0 FAILED test_module.py::test_noop[smtp.gmail.com] - assert 0
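The ``[smtp.gmail.com]``/``[mail.python.org]`` ids come from fixture parametrization; a sketch:

.. code-block:: python

    # conftest.py -- each test using the fixture runs once per server
    import smtplib

    import pytest


    @pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
    def smtp_connection(request):
        connection = smtplib.SMTP(request.param, 587, timeout=5)
        yield connection
        print(f"finalizing {connection}")
        connection.close()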
@ -1332,8 +1332,8 @@ Running the above tests results in the following test IDs being used:
$ pytest --collect-only $ pytest --collect-only
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 11 items collected 11 items
<Module test_anothersmtp.py> <Module test_anothersmtp.py>
@ -1385,8 +1385,8 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``:
$ pytest test_fixture_marks.py -v $ pytest test_fixture_marks.py -v
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collecting ... collected 3 items collecting ... collected 3 items
test_fixture_marks.py::test_data[0] PASSED [ 33%] test_fixture_marks.py::test_data[0] PASSED [ 33%]
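A sketch of ``test_fixture_marks.py`` consistent with the skipped ``test_data[2]``:

.. code-block:: python

    # test_fixture_marks.py -- mark a single fixture parameter as skipped
    import pytest


    @pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
    def data_set(request):
        return request.param


    def test_data(data_set):
        pass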
@ -1435,8 +1435,8 @@ Here we declare an ``app`` fixture which receives the previously defined
$ pytest -v test_appsetup.py $ pytest -v test_appsetup.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collecting ... collected 2 items collecting ... collected 2 items
test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%] test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
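A sketch of ``test_appsetup.py``: the ``app`` fixture itself requests the parametrized ``smtp_connection``, so the parametrization propagates:

.. code-block:: python

    # test_appsetup.py -- fixtures can depend on other fixtures
    import pytest


    class App:
        def __init__(self, smtp_connection):
            self.smtp_connection = smtp_connection


    @pytest.fixture(scope="module")
    def app(smtp_connection):
        return App(smtp_connection)


    def test_smtp_connection_exists(app):
        assert app.smtp_connection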
@ -1515,8 +1515,8 @@ Let's run the tests in verbose mode and look at the print-output:
$ pytest -v -s test_module.py $ pytest -v -s test_module.py
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collecting ... collected 8 items collecting ... collected 8 items
test_module.py::test_0[1] SETUP otherarg 1 test_module.py::test_0[1] SETUP otherarg 1
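An abridged sketch of the fixtures whose ``SETUP``/``TEARDOWN`` prints appear here (only one of the test functions is reproduced):

.. code-block:: python

    # test_module.py -- function-scoped vs. broader-scoped parametrized fixtures
    import pytest


    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        print("  SETUP modarg", param)
        yield param
        print("  TEARDOWN modarg", param)


    @pytest.fixture(scope="function", params=[1, 2])
    def otherarg(request):
        param = request.param
        print("  SETUP otherarg", param)
        yield param
        print("  TEARDOWN otherarg", param)


    def test_0(otherarg):
        print("  RUN test0 with otherarg", otherarg)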


@ -68,13 +68,13 @@ Executing pytest normally gives us this output (we are skipping the header to fo
.. code-block:: pytest .. code-block:: pytest
$ pytest --no-header $ pytest --no-header
=========================== test session starts =========================== =========================== test session starts ============================
collected 4 items collected 4 items
test_verbosity_example.py .FFF [100%] test_verbosity_example.py .FFF [100%]
================================ FAILURES ================================= ================================= FAILURES =================================
_____________________________ test_words_fail _____________________________ _____________________________ test_words_fail ______________________________
def test_words_fail(): def test_words_fail():
fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"] fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
@ -85,7 +85,7 @@ Executing pytest normally gives us this output (we are skipping the header to fo
E Use -v to get the full diff E Use -v to get the full diff
test_verbosity_example.py:8: AssertionError test_verbosity_example.py:8: AssertionError
____________________________ test_numbers_fail ____________________________ ____________________________ test_numbers_fail _____________________________
def test_numbers_fail(): def test_numbers_fail():
number_to_text1 = {str(x): x for x in range(5)} number_to_text1 = {str(x): x for x in range(5)}
@ -100,7 +100,7 @@ Executing pytest normally gives us this output (we are skipping the header to fo
E Use -v to get the full diff E Use -v to get the full diff
test_verbosity_example.py:14: AssertionError test_verbosity_example.py:14: AssertionError
___________________________ test_long_text_fail ___________________________ ___________________________ test_long_text_fail ____________________________
def test_long_text_fail(): def test_long_text_fail():
long_text = "Lorem ipsum dolor sit amet " * 10 long_text = "Lorem ipsum dolor sit amet " * 10
@ -108,11 +108,11 @@ Executing pytest normally gives us this output (we are skipping the header to fo
E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ips... sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet ' E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ips... sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
test_verbosity_example.py:19: AssertionError test_verbosity_example.py:19: AssertionError
========================= short test summary info ========================= ========================= short test summary info ==========================
FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser... FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass... FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a... FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
======================= 3 failed, 1 passed in 0.08s ======================= ======================= 3 failed, 1 passed in 0.12s ========================
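Only fragments of ``test_verbosity_example.py`` appear in these tracebacks; a plausible reconstruction (the second operands of the first two comparisons are assumptions, while the last test follows directly from its assertion message):

.. code-block:: python

    # test_verbosity_example.py -- reconstruction; see note above
    def test_ok():
        pass


    def test_words_fail():
        fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
        fruits2 = ["banana", "apple", "orange", "melon", "kiwi"]  # assumed value
        assert fruits1 == fruits2


    def test_numbers_fail():
        number_to_text1 = {str(x): x for x in range(5)}
        number_to_text2 = {str(x * 10): x * 10 for x in range(5)}  # assumed value
        assert number_to_text1 == number_to_text2


    def test_long_text_fail():
        long_text = "Lorem ipsum dolor sit amet " * 10
        assert "hello world" in long_text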
Notice that: Notice that:
@ -127,7 +127,7 @@ Now we can increase pytest's verbosity:
.. code-block:: pytest .. code-block:: pytest
$ pytest --no-header -v $ pytest --no-header -v
=========================== test session starts =========================== =========================== test session starts ============================
collecting ... collected 4 items collecting ... collected 4 items
test_verbosity_example.py::test_ok PASSED [ 25%] test_verbosity_example.py::test_ok PASSED [ 25%]
@ -135,8 +135,8 @@ Now we can increase pytest's verbosity:
test_verbosity_example.py::test_numbers_fail FAILED [ 75%] test_verbosity_example.py::test_numbers_fail FAILED [ 75%]
test_verbosity_example.py::test_long_text_fail FAILED [100%] test_verbosity_example.py::test_long_text_fail FAILED [100%]
================================ FAILURES ================================= ================================= FAILURES =================================
_____________________________ test_words_fail _____________________________ _____________________________ test_words_fail ______________________________
def test_words_fail(): def test_words_fail():
fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"] fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
@ -151,7 +151,7 @@ Now we can increase pytest's verbosity:
E ? ^ ^ + E ? ^ ^ +
test_verbosity_example.py:8: AssertionError test_verbosity_example.py:8: AssertionError
____________________________ test_numbers_fail ____________________________ ____________________________ test_numbers_fail _____________________________
def test_numbers_fail(): def test_numbers_fail():
number_to_text1 = {str(x): x for x in range(5)} number_to_text1 = {str(x): x for x in range(5)}
@ -169,7 +169,7 @@ Now we can increase pytest's verbosity:
E ...Full output truncated (3 lines hidden), use '-vv' to show E ...Full output truncated (3 lines hidden), use '-vv' to show
test_verbosity_example.py:14: AssertionError test_verbosity_example.py:14: AssertionError
___________________________ test_long_text_fail ___________________________ ___________________________ test_long_text_fail ____________________________
def test_long_text_fail(): def test_long_text_fail():
long_text = "Lorem ipsum dolor sit amet " * 10 long_text = "Lorem ipsum dolor sit amet " * 10
@ -177,11 +177,11 @@ Now we can increase pytest's verbosity:
E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet ' E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
test_verbosity_example.py:19: AssertionError test_verbosity_example.py:19: AssertionError
========================= short test summary info ========================= ========================= short test summary info ==========================
FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser... FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass... FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a... FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
======================= 3 failed, 1 passed in 0.07s ======================= ======================= 3 failed, 1 passed in 0.12s ========================
Notice now that: Notice now that:
@ -196,7 +196,7 @@ Now if we increase verbosity even more:
.. code-block:: pytest .. code-block:: pytest
$ pytest --no-header -vv $ pytest --no-header -vv
=========================== test session starts =========================== =========================== test session starts ============================
collecting ... collected 4 items collecting ... collected 4 items
test_verbosity_example.py::test_ok PASSED [ 25%] test_verbosity_example.py::test_ok PASSED [ 25%]
@ -204,8 +204,8 @@ Now if we increase verbosity even more:
test_verbosity_example.py::test_numbers_fail FAILED [ 75%] test_verbosity_example.py::test_numbers_fail FAILED [ 75%]
test_verbosity_example.py::test_long_text_fail FAILED [100%] test_verbosity_example.py::test_long_text_fail FAILED [100%]
================================ FAILURES ================================= ================================= FAILURES =================================
_____________________________ test_words_fail _____________________________ _____________________________ test_words_fail ______________________________
def test_words_fail(): def test_words_fail():
fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"] fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
@ -220,7 +220,7 @@ Now if we increase verbosity even more:
E ? ^ ^ + E ? ^ ^ +
test_verbosity_example.py:8: AssertionError test_verbosity_example.py:8: AssertionError
____________________________ test_numbers_fail ____________________________ ____________________________ test_numbers_fail _____________________________
def test_numbers_fail(): def test_numbers_fail():
number_to_text1 = {str(x): x for x in range(5)} number_to_text1 = {str(x): x for x in range(5)}
@ -239,7 +239,7 @@ Now if we increase verbosity even more:
E + {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4} E + {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4}
test_verbosity_example.py:14: AssertionError test_verbosity_example.py:14: AssertionError
___________________________ test_long_text_fail ___________________________ ___________________________ test_long_text_fail ____________________________
def test_long_text_fail(): def test_long_text_fail():
long_text = "Lorem ipsum dolor sit amet " * 10 long_text = "Lorem ipsum dolor sit amet " * 10
@ -247,11 +247,11 @@ Now if we increase verbosity even more:
E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet ' E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
test_verbosity_example.py:19: AssertionError test_verbosity_example.py:19: AssertionError
========================= short test summary info ========================= ========================= short test summary info ==========================
FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser... FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass... FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a... FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
======================= 3 failed, 1 passed in 0.07s ======================= ======================= 3 failed, 1 passed in 0.12s ========================
Notice now that: Notice now that:
@ -322,8 +322,8 @@ Example:
$ pytest -ra $ pytest -ra
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 6 items collected 6 items
test_example.py .FEsxX [100%] test_example.py .FEsxX [100%]
@ -380,8 +380,8 @@ More than one character can be used, so for example to only see failed and skipp
$ pytest -rfs $ pytest -rfs
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 6 items collected 6 items
test_example.py .FEsxX [100%] test_example.py .FEsxX [100%]
@ -416,8 +416,8 @@ captured output:
$ pytest -rpP $ pytest -rpP
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 6 items collected 6 items
test_example.py .FEsxX [100%] test_example.py .FEsxX [100%]
@ -447,7 +447,6 @@ captured output:
PASSED test_example.py::test_ok PASSED test_example.py::test_ok
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s === == 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
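A sketch of the ``test_example.py`` that yields the ``.FEsxX`` progress string, one outcome of each kind (the fixture and the two failing tests match the tracebacks shown elsewhere in this diff):

.. code-block:: python

    # test_example.py -- pass, fail, error, skip, xfail, xpass
    import pytest


    @pytest.fixture
    def error_fixture():
        assert 0


    def test_ok():
        print("ok")


    def test_fail():
        assert 0


    def test_error(error_fixture):
        pass


    def test_skip():
        pytest.skip("skipping this test")


    def test_xfail():
        pytest.xfail("xfailing this test")


    @pytest.mark.xfail(reason="always xfail")
    def test_xpass():
        pass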
Creating resultlog format files Creating resultlog format files
-------------------------------------------------- --------------------------------------------------


@ -57,8 +57,8 @@ them in turn:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 3 items collected 3 items
test_expectation.py ..F [100%] test_expectation.py ..F [100%]
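A sketch of ``test_expectation.py`` consistent with two passes and one failure:

.. code-block:: python

    # test_expectation.py -- three parameter sets, the last one fails
    import pytest


    @pytest.mark.parametrize(
        "test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)]
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected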
@ -169,8 +169,8 @@ Let's run this:
$ pytest $ pytest
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR rootdir: /home/sweet/project
collected 3 items collected 3 items
test_expectation.py ..x [100%] test_expectation.py ..x [100%]
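Marking the failing parameter set turns the ``F`` into an ``x``; a sketch of the changed decorator:

.. code-block:: python

    # test_expectation.py -- the last parameter set is now expected to fail
    import pytest


    @pytest.mark.parametrize(
        "test_input,expected",
        [("3+5", 8), ("2+4", 6), pytest.param("6*9", 42, marks=pytest.mark.xfail)],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected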
@ -268,8 +268,8 @@ Let's also run with a stringinput that will lead to a failing test:
def test_valid_string(stringinput): def test_valid_string(stringinput):
> assert stringinput.isalpha() > assert stringinput.isalpha()
E AssertionError: assert False E AssertionError: assert False
E + where False = <built-in method isalpha of str object at 0xdeadbeef>() E + where False = <built-in method isalpha of str object at 0xdeadbeef0001>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha E + where <built-in method isalpha of str object at 0xdeadbeef0001> = '!'.isalpha
test_strings.py:4: AssertionError test_strings.py:4: AssertionError
========================= short test summary info ========================== ========================= short test summary info ==========================
@@ -287,7 +287,7 @@ list:
 $ pytest -q -rs test_strings.py
 s [100%]
 ========================= short test summary info ==========================
-SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:2
+SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at /home/sweet/project/test_strings.py:2
 1 skipped in 0.12s
 Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
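For context, the ``stringinput`` parameter driving these runs is produced by ``pytest_generate_tests``; a sketch reconstructed from the pytest docs:

.. code-block:: python

    # content of conftest.py -- a sketch of the stringinput setup

    def pytest_addoption(parser):
        parser.addoption(
            "--stringinput",
            action="append",
            default=[],
            help="list of stringinputs to pass to test functions",
        )


    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            # an empty --stringinput list yields the empty parameter set
            # that produces the SKIPPED entry shown above
            metafunc.parametrize("stringinput", metafunc.config.getoption("stringinput"))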
View File
@@ -37,8 +37,8 @@ Running this would result in a passed test except for the last
 $ pytest test_tmp_path.py
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: $REGENDOC_TMPDIR
+cachedir: .pytest_cache
+rootdir: /home/sweet/project
 collected 1 item
 test_tmp_path.py F [100%]
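A sketch of the ``test_tmp_path.py`` file this run refers to, reconstructed from the pytest docs; the trailing ``assert 0`` is the deliberate failure mentioned in the hunk context:

.. code-block:: python

    # content of test_tmp_path.py -- a sketch based on the docs example
    CONTENT = "content"


    def test_create_file(tmp_path):
        d = tmp_path / "sub"
        d.mkdir()
        p = d / "hello.txt"
        p.write_text(CONTENT)
        assert p.read_text() == CONTENT
        assert len(list(tmp_path.iterdir())) == 1
        assert 0  # fail for demo purposes so the tmp_path location is shown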
View File
@@ -137,8 +137,8 @@ the ``self.db`` values in the traceback:
 $ pytest test_unittest_db.py
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: $REGENDOC_TMPDIR
+cachedir: .pytest_cache
+rootdir: /home/sweet/project
 collected 2 items
 test_unittest_db.py FF [100%]
@@ -151,7 +151,7 @@ the ``self.db`` values in the traceback:
     def test_method1(self):
         assert hasattr(self, "db")
 >       assert 0, self.db  # fail for demo purposes
-E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
+E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef0001>
 E       assert 0
 test_unittest_db.py:10: AssertionError
@@ -161,7 +161,7 @@ the ``self.db`` values in the traceback:
     def test_method2(self):
 >       assert 0, self.db  # fail for demo purposes
-E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
+E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef0001>
 E       assert 0
 test_unittest_db.py:13: AssertionError
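The ``DummyDB`` objects in these tracebacks come from a class-scoped fixture; a sketch reconstructed from the pytest unittest-integration docs:

.. code-block:: python

    # content of conftest.py -- a sketch; provides the DummyDB seen in the
    # tracebacks by setting it as an attribute on the requesting test class
    import pytest


    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB:
            pass

        request.cls.db = DummyDB()

The test class itself, visible in the tracebacks above, opts in with ``@pytest.mark.usefixtures("db_class")`` on a ``unittest.TestCase`` subclass.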
View File
@@ -201,28 +201,8 @@ hook was invoked:
 .. code-block:: pytest
     $ python myinvoke.py
-    .FEsxX. [100%]*** test run reporting finishing
-    ================================== ERRORS ==================================
-    _______________________ ERROR at setup of test_error _______________________
-    @pytest.fixture
-    def error_fixture():
->       assert 0
-E       assert 0
-    test_example.py:6: AssertionError
-    ================================= FAILURES =================================
-    ________________________________ test_fail _________________________________
-    def test_fail():
->       assert 0
-E       assert 0
-    test_example.py:14: AssertionError
-    ========================= short test summary info ==========================
-    FAILED test_example.py::test_fail - assert 0
-    ERROR test_example.py::test_error - assert 0
+    *** test run reporting finishing
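The shortened output reflects a quieter invocation in the updated example script; a sketch, assuming the docs' ``MyPlugin``/``pytest.main`` pattern (the exact arguments in the updated docs, including ``-qq``, are an assumption here):

.. code-block:: python

    # content of myinvoke.py -- a sketch; "-qq" is an assumption that would
    # suppress the per-test output removed in this hunk
    import sys

    import pytest


    class MyPlugin:
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")


    if __name__ == "__main__":
        sys.exit(pytest.main(["-qq"], plugins=[MyPlugin()]))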
 .. note::
View File
@@ -448,8 +448,8 @@ in our ``pytest.ini`` to tell pytest where to look for example files.
 $ pytest
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
+cachedir: .pytest_cache
+rootdir: /home/sweet/project, configfile: pytest.ini
 collected 2 items
 test_example.py .. [100%]
View File
@@ -45,8 +45,8 @@ To execute it:
 $ pytest
 =========================== test session starts ============================
 platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
-cachedir: $PYTHON_PREFIX/.pytest_cache
-rootdir: $REGENDOC_TMPDIR
+cachedir: .pytest_cache
+rootdir: /home/sweet/project
 collected 1 item
 test_sample.py F [100%]
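The failing file in this run is the canonical getting-started example; a sketch:

.. code-block:: python

    # content of test_sample.py -- the classic first example from the docs

    def inc(x):
        return x + 1


    def test_answer():
        assert inc(3) == 5  # fails: inc(3) == 4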
View File
@@ -1774,8 +1774,8 @@ All the command-line flags can be obtained by running ``pytest --help``::
   --pdb                 start the interactive Python debugger on errors or
                         KeyboardInterrupt.
   --pdbcls=modulename:classname
-                        start a custom interactive Python debugger on
-                        errors. For example:
+                        specify a custom interactive Python debugger for use
+                        with --pdb.For example:
                         --pdbcls=IPython.terminal.debugger:TerminalPdb
   --trace               Immediately break when running each test.
   --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
@@ -1800,7 +1800,8 @@ All the command-line flags can be obtained by running ``pytest --help``::
                         test next time
   --sw-skip, --stepwise-skip
                         ignore the first failing test but stop on the next
-                        failing test
+                        failing test.
+                        implicitly enables --stepwise.
 reporting:
   --durations=N         show N slowest setup/test durations (N=0 for all).
@@ -1887,7 +1888,7 @@ All the command-line flags can be obtained by running ``pytest --help``::
   --basetemp=dir        base temporary directory for this test run.(warning:
                         this directory is removed if it exists)
   -V, --version         display pytest version and information about
-                        plugins.When given twice, also display information
+                        plugins. When given twice, also display information
                         about plugins.
   -h, --help            show help message and configuration info
   -p name               early-load given plugin module name or entry point
@@ -1895,8 +1896,12 @@ All the command-line flags can be obtained by running ``pytest --help``::
                         To avoid loading of plugins, use the `no:` prefix,
                         e.g. `no:doctest`.
   --trace-config        trace considerations of conftest.py files.
-  --debug               store internal tracing debug information in
-                        'pytestdebug.log'.
+  --debug=[DEBUG_FILE_NAME]
+                        store internal tracing debug information in this log
+                        file.
+                        This file is opened with 'w' and truncated as a
+                        result, care advised.
+                        Defaults to 'pytestdebug.log'.
   -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                         override ini option with "option=value" style, e.g.
                         `-o xfail_strict=True -o cache_dir=cache`.
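As a usage sketch (not part of this diff), the updated flags can also be passed programmatically; ``IPython`` here is only an illustration and must be installed for that ``--pdbcls`` value to resolve:

.. code-block:: python

    # a sketch exercising the flags documented above; IPython is an assumption
    import pytest

    # drop into IPython's debugger on failures via --pdb/--pdbcls, and write
    # internal tracing to the default pytestdebug.log via bare --debug
    pytest.main(["--pdb", "--pdbcls=IPython.terminal.debugger:TerminalPdb", "--debug"])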
View File
@@ -89,16 +89,11 @@ passenv = SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST
 deps =
     dataclasses
     PyYAML
-    regendoc>=0.6.1
+    regendoc>=0.8.1
     sphinx
 whitelist_externals =
-    rm
     make
 commands =
-    # don't show hypothesis plugin info in docs, see #4602
-    pip uninstall hypothesis -y
-    rm -rf /tmp/doc-exec*
-    rm -rf {envdir}/.pytest_cache
     make regen
 [testenv:plugins]