Preparing release version 3.5.0

parent b148770066
commit beacecf29b

CHANGELOG.rst: 152 lines changed

@@ -8,6 +8,158 @@
.. towncrier release notes start

Pytest 3.5.0 (2018-03-21)
=========================

Deprecations and Removals
-------------------------

- ``record_xml_property`` fixture is now deprecated in favor of the more
  generic ``record_property`` (see the sketch after this list). (`#2770
  <https://github.com/pytest-dev/pytest/issues/2770>`_)

- Defining ``pytest_plugins`` is now deprecated in non-top-level conftest.py
  files, because they "leak" to the entire directory tree. (`#3084
  <https://github.com/pytest-dev/pytest/issues/3084>`_)
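
A minimal sketch of the migration, assuming an existing test that previously
used ``record_xml_property`` (test name, key and value are illustrative)::

    def test_export_timing(record_property):
        # record_property is the new, reporter-agnostic spelling;
        # the recorded property ends up in the test report (e.g. JUnit XML).
        record_property("example_key", 1)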


Features
--------

- New ``--show-capture`` command-line option that allows specifying how to
  display captured output when tests fail: ``no``, ``stdout``, ``stderr``,
  ``log`` or ``all`` (the default). (`#1478
  <https://github.com/pytest-dev/pytest/issues/1478>`_)

- New ``--rootdir`` command-line option to override the rules for discovering
  the root directory. See `customize
  <https://docs.pytest.org/en/latest/customize.html>`_ in the documentation for
  details. (`#1642 <https://github.com/pytest-dev/pytest/issues/1642>`_)

- Fixtures are now instantiated based on their scopes, with higher-scoped
  fixtures (such as ``session``) being instantiated before lower-scoped
  fixtures (such as ``function``). The relative order of fixtures of the same
  scope is kept unchanged, based on their declaration order and their
  dependencies (see the sketch after this list). (`#2405
  <https://github.com/pytest-dev/pytest/issues/2405>`_)

- ``record_xml_property`` renamed to ``record_property`` and is now compatible
  with xdist, markers and any reporter. The ``record_xml_property`` name is now
  deprecated. (`#2770 <https://github.com/pytest-dev/pytest/issues/2770>`_)

- New ``--nf``, ``--new-first`` options: run new tests first followed by the
  rest of the tests; in both cases tests are also sorted by the file
  modification time, with more recent files coming first. (`#3034
  <https://github.com/pytest-dev/pytest/issues/3034>`_)

- New ``--last-failed-no-failures`` command-line option that allows specifying
  the behavior of the cache plugin's ``--last-failed`` feature when no tests
  failed in the last run (or no cache was found): ``none`` or ``all`` (the
  default). (`#3139 <https://github.com/pytest-dev/pytest/issues/3139>`_)

- New ``--doctest-continue-on-failure`` command-line option to enable doctests
  to show multiple failures for each snippet, instead of stopping at the first
  failure. (`#3149 <https://github.com/pytest-dev/pytest/issues/3149>`_)

- Captured log messages are added to the ``<system-out>`` tag in the generated
  junit xml file if the ``junit_logging`` ini option is set to ``system-out``.
  If the value of this ini option is ``system-err``, the logs are written to
  ``<system-err>``. The default value for ``junit_logging`` is ``no``, meaning
  captured logs are not written to the output file. (`#3156
  <https://github.com/pytest-dev/pytest/issues/3156>`_)

- Allow the logging plugin to handle ``pytest_runtest_logstart`` and
  ``pytest_runtest_logfinish`` hooks when live logs are enabled. (`#3189
  <https://github.com/pytest-dev/pytest/issues/3189>`_)

- Passing ``--log-cli-level`` on the command line now automatically activates
  live logging. (`#3190 <https://github.com/pytest-dev/pytest/issues/3190>`_)

- Add command line option ``--deselect`` to allow deselection of individual
  tests at collection time. (`#3198
  <https://github.com/pytest-dev/pytest/issues/3198>`_)

- Captured logs are printed before entering pdb. (`#3204
  <https://github.com/pytest-dev/pytest/issues/3204>`_)

- Deselected item count is now shown before tests are run, e.g. ``collected X
  items / Y deselected``. (`#3213
  <https://github.com/pytest-dev/pytest/issues/3213>`_)

- The builtin module ``platform`` is now available for use in expressions in
  ``pytest.mark`` (see the sketch after this list). (`#3236
  <https://github.com/pytest-dev/pytest/issues/3236>`_)

- The *short test summary info* section is now displayed after tracebacks and
  warnings in the terminal. (`#3255
  <https://github.com/pytest-dev/pytest/issues/3255>`_)

- New ``--verbosity`` flag to set verbosity level explicitly. (`#3296
  <https://github.com/pytest-dev/pytest/issues/3296>`_)

- ``pytest.approx`` now accepts comparing a numpy array with a scalar, as
  sketched after this list. (`#3312
  <https://github.com/pytest-dev/pytest/issues/3312>`_)
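
A few minimal sketches of the features above; fixture and test names are
illustrative and not taken from the pytest code base.

Scope-based instantiation order::

    import pytest

    @pytest.fixture(scope="session")
    def session_fix():
        return "session"

    @pytest.fixture
    def function_fix():
        return "function"

    # Regardless of the argument order below, the session-scoped fixture is
    # now instantiated before the function-scoped one.
    def test_order(function_fix, session_fix):
        assert (session_fix, function_fix) == ("session", "function")

Using ``platform`` in a mark expression (the condition is only an example)::

    import pytest

    @pytest.mark.skipif("platform.system() == 'Windows'",
                        reason="example: not meaningful on Windows")
    def test_unix_only():
        assert True

Comparing a numpy array against a scalar with ``pytest.approx`` (assumes
numpy is installed)::

    import numpy
    import pytest

    def test_approx_array_vs_scalar():
        assert numpy.array([0.1 + 0.2, 0.2 + 0.1]) == pytest.approx(0.3)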


Bug Fixes
---------

- Suppress ``IOError`` when closing the temporary file used for capturing
  streams in Python 2.7. (`#2370
  <https://github.com/pytest-dev/pytest/issues/2370>`_)

- Fixed the ``clear()`` method on the ``caplog`` fixture, which cleared
  ``records`` but not the ``text`` property (see the sketch after this list).
  (`#3297 <https://github.com/pytest-dev/pytest/issues/3297>`_)

- During test collection, when stdin is not allowed to be read, the
  ``DontReadFromStdin`` object now allows itself to be iterated over and
  resolves to an iterator without crashing. (`#3314
  <https://github.com/pytest-dev/pytest/issues/3314>`_)
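
A minimal sketch of the fixed ``caplog.clear()`` behaviour, assuming a test
that emits one log record (the message text is illustrative)::

    import logging

    def test_caplog_clear(caplog):
        logging.getLogger().warning("something happened")
        assert caplog.text  # formatted output contains the message
        caplog.clear()
        # after the fix, both the records and the formatted text are reset
        assert not caplog.records
        assert not caplog.text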


Improved Documentation
----------------------

- Added a `reference <https://docs.pytest.org/en/latest/reference.html>`_ page
  to the docs. (`#1713 <https://github.com/pytest-dev/pytest/issues/1713>`_)


Trivial/Internal Changes
------------------------

- Change minimum requirement of ``attrs`` to ``17.4.0``. (`#3228
  <https://github.com/pytest-dev/pytest/issues/3228>`_)

- Renamed example directories so all tests pass when run from the base
  directory. (`#3245 <https://github.com/pytest-dev/pytest/issues/3245>`_)

- Internal ``mark.py`` module has been turned into a package. (`#3250
  <https://github.com/pytest-dev/pytest/issues/3250>`_)

- ``pytest`` now depends on the `more_itertools
  <https://github.com/erikrose/more-itertools>`_ package. (`#3265
  <https://github.com/pytest-dev/pytest/issues/3265>`_)

- Added warning when ``[pytest]`` section is used in a ``.cfg`` file passed
  with ``-c``. (`#3268 <https://github.com/pytest-dev/pytest/issues/3268>`_)

- ``nodeids`` can now be passed explicitly to ``FSCollector`` and ``Node``
  constructors. (`#3291 <https://github.com/pytest-dev/pytest/issues/3291>`_)

- Internal refactoring of ``FormattedExcinfo`` to use ``attrs`` facilities and
  remove old support code for legacy Python versions. (`#3292
  <https://github.com/pytest-dev/pytest/issues/3292>`_)

- Refactoring to unify how verbosity is handled internally. (`#3296
  <https://github.com/pytest-dev/pytest/issues/3296>`_)

- Internal refactoring to better integrate with argparse. (`#3304
  <https://github.com/pytest-dev/pytest/issues/3304>`_)

- Fix a Python example that calls a fixture in doc/en/usage.rst. (`#3308
  <https://github.com/pytest-dev/pytest/issues/3308>`_)


Pytest 3.4.2 (2018-03-04)
=========================

@@ -6,6 +6,7 @@ Release announcements
   :maxdepth: 2

+  release-3.5.0
   release-3.4.2
   release-3.4.1
   release-3.4.0

@@ -0,0 +1,51 @@
pytest-3.5.0
=======================================

The pytest team is proud to announce the 3.5.0 release!

pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.

This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:

    http://doc.pytest.org/en/latest/changelog.html

For complete documentation, please visit:

    http://docs.pytest.org

As usual, you can upgrade from PyPI via:

    pip install -U pytest

Thanks to all who contributed to this release, among them:

* Allan Feldman
* Brian Maissy
* Bruno Oliveira
* Carlos Jenkins
* Daniel Hahler
* Florian Bruhin
* Jason R. Coombs
* Jeffrey Rackauckas
* Jordan Speicher
* Julien Palard
* Kale Kundert
* Kostis Anagnostopoulos
* Kyle Altendorf
* Maik Figura
* Pedro Algarvio
* Ronny Pfannschmidt
* Tadeu Manoel
* Tareq Alayan
* Thomas Hisch
* William Lee
* codetriage-readme-bot
* feuillemorte
* joshm91
* mike


Happy testing,
The Pytest Development Team

@@ -15,7 +15,106 @@ For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures, type::

    $ pytest -q --fixtures

    cache
        Return a cache object that can persist state between testing sessions.

        cache.get(key, default)
        cache.set(key, value)

        Keys must be a ``/`` separated value, where the first part is usually the
        name of your plugin or application to avoid clashes with other cache users.

        Values can be any object handled by the json stdlib module.
    capsys
        Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
        captured output available via ``capsys.readouterr()`` method calls
        which return a ``(out, err)`` namedtuple. ``out`` and ``err`` will be ``text``
        objects.
    capsysbinary
        Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
        captured output available via ``capsysbinary.readouterr()`` method calls
        which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``bytes``
        objects.
    capfd
        Enable capturing of writes to file descriptors ``1`` and ``2`` and make
        captured output available via ``capfd.readouterr()`` method calls
        which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
        objects.
    capfdbinary
        Enable capturing of writes to file descriptors 1 and 2 and make
        captured output available via ``capfdbinary.readouterr()`` method calls
        which return a ``(out, err)`` tuple. ``out`` and ``err`` will be
        ``bytes`` objects.
    doctest_namespace
        Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
    pytestconfig
        Session-scoped fixture that returns the :class:`_pytest.config.Config` object.

        Example::

            def test_foo(pytestconfig):
                if pytestconfig.getoption("verbose"):
                    ...
    record_property
        Add extra properties to the calling test.
        User properties become part of the test report and are available to the
        configured reporters, like JUnit XML.
        The fixture is callable with ``(name, value)``, with value being automatically
        xml-encoded.

        Example::

            def test_function(record_property):
                record_property("example_key", 1)
    record_xml_property
        (Deprecated) use record_property.
    record_xml_attribute
        Add extra xml attributes to the tag for the calling test.
        The fixture is callable with ``(name, value)``, with value being
        automatically xml-encoded.
    caplog
        Access and control log capturing.

        Captured logs are available through the following properties/methods::

        * caplog.text -> string containing formatted log output
        * caplog.records -> list of logging.LogRecord instances
        * caplog.record_tuples -> list of (logger_name, level, message) tuples
        * caplog.clear() -> clear captured records and formatted log output string
    monkeypatch
        The returned ``monkeypatch`` fixture provides these
        helper methods to modify objects, dictionaries or ``os.environ``::

        monkeypatch.setattr(obj, name, value, raising=True)
        monkeypatch.delattr(obj, name, raising=True)
        monkeypatch.setitem(mapping, name, value)
        monkeypatch.delitem(obj, name, raising=True)
        monkeypatch.setenv(name, value, prepend=False)
        monkeypatch.delenv(name, raising=True)
        monkeypatch.syspath_prepend(path)
        monkeypatch.chdir(path)

        All modifications will be undone after the requesting
        test function or fixture has finished. The ``raising``
        parameter determines if a ``KeyError`` or ``AttributeError``
        will be raised if the set/deletion operation has no target.
    recwarn
        Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.

        See http://docs.python.org/library/warnings.html for information
        on warning categories.
    tmpdir_factory
        Return a TempdirFactory instance for the test session.
    tmpdir
        Return a temporary directory path object
        which is unique to each test function invocation,
        created as a sub directory of the base temporary
        directory. The returned object is a `py.path.local`_
        path object.

        .. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html

    no tests ran in 0.12 seconds
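
For instance, a minimal sketch using the ``monkeypatch`` fixture described
above (the environment variable name and value are illustrative)::

    import os

    def test_fake_home(monkeypatch):
        # the change is automatically undone when the test finishes
        monkeypatch.setenv("HOME", "/tmp/fake-home")
        assert os.environ["HOME"] == "/tmp/fake-home"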

You can also interactively ask for help, e.g. by typing something like the following on the Python interactive prompt::

@@ -78,7 +78,7 @@ If you then run it with ``--lf``::
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
-collected 50 items
+collected 50 items / 48 deselected
run-last-failure: rerun previous 2 failures

test_50.py FF [100%]

@@ -106,7 +106,6 @@ If you then run it with ``--lf``::
E Failed: bad luck

test_50.py:6: Failed
-=========================== 48 tests deselected ============================
================= 2 failed, 48 deselected in 0.12 seconds ==================

You have run only the two failing tests from the last run, while 48 tests have

@@ -243,6 +242,8 @@ You can always peek at the content of the cache using the
------------------------------- cache values -------------------------------
cache/lastfailed contains:
{'test_caching.py::test_function': True}
+cache/nodeids contains:
+['test_caching.py::test_function']
example/value contains:
42
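
As a sketch of how such a value might get there, using the ``cache`` object
exposed on the config (the key matches the output above; the test body is
illustrative)::

    def test_expensive_value(pytestconfig):
        # cache.get/cache.set persist values between test sessions
        value = pytestconfig.cache.get("example/value", None)
        if value is None:
            value = 42  # stand-in for an expensive computation
            pytestconfig.cache.set("example/value", value)
        assert value == 42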

@@ -34,11 +34,10 @@ You can then restrict a test run to only run tests marked with ``webtest``::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
-collecting ... collected 4 items
+collecting ... collected 4 items / 3 deselected

test_server.py::test_send_http PASSED [100%]

-============================ 3 tests deselected ============================
================== 1 passed, 3 deselected in 0.12 seconds ==================

Or the inverse, running all tests except the webtest ones::

@@ -48,13 +47,12 @@ Or the inverse, running all tests except the webtest ones::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
-collecting ... collected 4 items
+collecting ... collected 4 items / 1 deselected

test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]

-============================ 1 tests deselected ============================
================== 3 passed, 1 deselected in 0.12 seconds ==================

Selecting tests based on their node ID

@@ -133,11 +131,10 @@ select tests based on their names::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
-collecting ... collected 4 items
+collecting ... collected 4 items / 3 deselected

test_server.py::test_send_http PASSED [100%]

-============================ 3 tests deselected ============================
================== 1 passed, 3 deselected in 0.12 seconds ==================

And you can also run all tests except the ones that match the keyword::

@@ -147,13 +144,12 @@ And you can also run all tests except the ones that match the keyword::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
-collecting ... collected 4 items
+collecting ... collected 4 items / 1 deselected

test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]

-============================ 1 tests deselected ============================
================== 3 passed, 1 deselected in 0.12 seconds ==================

Or to select "http" and "quick" tests::

@@ -163,12 +159,11 @@ Or to select "http" and "quick" tests::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
-collecting ... collected 4 items
+collecting ... collected 4 items / 2 deselected

test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]

-============================ 2 tests deselected ============================
================== 2 passed, 2 deselected in 0.12 seconds ==================

.. note::

@@ -547,11 +542,10 @@ Note that if you specify a platform via the marker command-line option like this
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
-collected 4 items
+collected 4 items / 3 deselected

test_plat.py . [100%]

-============================ 3 tests deselected ============================
================== 1 passed, 3 deselected in 0.12 seconds ==================

then the unmarked tests will not be run. It is thus a way to restrict the run to the specific tests.

@@ -599,7 +593,7 @@ We can now use the ``-m`` option to select one set::
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
-collected 4 items
+collected 4 items / 2 deselected

test_module.py FF [100%]

@@ -612,7 +606,6 @@ We can now use the ``-m`` option to select one set::
test_module.py:6: in test_interface_complex
assert 0
E assert 0
-============================ 2 tests deselected ============================
================== 2 failed, 2 deselected in 0.12 seconds ==================

or to select both "event" and "interface" tests::

@@ -621,7 +614,7 @@ or to select both "event" and "interface" tests::
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
-collected 4 items
+collected 4 items / 1 deselected

test_module.py FFF [100%]

@@ -638,5 +631,4 @@ or to select both "event" and "interface" tests::
test_module.py:9: in test_event_simple
assert 0
E assert 0
-============================ 1 tests deselected ============================
================== 3 failed, 1 deselected in 0.12 seconds ==================

@@ -358,7 +358,7 @@ get on the terminal - we are working on that)::
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'

-<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:595>:1: ValueError
+<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:609>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -389,7 +389,7 @@ Now we can profile which test functions execute the slowest::
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
-0.10s call test_some_are_slow.py::test_funcfast
+0.16s call test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.12 seconds =========================

incremental testing - test steps

@@ -451,9 +451,6 @@ If we run this::
collected 4 items

test_step.py .Fx. [100%]
-========================= short test summary info ==========================
-XFAIL test_step.py::TestUserHandling::()::test_deletion
-reason: previous test failed (test_modification)

================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

@@ -465,6 +462,9 @@ If we run this::
E assert 0

test_step.py:9: AssertionError
+========================= short test summary info ==========================
+XFAIL test_step.py::TestUserHandling::()::test_deletion
+reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ===============

We'll see that ``test_deletion`` was not executed because ``test_modification``

@@ -539,7 +539,7 @@ We can run this::
file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out
E fixture 'db' not found
-> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_attribute, record_property, recwarn, tmpdir, tmpdir_factory
+> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.

$REGENDOC_TMPDIR/b/test_error.py:1

@@ -740,11 +740,11 @@ Let's run the tests in verbose mode and look at the print output::
test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED
-test_module.py::test_2[1-mod1] SETUP otherarg 1
+test_module.py::test_2[mod1-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED TEARDOWN otherarg 1

-test_module.py::test_2[2-mod1] SETUP otherarg 2
+test_module.py::test_2[mod1-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED TEARDOWN otherarg 2

@@ -752,11 +752,11 @@ Let's run the tests in verbose mode and look at the print output::
SETUP modarg mod2
RUN test1 with modarg mod2
PASSED
-test_module.py::test_2[1-mod2] SETUP otherarg 1
+test_module.py::test_2[mod2-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED TEARDOWN otherarg 1

-test_module.py::test_2[2-mod2] SETUP otherarg 2
+test_module.py::test_2[mod2-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2

@@ -51,7 +51,6 @@ Running this would result in a passed test except for the last
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.12 seconds =========================


.. _`tmpdir factory example`:

The 'tmpdir_factory' fixture

@@ -482,7 +482,7 @@ Running it will show that ``MyPlugin`` was added and its
hook was invoked::

$ python myinvoke.py
-*** test run reporting finishing
+. [100%]*** test run reporting finishing
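
For context, a minimal sketch of what a ``myinvoke.py`` along these lines
could look like (the plugin class and the choice of hook are illustrative)::

    import pytest

    class MyPlugin:
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    # run an in-process test session with the extra plugin registered
    pytest.main(["-q"], plugins=[MyPlugin()])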

.. note::