.. _`skip and xfail`:
.. _skipping:
Skip and xfail: dealing with tests that cannot succeed
======================================================
You can mark test functions that cannot be run on certain platforms
or that you expect to fail so pytest can deal with them accordingly and
present a summary of the test session, while keeping the test suite *green*.
A **skip** means that you expect your test to pass only if some conditions are met,
otherwise pytest should skip running the test altogether. Common examples are skipping
Windows-only tests on non-Windows platforms, or skipping tests that depend on an external
resource which is not available at the moment (for example a database).
An **xfail** means that you expect a test to fail for some reason.
A common example is a test for a feature not yet implemented, or a bug not yet fixed.
When a test passes despite being expected to fail (marked with ``pytest.mark.xfail``),
it's an **xpass** and will be reported in the test summary.
``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
information about skipped/xfailed tests is not shown by default to avoid
cluttering the output. You can use the ``-r`` option to see details
corresponding to the "short" letters shown in the test progress:
.. code-block:: bash
pytest -rxXs # show extra info on xfailed, xpassed, and skipped tests
More details on the ``-r`` option can be found by running ``pytest -h``.
(See :ref:`how to change command line options defaults`)
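
If you want this extra report detail on every run, one option (a minimal sketch, assuming an
ini-style configuration file) is to put the ``-r`` flags into ``addopts``:

.. code-block:: ini

    # content of pytest.ini (hypothetical)
    [pytest]
    # always show extra info on xfailed, xpassed, and skipped tests
    addopts = -rxXs
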
.. _skipif:
.. _skip:
.. _`condition booleans`:
Skipping test functions
-----------------------
.. versionadded:: 2.9
The simplest way to skip a test function is to mark it with the ``skip`` decorator
which may be passed an optional ``reason``:
.. code-block:: python
@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
...
Alternatively, it is also possible to skip imperatively during test execution or setup
by calling the ``pytest.skip(reason)`` function:
.. code-block:: python
def test_function():
if not valid_config():
pytest.skip("unsupported configuration")
The imperative method is useful when it is not possible to evaluate the skip condition
during import time.
It is also possible to skip the whole module using
``pytest.skip(reason, allow_module_level=True)`` at the module level:
.. code-block:: python
import sys
import pytest
if not sys.platform.startswith("win"):
pytest.skip("skipping windows-only tests", allow_module_level=True)
**Reference**: :ref:`pytest.mark.skip ref`
``skipif``
~~~~~~~~~~
.. versionadded:: 2.0
If you wish to skip something conditionally then you can use ``skipif`` instead.
Here is an example of marking a test function to be skipped
when run on an interpreter earlier than Python 3.6:
.. code-block:: python
import sys
@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
def test_function():
...
If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``.
You can share ``skipif`` markers between modules. Consider this test module:
.. code-block:: python
# content of test_mymodule.py
import pytest

import mymodule
minversion = pytest.mark.skipif(
mymodule.__versioninfo__ < (1, 1), reason="at least mymodule-1.1 required"
)
@minversion
def test_function():
...
You can import the marker and reuse it in another test module:
.. code-block:: python
# test_myothermodule.py
from test_mymodule import minversion
@minversion
def test_anotherfunction():
...
For larger test suites it's usually a good idea to have one file
where you define the markers which you then consistently apply
throughout your test suite.
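
For example (a minimal sketch; the module name and conditions here are only illustrative),
such a shared file could look like this:

.. code-block:: python

    # content of markers.py (hypothetical shared module)
    import sys

    import pytest

    # import these in test modules and apply them as decorators
    needs_posix = pytest.mark.skipif(sys.platform == "win32", reason="requires a POSIX platform")
    needs_py36 = pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
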
Alternatively, you can use :ref:`condition strings
<string conditions>` instead of booleans, but they can't be shared between modules easily
so they are supported mainly for backward compatibility reasons.
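
For reference (a small sketch of the legacy form), the condition string is evaluated by pytest
itself rather than by Python at import time:

.. code-block:: python

    import pytest


    # legacy condition string, roughly equivalent to the boolean form shown above
    @pytest.mark.skipif("sys.version_info < (3, 6)")
    def test_function():
        ...
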
**Reference**: :ref:`pytest.mark.skipif ref`
Skip all test functions of a class or module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the ``skipif`` marker (as any other marker) on classes:
.. code-block:: python
@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
class TestPosixCalls(object):
def test_function(self):
"will not be setup or run under 'win32' platform"
If the condition is ``True``, this marker will produce a skip result for
each of the test methods of that class.
If you want to skip all test functions of a module, you may use
the ``pytestmark`` name on the global level:
.. code-block:: python
# test_module.py
pytestmark = pytest.mark.skipif(...)
If multiple ``skipif`` decorators are applied to a test function, it
will be skipped if any of the skip conditions is true.
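
For example (a minimal sketch with illustrative conditions):

.. code-block:: python

    import sys

    import pytest


    # skipped when either condition holds: on Windows, or on Python older than 3.6
    @pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
    @pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
    def test_function():
        ...
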
.. _`whole class- or module level`: mark.html#scoped-marking
Skipping files or directories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes you may need to skip an entire file or directory, for example if the
tests rely on Python version-specific features or contain code that you do not
wish pytest to run. In this case, you must exclude the files and directories
from collection. Refer to :ref:`customizing-test-collection` for more
information.
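
As a minimal sketch (the file names are hypothetical), one way to do this is to set
``collect_ignore`` in a ``conftest.py``:

.. code-block:: python

    # content of conftest.py (hypothetical)
    import sys

    # never collect this helper script
    collect_ignore = ["setup.py"]

    # only collect the Python-3-specific tests when running on Python 3
    if sys.version_info[0] < 3:
        collect_ignore.append("test_py3_features.py")
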
Skipping on a missing import dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the following helper at module level
or within a test or test setup function::
docutils = pytest.importorskip("docutils")
If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test. You can also skip based on the
version number of a library::
docutils = pytest.importorskip("docutils", minversion="0.3")
The version will be read from the specified
module's ``__version__`` attribute.
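
As a sketch of typical usage (the test file name is just an example), the returned module can be
used like a regular import, and a module-level call skips every test in the file:

.. code-block:: python

    # content of test_docutils.py (hypothetical)
    import pytest

    # skips all tests in this module if docutils >= 0.3 is not importable
    docutils = pytest.importorskip("docutils", minversion="0.3")


    def test_has_version():
        # the object returned by importorskip is the imported module itself
        assert docutils.__version__
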
Summary
~~~~~~~
Here's a quick guide on how to skip tests in a module in different situations:
1. Skip all tests in a module unconditionally:
.. code-block:: python
pytestmark = pytest.mark.skip("all tests still WIP")
2. Skip all tests in a module based on some condition:
.. code-block:: python
pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="tests for linux only")
3. Skip all tests in a module if some import is missing:
.. code-block:: python
pexpect = pytest.importorskip("pexpect")
.. _xfail:
XFail: mark test functions as expected to fail
----------------------------------------------
You can use the ``xfail`` marker to indicate that you
expect a test to fail::
@pytest.mark.xfail
def test_function():
...
This test will be run but no traceback will be reported
when it fails. Instead terminal reporting will list it in the
"expected to fail" (``XFAIL``) or "unexpectedly passing" (``XPASS``) sections.
Alternatively, you can also mark a test as ``XFAIL`` from within a test or setup function
imperatively:
.. code-block:: python
def test_function():
if not valid_config():
pytest.xfail("failing configuration (but should work)")
This will unconditionally make ``test_function`` ``XFAIL``. Note that, unlike with the marker,
no other code is executed after the ``pytest.xfail`` call. That's because it is implemented
internally by raising a known exception.
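
To illustrate the difference (a small sketch), code after a ``pytest.xfail()`` call never runs,
while a test carrying the marker executes its whole body:

.. code-block:: python

    import pytest


    def test_imperative_xfail():
        pytest.xfail("known bad configuration")
        # never reached: pytest.xfail() raises an internal exception
        assert False


    @pytest.mark.xfail(reason="known bad configuration")
    def test_marker_xfail():
        # with the marker, the whole test body still runs
        assert False
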
**Reference**: :ref:`pytest.mark.xfail ref`
.. _`xfail strict tutorial`:
``strict`` parameter
~~~~~~~~~~~~~~~~~~~~
.. versionadded:: 2.9
Neither ``XFAIL`` nor ``XPASS`` fails the test suite, unless the ``strict`` keyword-only
parameter is passed as ``True``:
.. code-block:: python
@pytest.mark.xfail(strict=True)
def test_function():
...
This will make ``XPASS`` ("unexpectedly passing") results from this test fail the test suite.
You can change the default value of the ``strict`` parameter using the
``xfail_strict`` ini option:
.. code-block:: ini
[pytest]
xfail_strict=true
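
With that ini default in place, you can still opt out for an individual test by passing
``strict=False`` explicitly (a sketch; the test name is hypothetical):

.. code-block:: python

    import pytest


    # overrides xfail_strict = true from the ini file for this one test
    @pytest.mark.xfail(strict=False)
    def test_known_flaky():
        ...
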
``reason`` parameter
~~~~~~~~~~~~~~~~~~~~
As with skipif_ you can also mark your expectation of a failure
on a particular platform:
.. code-block:: python
@pytest.mark.xfail(sys.version_info >= (3, 6), reason="python3.6 api changes")
def test_function():
...
``raises`` parameter
~~~~~~~~~~~~~~~~~~~~
If you want to be more specific as to why the test is failing, you can specify
a single exception, or a tuple of exceptions, in the ``raises`` argument.
.. code-block:: python
@pytest.mark.xfail(raises=RuntimeError)
def test_function():
...
Then the test will be reported as a regular failure if it fails with an
exception not mentioned in ``raises``.
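
The same works with a tuple of exception types (a sketch; the exceptions and reason text are
only illustrative):

.. code-block:: python

    import pytest


    # only an IndexError or a KeyError counts as the expected failure
    @pytest.mark.xfail(raises=(IndexError, KeyError), reason="known lookup bug")
    def test_function():
        ...
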
``run`` parameter
~~~~~~~~~~~~~~~~~
If a test should be marked as xfail and reported as such but should not even be
executed, set the ``run`` parameter to ``False``:
.. code-block:: python
@pytest.mark.xfail(run=False)
def test_function():
...
This is especially useful for xfailing tests that crash the interpreter and should be
investigated later.
Ignoring xfail
~~~~~~~~~~~~~~
By specifying on the commandline:
.. code-block:: bash
pytest --runxfail
you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all. This also causes ``pytest.xfail`` to produce no effect.
Examples
~~~~~~~~
Here is a simple test file with several usages:
.. literalinclude:: example/xfail_demo.py
Running it with the report-on-xfail option gives this output:
.. code-block:: pytest
example $ pytest -rx xfail_demo.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/example
collected 7 items
xfail_demo.py xxxxxxx [100%]
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
bug 110
XFAIL xfail_demo.py::test_hello5
condition: pytest.__version__[0] != "17"
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7
======================== 7 xfailed in 0.12 seconds =========================
.. _`skip/xfail with parametrize`:
Skip/xfail with parametrize
---------------------------
It is possible to apply markers like skip and xfail to individual
test instances when using parametrize:
.. code-block:: python
import pytest
@pytest.mark.parametrize(
("n", "expected"),
[
(1, 2),
pytest.param(1, 0, marks=pytest.mark.xfail),
pytest.param(1, 3, marks=pytest.mark.xfail(reason="some bug")),
(2, 3),
(3, 4),
(4, 5),
pytest.param(
10, 11, marks=pytest.mark.skipif(sys.version_info >= (3, 0), reason="py2k")
),
],
)
def test_increment(n, expected):
assert n + 1 == expected
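
If a single parameter set needs several marks, ``pytest.param`` also accepts a list in
``marks`` (a sketch with illustrative values):

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.parametrize(
        ("n", "expected"),
        [
            (1, 2),
            # one parameter set can carry several marks at once
            pytest.param(
                0,
                2,
                marks=[
                    pytest.mark.xfail(reason="some bug"),
                    pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows"),
                ],
            ),
        ],
    )
    def test_increment_again(n, expected):
        assert n + 1 == expected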