.. _`skip and xfail`:

Skip and xfail: dealing with tests that cannot succeed
=====================================================================

If you have test functions that cannot be run on certain platforms
or that you expect to fail, you can mark them accordingly, or you
may call helper functions during the execution of setup or test functions.

A *skip* means that you expect your test to pass unless a certain
configuration or condition (e.g. wrong Python interpreter, missing
dependency) prevents it from running. An *xfail* means that your test
can run but you expect it to fail because there is an implementation problem.

py.test counts and lists *skip* and *xfail* tests separately. However,
detailed information about skipped/xfailed tests is not shown by default
to avoid cluttering the output. You can use the ``-r`` option to see
details corresponding to the "short" letters shown in the test
progress::

    py.test -rxs  # show extra info on skips and xfails

(See :ref:`how to change command line options defaults`)

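If you always want this extra detail, one way is to make ``-rxs`` part of
your default options, for instance through an ini-style configuration file
(a small sketch, assuming a ``pytest.ini`` at the root of your project)::

    # content of pytest.ini (sketch)
    [pytest]
    addopts = -rxs
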
.. _skipif:

Marking a test function to be skipped
-------------------------------------------

Here is an example of marking a test function to be skipped
when run on a Python 3 interpreter::

    import sys
    import pytest

    @pytest.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...

During test function setup the skipif condition is
evaluated by calling ``eval('sys.version_info >= (3,0)', namespace)``.
(*New in version 2.0.2*) The namespace contains all the module globals of the
test function, so that you can, for example, check the version of a
module you are using::

    import mymodule

    @pytest.mark.skipif("mymodule.__version__ < '1.2'")
    def test_function():
        ...

The test function will not be run ("skipped") if
``mymodule`` is below the specified version. The main reason
for specifying the condition as a string is that
py.test can report a summary of skip conditions.
For information on the construction of the ``namespace``
see `evaluation of skipif/xfail conditions`_.

You can of course create a shortcut for your conditional skip
decorator at module level like this::

    win32only = pytest.mark.skipif("sys.platform != 'win32'")

    @win32only
    def test_function():
        ...

Skip all test functions of a class
--------------------------------------

As with all function :ref:`marking <mark>` you can skip test functions at the
`whole class- or module level`_. Here is an example
for skipping all methods of a test class based on the platform::

    class TestPosixCalls:
        pytestmark = pytest.mark.skipif("sys.platform == 'win32'")

        def test_function(self):
            "will not be setup or run under 'win32' platform"

The ``pytestmark`` special name tells py.test to apply it to each test
function in the class. If your code targets Python 2.6 or above, you can
more naturally use the skipif decorator (and any other marker) on
classes::

    @pytest.mark.skipif("sys.platform == 'win32'")
    class TestPosixCalls:

        def test_function(self):
            "will not be setup or run under 'win32' platform"

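Marks can likewise be applied to a whole module by assigning to ``pytestmark``
at module level (a small sketch; the module name is only illustrative)::

    # content of test_posix_tools.py (illustrative module name)
    import pytest

    pytestmark = pytest.mark.skipif("sys.platform == 'win32'")

    def test_function():
        ...
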
Using multiple ``skipif`` decorators on a single function is generally fine:
if any of the conditions applies, the function will be skipped.

.. _`whole class- or module level`: mark.html#scoped-marking

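For example, a test that should be skipped both on Windows and on a Python 3
interpreter can simply carry both decorators (a small sketch)::

    @pytest.mark.skipif("sys.platform == 'win32'")
    @pytest.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...
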
.. _xfail:

Mark a test function as expected to fail
-------------------------------------------------------

You can use the ``xfail`` marker to indicate that you
expect the test to fail::

    @pytest.mark.xfail
    def test_function():
        ...

This test will be run but no traceback will be reported
when it fails. Instead terminal reporting will list it in the
"expected to fail" or "unexpectedly passing" sections.

By specifying on the command line::

    py.test --runxfail

you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all.

As with skipif_ you can also mark your expectation of a failure
on a particular platform::

    @pytest.mark.xfail("sys.version_info >= (3,0)")
    def test_function():
        ...

You can furthermore prevent the running of an "xfail" test or
specify a reason such as a bug ID or similar. Here is
a simple test file with several of these usages:

.. literalinclude:: example/xfail_demo.py

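The marker accepts keyword arguments for this, notably ``run=False`` to
prevent execution of the test and ``reason`` to record why the failure is
expected. A minimal sketch (not the demo file itself; the test names and the
bug ID are only illustrative)::

    import pytest

    @pytest.mark.xfail(run=False)
    def test_not_even_run():       # reported as xfail with "[NOTRUN]"
        ...

    @pytest.mark.xfail(reason="bug 110")
    def test_known_failure():      # runs; failure is expected (hypothetical bug ID)
        assert 0
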
Running ``xfail_demo.py`` with the report-on-xfail option gives this output::

    example $ py.test -rx xfail_demo.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.1 -- pytest-2.2.4
    collecting ... collected 6 items

    xfail_demo.py xxxxxx
    ========================= short test summary info ==========================
    XFAIL xfail_demo.py::test_hello
    XFAIL xfail_demo.py::test_hello2
      reason: [NOTRUN]
    XFAIL xfail_demo.py::test_hello3
      condition: hasattr(os, 'sep')
    XFAIL xfail_demo.py::test_hello4
      bug 110
    XFAIL xfail_demo.py::test_hello5
      condition: pytest.__version__[0] != "17"
    XFAIL xfail_demo.py::test_hello6
      reason: reason

    ======================== 6 xfailed in 0.03 seconds =========================

.. _`evaluation of skipif/xfail conditions`:

Evaluation of skipif/xfail expressions
----------------------------------------------------

.. versionadded:: 2.0.2

The evaluation of a condition string in ``pytest.mark.skipif(conditionstring)``
or ``pytest.mark.xfail(conditionstring)`` takes place in a namespace
dictionary which is constructed as follows:

* the namespace is initialized by putting the ``sys`` and ``os`` modules
  and the pytest ``config`` object into it.

* it is then updated with the module globals of the test function to which the
  expression is applied.

The pytest ``config`` object allows you to skip based on a test configuration value
which you might have added::

    @pytest.mark.skipif("not config.getvalue('db')")
    def test_function(...):
        ...

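``db`` is not a built-in value here; it stands for an option that you provide
yourself, for instance through a command line option registered in a
``conftest.py`` (a small sketch of one possible way to add such a value)::

    # content of conftest.py (sketch)
    def pytest_addoption(parser):
        # registers a --db option whose value is read via config.getvalue('db')
        parser.addoption("--db", action="store",
                         help="run tests that need a database connection")
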
Imperative xfail from within a test or setup function
------------------------------------------------------

If you cannot declare xfail conditions at import time
you can also imperatively produce an xfail outcome from
within test or setup code. Example::

    def test_function():
        if not valid_config():
            pytest.xfail("unsupported configuration")

Skipping on a missing import dependency
--------------------------------------------------

You can use the following import helper at module level
or within a test or test setup function::

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test. You can also skip based on the
version number of a library::

    docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified module's ``__version__`` attribute.

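Used inside a test function, the helper skips just that test when the
dependency is missing or too old (a small sketch; the test name and the
trailing assertion are only illustrative)::

    import pytest

    def test_with_optional_dependency():
        # skipped (not failed) if docutils is missing or older than 0.3
        docutils = pytest.importorskip("docutils", minversion="0.3")
        assert hasattr(docutils, "__version__")
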
Imperative skip from within a test or setup function
------------------------------------------------------

If for some reason you cannot declare skip conditions
you can also imperatively produce a skip outcome from
within test or setup code. Example::

    def test_function():
        if not valid_config():
            pytest.skip("unsupported configuration")

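The same call works from setup code, for example in a module-level setup
function, in which case all tests of the module are reported as skipped
(a small sketch, assuming ``valid_config()`` is your own helper)::

    def setup_module(module):
        # skipping here skips every test in this module
        if not valid_config():
            pytest.skip("unsupported configuration")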