skip and xfail mechanisms
=====================================================================

You can skip or "xfail" test functions, either by marking functions
through a decorator or by calling the ``pytest.skip|xfail`` helpers.
A *skip* means that you expect your test to pass unless a certain
configuration or condition (e.g. wrong Python interpreter, missing
dependency) prevents it from running.  An *xfail* means that you
expect your test to fail because of an implementation problem.
Counting and listing *xfailing* tests separately helps to maintain
a list of known implementation problems, and you can attach info
such as a bug number or a URL to give human-readable context for
the problem.

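To make the distinction concrete, here is a minimal illustrative
sketch; the condition string, the test names and the ``reason`` text
are made up for the example and not taken from any real project::

    import pytest

    @pytest.mark.skipif("sys.version_info < (2,5)")
    def test_needs_recent_python():
        # skipped: this environment simply cannot run the test
        ...

    @pytest.mark.xfail(reason="known problem in the implementation")
    def test_known_bug():
        # expected to fail: the code under test is known to be broken
        ...
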
Usually detailed information about skipped/xfailed tests is not shown
to avoid cluttering the output.  You can use the ``-r`` option to
see details corresponding to the "short" letters shown in the
test progress::

    py.test -rxs  # show extra info on skips and xfail tests

.. _skipif:

Skipping a single function
-------------------------------------------

Here is an example for marking a test function to be skipped
when run on a Python3 interpreter::

    @pytest.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...

During test function setup the skipif condition is
evaluated by calling ``eval(expr, namespace)``.  The namespace
contains the ``sys`` and ``os`` modules as well as the test
``config`` object.  The latter allows you to skip based
on a test configuration value, for example::

    @pytest.mark.skipif("not config.getvalue('db')")
    def test_function(...):
        ...

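The ``db`` value above is not something pytest defines; it has to be
provided by your own configuration.  As a rough sketch, assuming a
hypothetical ``--db`` command line option added in a ``conftest.py``,
it could look like this::

    # conftest.py: illustrative only, the option name is made up
    def pytest_addoption(parser):
        parser.addoption("--db", action="store", default=None,
                         help="database address for tests that need a database")
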
Create a shortcut for your conditional skip decorator
at module level like this::

    win32only = pytest.mark.skipif("sys.platform != 'win32'")

    @win32only
    def test_function():
        ...

skip test functions of a class
--------------------------------------

As with all function :ref:`marking` you can do it at
`whole class- or module level`_.  Here is an example
for skipping all methods of a test class based on platform::

    class TestPosixCalls:
        pytestmark = pytest.mark.skipif("sys.platform == 'win32'")

        def test_function(self):
            "will not be setup or run under 'win32' platform"

The ``pytestmark`` attribute will be applied as a mark to each test
function of the class.  If your code targets Python 2.6 or above you
can equivalently use the skipif decorator on classes::

    @pytest.mark.skipif("sys.platform == 'win32'")
    class TestPosixCalls:

        def test_function(self):
            "will not be setup or run under 'win32' platform"

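The same mechanism works one level up: the linked section on scoped
marking covers module-level marks, and a minimal sketch (module name
chosen only for illustration) could look like this::

    # test_posix_only.py: every test in this module is skipped on Windows
    import pytest

    pytestmark = pytest.mark.skipif("sys.platform == 'win32'")
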
It is fine in general to apply multiple "skipif" decorators
to a single function - if any of the conditions apply, the
function will be skipped.

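For instance, a small sketch of stacking two conditions (both
condition strings are only examples)::

    @pytest.mark.skipif("sys.platform == 'win32'")
    @pytest.mark.skipif("sys.version_info >= (3,0)")
    def test_posix_and_py2_only():
        ...
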
.. _`whole class- or module level`: mark.html#scoped-marking

.. _xfail:

mark a test function as expected to fail
-------------------------------------------------------

You can use the ``xfail`` marker to indicate that you
expect the test to fail::

    @pytest.mark.xfail
    def test_function():
        ...

This test will be run but no traceback will be reported
when it fails.  Instead terminal reporting will list it in the
"expected to fail" or "unexpectedly passing" sections.

By specifying on the commandline::

    py.test --runxfail

you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all.

Same as with skipif_ you can also selectively expect a failure
depending on platform::

    @pytest.mark.xfail("sys.version_info >= (3,0)")
    def test_function():
        ...

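The marker also accepts keyword arguments; a brief sketch (the
``reason`` and ``run`` arguments correspond to what the demo file
below exercises, the test names are made up)::

    @pytest.mark.xfail(reason="known failure, see the project bug tracker")
    def test_with_reason():
        ...

    @pytest.mark.xfail(run=False, reason="would hang the test process")
    def test_not_even_run():
        ...
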
You can also avoid running an "xfail" test at all or
specify a reason such as a bug ID or similar.  Here is
a simple test file with several of these usages:

.. literalinclude:: example/xfail_demo.py

Running it with the report-on-xfail option gives this output::

    example $ py.test -rx xfail_demo.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev31
    test path 1: xfail_demo.py

    xfail_demo.py xxxxx
    ========================= short test summary info ==========================
    XFAIL xfail_demo.py::test_hello
    XFAIL xfail_demo.py::test_hello2
      reason: [NOTRUN]
    XFAIL xfail_demo.py::test_hello3
      condition: hasattr(os, 'sep')
    XFAIL xfail_demo.py::test_hello4
      bug 110
    XFAIL xfail_demo.py::test_hello5
      reason: reason

    ======================== 5 xfailed in 0.04 seconds =========================

imperative xfail from within a test or setup function
------------------------------------------------------

If you cannot declare xfail-conditions at import time
you can also imperatively produce an XFail-outcome from
within test or setup code.  Example::

    def test_function():
        if not valid_config():
            pytest.xfail("unsupported configuration")

skipping on a missing import dependency
--------------------------------------------------

You can use the following import helper at module level
or within a test or test setup function::

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test.  You can also skip if a library
does not come with a high enough version::

    docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified module's ``__version__`` attribute.

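Because ``importorskip`` returns the imported module, it also works
naturally inside a single test function; a small sketch (the test
body is only a placeholder)::

    def test_rst_rendering():
        docutils = pytest.importorskip("docutils", minversion="0.3")
        # reaching this point means docutils >= 0.3 is importable;
        # otherwise the test was skipped during the call above
        ...
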
imperative skip from within a test or setup function
------------------------------------------------------

If for some reason you cannot declare skip-conditions
you can also imperatively produce a Skip-outcome from
within test or setup code.  Example::

    def test_function():
        if not valid_config():
            pytest.skip("unsupported configuration")