.. _`skip and xfail`:

skip and xfail mechanisms
=====================================================================

You can skip or "xfail" test functions, either by marking functions
with a decorator or by calling the ``pytest.skip`` and ``pytest.xfail``
helpers.

A *skip* means that you expect your test to pass unless a certain
configuration or condition (e.g. wrong Python interpreter, missing
dependency) prevents it from running. An *xfail* means that you expect
your test to fail because of an implementation problem. py.test counts
and lists *xfailing* tests separately, and you can provide information
such as a bug number or a URL for human-readable problem context.

Usually detailed information about skipped/xfailed tests is not shown
to avoid cluttering the output. You can use the ``-r`` option to
see details corresponding to the "short" letters shown in the
test progress::

    py.test -rxs  # show extra info on skips and xfailed tests

(See :ref:`how to change command line options defaults`)

.. _skipif:

Skipping a single function
-------------------------------------------

Here is an example of marking a test function to be skipped
when run on a Python3 interpreter::

    import pytest

    @pytest.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...

During test function setup the skipif condition is
evaluated by calling ``eval(expr, namespace)``. The namespace
contains the ``sys`` and ``os`` modules and the test
``config`` object. The latter allows you to skip based
on a test configuration value, e.g. like this::

    @pytest.mark.skipif("not config.getvalue('db')")
    def test_function(...):
        ...

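
The evaluation step can be sketched in plain Python. This is only an
illustration of the mechanism, not pytest's actual implementation; the
``namespace`` dict and ``should_skip`` helper below are simplified
stand-ins (the real namespace also contains the ``config`` object):

```python
import os
import sys

# Simplified stand-in for the namespace pytest builds for skipif
# expressions: it exposes the sys and os modules.
namespace = {"sys": sys, "os": os}

def should_skip(expr):
    # The condition string is evaluated just like eval(expr, namespace).
    return bool(eval(expr, namespace))

print(should_skip("sys.version_info >= (3,0)"))
print(should_skip("hasattr(os, 'sep')"))
```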
2010-07-27 03:15:15 +08:00
Create a shortcut for your conditional skip decorator
2009-11-06 00:46:14 +08:00
at module level like this::
2010-11-18 05:12:16 +08:00
win32only = pytest.mark.skipif("sys.platform != 'win32'")
2009-11-06 00:46:14 +08:00
@win32only
def test_function():
...
2009-10-16 02:10:06 +08:00
skip test functions of a class
--------------------------------------

2010-10-11 05:45:45 +08:00
As with all function :ref:`marking` you can do it at
2010-07-27 03:15:15 +08:00
`whole class- or module level`_. Here is an example
2009-10-23 02:57:21 +08:00
for skipping all methods of a test class based on platform::
class TestPosixCalls:
2010-11-18 05:12:16 +08:00
pytestmark = pytest.mark.skipif("sys.platform == 'win32'")
2010-07-27 03:15:15 +08:00
2009-10-23 02:57:21 +08:00
def test_function(self):
2010-11-21 04:35:55 +08:00
"will not be setup or run under 'win32' platform"
2009-10-23 02:57:21 +08:00
2009-11-06 00:46:14 +08:00
The marker assigned to the special ``pytestmark`` name will be applied
to each test function of the class. If your code targets Python 2.6
or above you can equivalently use the skipif decorator on classes::

    @pytest.mark.skipif("sys.platform == 'win32'")
    class TestPosixCalls:

        def test_function(self):
            "will not be setup or run under 'win32' platform"

It is fine in general to apply multiple ``skipif`` decorators
to a single function: if any of the conditions applies,
the function will be skipped.

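
The "skip if any condition applies" rule can be sketched in plain
Python. The helper below is hypothetical, not pytest API; it only
illustrates the OR semantics of stacked skipif conditions:

```python
import os
import sys

# Stand-in for the evaluation namespace pytest provides.
namespace = {"sys": sys, "os": os}

def any_condition_applies(conditions):
    # Stacking several skipif decorators means the test is skipped
    # if ANY of the condition strings evaluates true.
    return any(eval(expr, namespace) for expr in conditions)

# e.g. conditions taken from two stacked skipif decorators:
conditions = ["sys.platform == 'win32'", "sys.version_info >= (2,6)"]
print(any_condition_applies(conditions))
```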
.. _`whole class- or module level`: mark.html#scoped-marking

.. _xfail:

mark a test function as expected to fail
-------------------------------------------------------

You can use the ``xfail`` marker to indicate that you
expect the test to fail::

    @pytest.mark.xfail
    def test_function():
        ...

This test will be run but no traceback will be reported
when it fails. Instead, terminal reporting will list it in the
"expected to fail" or "unexpectedly passing" sections.

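
The two outcome categories can be sketched with a minimal runner.
This is an illustration of the reporting semantics, not pytest's
internals; the function and outcome names are made up:

```python
def run_xfail_marked(test_func):
    # An xfail-marked test that raises is an expected failure
    # ("xfailed"); one that passes is reported as unexpectedly
    # passing ("xpassed"). No traceback is shown for the former.
    try:
        test_func()
    except Exception:
        return "xfailed"
    return "xpassed"

def broken_test():
    assert 0, "known implementation problem"

def working_test():
    assert 1

print(run_xfail_marked(broken_test))   # expected failure
print(run_xfail_marked(working_test))  # unexpectedly passing
```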
By specifying on the commandline::

    py.test --runxfail

you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all.

As with skipif_ you can also selectively expect a failure
depending on platform::

    @pytest.mark.xfail("sys.version_info >= (3,0)")
    def test_function():
        ...

You can also avoid running an "xfail" test at all or
specify a reason such as a bug ID or similar. Here is
a simple test file with several usages:

.. literalinclude:: example/xfail_demo.py

Running it with the report-on-xfail option gives this output::

    example $ py.test -rx xfail_demo.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.5 -- pytest-2.0.0
    collecting ... collected 5 items

    xfail_demo.py xxxxx
    ========================= short test summary info ==========================
    XFAIL xfail_demo.py::test_hello
    XFAIL xfail_demo.py::test_hello2
      reason: [NOTRUN]
    XFAIL xfail_demo.py::test_hello3
      condition: hasattr(os, 'sep')
    XFAIL xfail_demo.py::test_hello4
      bug 110
    XFAIL xfail_demo.py::test_hello5
      reason: reason
    ======================== 5 xfailed in 0.04 seconds =========================

imperative xfail from within a test or setup function
------------------------------------------------------

If you cannot declare xfail-conditions at import time
you can also imperatively produce an xfail outcome from
within test or setup code. Example::

    def test_function():
        if not valid_config():
            pytest.xfail("unsupported configuration")

skipping on a missing import dependency
--------------------------------------------------

You can use the following import helper at module level
or within a test or test setup function::

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a
skip outcome for the test. You can also skip if a library
does not come with a high enough version::

    docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified module's ``__version__`` attribute.
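
What ``importorskip`` does can be approximated with the standard
library. This is a sketch only: the ``Skipped`` exception name is a
stand-in for pytest's internal skip outcome, and the plain string
comparison of version numbers is a simplification of the real check:

```python
import importlib

class Skipped(Exception):
    """Stand-in for the outcome exception pytest raises on a skip."""

def importorskip_sketch(modname, minversion=None):
    # Try the import; on failure, signal a skip instead of an error.
    try:
        mod = importlib.import_module(modname)
    except ImportError:
        raise Skipped("could not import %r" % modname)
    version = getattr(mod, "__version__", None)
    if minversion is not None and (version is None or version < minversion):
        # Simplified: real version comparison is not plain string order.
        raise Skipped("%s is older than %s" % (modname, minversion))
    return mod

json = importorskip_sketch("json")  # succeeds: json is always importable
```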

imperative skip from within a test or setup function
------------------------------------------------------

If for some reason you cannot declare skip-conditions
you can also imperatively produce a skip outcome from
within test or setup code. Example::

    def test_function():
        if not valid_config():
            pytest.skip("unsupported configuration")