skip and xfail mechanisms
=====================================================================

You can mark test functions for a conditional *skip* or as *xfail*,
i.e. expected-to-fail.  Skipping a test means it is not run at all,
whereas an xfail-marked test is usually run; if it fails, the failure
is not reported in detail and is counted separately.  This lets you
keep track of real implementation problems, while skips are normally
tied to a condition, such as a platform or dependency requirement,
without which running the test makes no sense.  If a test fails under
all conditions, it is probably best to mark it as 'xfail'.

Running ``py.test -rxs`` shows extra reporting information on skips
and xfail-run tests at the end of a test run.

.. _skipif:

Skipping a single function
-------------------------------------------

Here is an example of marking a test function to be skipped
when run on a Python 3 interpreter::

    @py.test.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...

During test function setup the skipif condition is evaluated by
calling ``eval(expr, namespace)``.  The namespace contains the
``sys`` and ``os`` modules as well as the test ``config`` object.
The latter allows you to skip based on a test configuration value,
for example::

    @py.test.mark.skipif("not config.getvalue('db')")
    def test_function(...):
        ...
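
For this condition to evaluate to something useful, the ``db`` value
needs to come from your test configuration.  A minimal sketch,
assuming you declare a hypothetical ``--db`` command line option in a
``conftest.py``::

    # conftest.py (illustrative only; the "--db" option is an assumption)
    def pytest_addoption(parser):
        # config.getvalue('db') will be True when --db is given
        parser.addoption("--db", action="store_true",
                         help="run tests that require a database")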

Create a shortcut for your conditional skip decorator
at module level like this::

    win32only = py.test.mark.skipif("sys.platform != 'win32'")

    @win32only
    def test_function():
        ...

skip groups of test functions
--------------------------------------

As with all function :ref:`marking` you can do it at
`whole class- or module level`_.  Here is an example of skipping
all methods of a test class based on platform::

    class TestPosixCalls:
        pytestmark = py.test.mark.skipif("sys.platform == 'win32'")

        def test_function(self):
            # will not be set up or run under the 'win32' platform
            pass

The ``pytestmark`` class attribute applies the mark to each test
function in the class.  If your code targets Python 2.6 or above you
can equivalently use the skipif decorator on classes::

    @py.test.mark.skipif("sys.platform == 'win32'")
    class TestPosixCalls:

        def test_function(self):
            # will not be set up or run under the 'win32' platform
            pass

It is fine in general to apply multiple ``skipif`` decorators to a
single function: if any of the conditions applies, the function will
be skipped.
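
For instance, a small sketch (the two conditions here are purely
illustrative) that skips a test on Windows as well as on Python 3::

    @py.test.mark.skipif("sys.platform == 'win32'")
    @py.test.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...  # skipped if either condition holds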

.. _`whole class- or module level`: mark.html#scoped-marking

.. _xfail:

mark a test function as expected to fail
-------------------------------------------------------

You can use the ``xfail`` marker to indicate that you
expect the test to fail::

    @py.test.mark.xfail
    def test_function():
        ...

This test will be run but no traceback will be reported when it
fails.  Instead, terminal reporting will list it in the "expected to
fail" or "unexpectedly passing" sections.

As with skipif_, you can also selectively expect a failure
depending on platform::

    @py.test.mark.xfail("sys.version_info >= (3,0)")
    def test_function():
        ...

To not run a test and still regard it as "xfailed"::

    @py.test.mark.xfail(..., run=False)

To specify an explicit reason to be shown with xfailure detail::

    @py.test.mark.xfail(..., reason="my reason")
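
A condition, a reason and ``run`` can also be combined; a purely
illustrative sketch (condition and reason are made up for the example)::

    @py.test.mark.xfail("sys.platform == 'win32'",
                        reason="fails on win32", run=False)
    def test_function():
        ...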

By specifying on the command line::

    py.test --runxfail

you can force the running and reporting of a runnable ``xfail``-marked test.

imperative xfail from within a test or setup function
----------------------------------------------------------

If you cannot declare xfail conditions at import time, you can also
imperatively produce an xfail outcome from within test or setup code.
Example::

    def test_function():
        if not valid_config():
            py.test.xfail("unsupported configuration")

skipping on a missing import dependency
--------------------------------------------------

You can use the following import helper at module level
or within a test or test setup function::

    docutils = py.test.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a skip
outcome for the test.  You can also skip depending on whether a
library comes with a high enough version::

    docutils = py.test.importorskip("docutils", minversion="0.3")

The version will be read from the specified module's ``__version__`` attribute.
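
Because the helper returns the imported module, it can also be called
inside a test function; a small sketch (the test body is only
illustrative)::

    def test_rst_rendering():
        docutils = py.test.importorskip("docutils")
        # this line is only reached when docutils is importable
        assert docutils.__version__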

imperative skip from within a test or setup function
--------------------------------------------------------

If for some reason you cannot declare skip conditions, you can also
imperatively produce a skip outcome from within test or setup code.
Example::

    def test_function():
        if not valid_config():
            py.test.skip("unsupported configuration")
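
The same call also works in setup code; a minimal sketch, assuming a
module-level setup function::

    def setup_module(module):
        # skips the tests in this module when the configuration is invalid
        if not valid_config():
            py.test.skip("unsupported configuration")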