pytest_skipping plugin
======================

advanced conditional skipping for python test functions, classes or modules.

.. contents::
  :local:

You can mark functions, classes or modules for conditional
skipping (skipif) or as expected-to-fail (xfail). The difference
between the two is that 'xfail' still executes the test function
but inverts the outcome: a passing test is reported as a failure and
a failing test as a semi-passing one. All skip conditions are
reported at the end of the test run through the terminal reporter.

.. _skipif:

skip a test function conditionally
-------------------------------------------

Here is an example for skipping a test function on Python3::

    @py.test.mark.skipif("sys.version_info >= (3,0)")
    def test_function():
        ...

The 'skipif' marker accepts an **arbitrary python expression**
as a condition. When setting up the test function the condition
is evaluated by calling ``eval(expr, namespace)``. The namespace
contains the ``sys`` and ``os`` modules as well as the
test ``config`` object. The latter allows you to skip based
on a test configuration value e.g. like this::

    @py.test.mark.skipif("not config.getvalue('db')")
    def test_function(...):
        ...
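
Because ``os`` is also available in that namespace, a condition can look
at the environment as well. The following sketch (``RUN_SLOW_TESTS`` is a
made-up variable name) skips the test unless that variable is set::

    @py.test.mark.skipif("not os.environ.get('RUN_SLOW_TESTS')")
    def test_slow_function():
        ...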

conditionally mark a function as "expected to fail"
-------------------------------------------------------

You can use the ``xfail`` keyword to mark your test functions as
'expected to fail'::

    @py.test.mark.xfail
    def test_hello():
        ...

This test will be executed but no traceback will be reported
when it fails. Instead terminal reporting will list it in the
"expected to fail" or "unexpectedly passing" sections.
As with skipif_ you may selectively expect a failure
depending on the platform or Python version::

    @py.test.mark.xfail("sys.version_info >= (3,0)")
    def test_function():
        ...

skip/xfail a whole test class or module
-------------------------------------------

Instead of marking single functions you can skip
a whole class of tests when running on a specific
platform::

    class TestSomething:
        skipif = "sys.platform == 'win32'"

Or you can mark all test functions as expected
to fail for a specific test configuration::

    xfail = "config.getvalue('db') == 'mysql'"
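
For example, assuming the attribute is honoured at module level in the same
way as on a class, a test module might look like this sketch (file name and
test names are made up)::

    # content of test_mysql_behaviour.py (hypothetical module)
    xfail = "config.getvalue('db') == 'mysql'"

    def test_insert():
        ...

    def test_delete():
        ...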

skip if a dependency cannot be imported
---------------------------------------------

You can use a helper to skip on a failing import::

    docutils = py.test.importorskip("docutils")

You can use this helper at module level or within
a test or setup function.

You can also skip if a library does not come with a high enough version::

    docutils = py.test.importorskip("docutils", minversion="0.3")

The version will be read from the specified module's ``__version__`` attribute.
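
As a sketch of the in-test usage mentioned above (the final assertion is
only a placeholder)::

    def test_optional_docutils_feature():
        docutils = py.test.importorskip("docutils", minversion="0.3")
        # only reached when docutils >= 0.3 is importable,
        # otherwise the test is reported as skipped
        assert hasattr(docutils, "__version__")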

dynamically skip from within a test or setup
-------------------------------------------------

If you want to skip the execution of a test you can call
``py.test.skip()`` within a test, a setup or from a
`funcarg factory`_ function. Example::

    def test_function():
        if not valid_config():
            py.test.skip("unsupported configuration")

.. _`funcarg factory`: ../funcargs.html#factory
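
The same call works from setup code; here is a sketch using a module-level
``setup_module`` hook together with the same made-up ``valid_config()``
helper from above::

    def setup_module(module):
        # skips every test in this module when the configuration is not usable
        if not valid_config():
            py.test.skip("unsupported configuration")

    def test_function():
        ...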

Start improving this plugin in 30 seconds
=========================================

1. Download the `pytest_skipping.py`_ plugin source code
2. Put it somewhere as ``pytest_skipping.py`` on your import path
3. A subsequent ``py.test`` run will use your local version

Check out customize_, other plugins_ or `get in contact`_.

.. include:: links.txt