.. _`skip and xfail`:
.. _skipping:

Skip and xfail: dealing with tests that cannot succeed
======================================================

You can mark test functions that cannot be run on certain platforms
or that you expect to fail so pytest can deal with them accordingly and
present a summary of the test session, while keeping the test suite *green*.

A **skip** means that you expect your test to pass only if some conditions are met,
otherwise pytest should skip running the test altogether. Common examples are skipping
Windows-only tests on non-Windows platforms, or skipping tests that depend on an external
resource which is not available at the moment (for example a database).

An **xfail** means that you expect a test to fail for some reason.
A common example is a test for a feature not yet implemented, or a bug not yet fixed.
When a test passes despite being expected to fail (marked with ``pytest.mark.xfail``),
it's an **xpass** and will be reported in the test summary.

``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
information about skipped/xfailed tests is not shown by default to avoid
cluttering the output. You can use the ``-r`` option to see details
corresponding to the "short" letters shown in the test progress:

.. code-block:: bash

    pytest -rxXs  # show extra info on xfailed, xpassed, and skipped tests

More details on the ``-r`` option can be found by running ``pytest -h``.

(See :ref:`how to change command line options defaults`)
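
If you always want this level of detail, you can make it the default through the
``addopts`` ini option; a minimal sketch, assuming a ``pytest.ini`` at the root of
your project:

.. code-block:: ini

    [pytest]
    addopts = -rxXs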

.. _skipif:
.. _skip:
.. _`condition booleans`:

Skipping test functions
-----------------------

The simplest way to skip a test function is to mark it with the ``skip`` decorator,
which may be passed an optional ``reason``:

.. code-block:: python

    import pytest


    @pytest.mark.skip(reason="no way of currently testing this")
    def test_the_unknown():
        ...

Alternatively, it is also possible to skip imperatively during test execution or setup
by calling the ``pytest.skip(reason)`` function:

.. code-block:: python

    import pytest


    def test_function():
        if not valid_config():
            pytest.skip("unsupported configuration")

The imperative method is useful when it is not possible to evaluate the skip condition
at import time.
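
``pytest.skip`` can also be called during setup, for example from a fixture, so that
every test depending on an unavailable resource is skipped. A minimal sketch, where
``connect_to_test_database`` is a hypothetical helper that returns ``None`` when the
database is unreachable:

.. code-block:: python

    import pytest


    @pytest.fixture
    def database():
        db = connect_to_test_database()  # hypothetical helper
        if db is None:
            pytest.skip("test database is not reachable")
        return db


    def test_query(database):
        # runs only when the database fixture did not skip
        assert database is not None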

It is also possible to skip the whole module using
``pytest.skip(reason, allow_module_level=True)`` at the module level:

.. code-block:: python

    import sys

    import pytest

    if not sys.platform.startswith("win"):
        pytest.skip("skipping windows-only tests", allow_module_level=True)

**Reference**: :ref:`pytest.mark.skip ref`

``skipif``
~~~~~~~~~~

If you wish to skip something conditionally then you can use ``skipif`` instead.
Here is an example of marking a test function to be skipped
when run on an interpreter earlier than Python 3.6:

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
    def test_function():
        ...

If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``.

You can share ``skipif`` markers between modules. Consider this test module:

.. code-block:: python

    # content of test_mymodule.py
    import pytest

    import mymodule

    minversion = pytest.mark.skipif(
        mymodule.__versioninfo__ < (1, 1), reason="at least mymodule-1.1 required"
    )


    @minversion
    def test_function():
        ...

You can import the marker and reuse it in another test module:

.. code-block:: python

    # test_myothermodule.py
    from test_mymodule import minversion


    @minversion
    def test_anotherfunction():
        ...

For larger test suites it's usually a good idea to have one file
where you define the markers which you then consistently apply
throughout your test suite.
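
A minimal sketch of such a shared module; the file name and the specific conditions
are illustrative assumptions:

.. code-block:: python

    # content of markers.py (hypothetical shared module)
    import sys

    import pytest

    skip_on_windows = pytest.mark.skipif(sys.platform == "win32", reason="POSIX only")
    requires_py36 = pytest.mark.skipif(
        sys.version_info < (3, 6), reason="requires Python 3.6+"
    )

Test modules can then do ``from markers import skip_on_windows`` and apply the
markers as decorators.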

Alternatively, you can use :ref:`condition strings
<string conditions>` instead of booleans, but they can't be shared between modules easily
so they are supported mainly for backward compatibility reasons.
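
For illustration, the condition below is a string that pytest evaluates itself
(with modules such as ``sys`` and ``os`` available in the evaluation namespace),
rather than a boolean evaluated at import time:

.. code-block:: python

    import pytest


    @pytest.mark.skipif("sys.version_info < (3, 6)")
    def test_function():
        ...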

**Reference**: :ref:`pytest.mark.skipif ref`

Skip all test functions of a class or module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use the ``skipif`` marker (as any other marker) on classes:

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
    class TestPosixCalls:
        def test_function(self):
            "will not be set up or run under 'win32' platform"

If the condition is ``True``, this marker will produce a skip result for
each of the test methods of that class.

If you want to skip all test functions of a module, you may use
the ``pytestmark`` name on the global level:

.. code-block:: python

    # test_module.py
    import pytest

    pytestmark = pytest.mark.skipif(...)

If multiple ``skipif`` decorators are applied to a test function, it
will be skipped if any of the skip conditions is true.
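
A brief sketch with two stacked conditions; the test runs only when neither
condition holds:

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
    @pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
    def test_function():
        ...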

.. _`whole class- or module level`: mark.html#scoped-marking

Skipping files or directories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes you may need to skip an entire file or directory, for example if the
tests rely on Python version-specific features or contain code that you do not
wish pytest to run. In this case, you must exclude the files and directories
from collection. Refer to :ref:`customizing-test-collection` for more
information.
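
One way to exclude files from collection is the ``collect_ignore`` and
``collect_ignore_glob`` variables in a ``conftest.py``; a minimal sketch, where
the listed paths are illustrative:

.. code-block:: python

    # content of conftest.py
    import sys

    collect_ignore = ["setup.py"]  # never collect setup.py
    if sys.version_info[0] > 2:
        # skip Python 2-only test files when running under Python 3
        collect_ignore_glob = ["*_py2.py"]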

Skipping on a missing import dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can skip tests on a missing import by using :ref:`pytest.importorskip ref`
at module level, within a test, or within a test setup function.

.. code-block:: python

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a skip outcome of
the test. You can also skip based on the version number of a library:

.. code-block:: python

    docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified
module's ``__version__`` attribute.
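
Since ``pytest.importorskip`` returns the imported module, it can also guard a single
test from within its body; a small sketch, assuming the optional PyYAML dependency:

.. code-block:: python

    import pytest


    def test_optional_yaml_feature():
        yaml = pytest.importorskip("yaml")  # skip only this test if PyYAML is absent
        assert yaml.safe_load("a: 1") == {"a": 1}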

Summary
~~~~~~~

Here's a quick guide on how to skip tests in a module in different situations:

1. Skip all tests in a module unconditionally:

   .. code-block:: python

       pytestmark = pytest.mark.skip("all tests still WIP")

2. Skip all tests in a module based on some condition:

   .. code-block:: python

       pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="tests for linux only")

3. Skip all tests in a module if some import is missing:

   .. code-block:: python

       pexpect = pytest.importorskip("pexpect")

.. _xfail:

XFail: mark test functions as expected to fail
----------------------------------------------

You can use the ``xfail`` marker to indicate that you
expect a test to fail:

.. code-block:: python

    import pytest


    @pytest.mark.xfail
    def test_function():
        ...

This test will run but no traceback will be reported when it fails. Instead, terminal
reporting will list it in the "expected to fail" (``XFAIL``) or "unexpectedly
passing" (``XPASS``) sections.

Alternatively, you can mark a test as ``XFAIL`` from within the test or its setup function
imperatively:

.. code-block:: python

    import pytest


    def test_function():
        if not valid_config():
            pytest.xfail("failing configuration (but should work)")

.. code-block:: python

    import pytest


    def test_function2():
        import slow_module

        if slow_module.slow_function():
            pytest.xfail("slow_module taking too long")

These two examples illustrate situations where you don't want to check for a condition
at module level, which is when the conditions for marks would otherwise be evaluated.

This will make ``test_function`` ``XFAIL``. Note that, unlike with the marker, no other
code is executed after the ``pytest.xfail`` call. That's because it is implemented
internally by raising a known exception.
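
A small sketch making that behavior explicit; the final assertion is never reached:

.. code-block:: python

    import pytest


    def test_xfail_stops_execution():
        pytest.xfail("known issue, see bug tracker")
        assert False  # never executed: pytest.xfail raised above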

**Reference**: :ref:`pytest.mark.xfail ref`

.. _`xfail strict tutorial`:

``strict`` parameter
~~~~~~~~~~~~~~~~~~~~

By default, neither ``XFAIL`` nor ``XPASS`` fails the test suite.
You can change this by setting the ``strict`` keyword-only parameter to ``True``:

.. code-block:: python

    import pytest


    @pytest.mark.xfail(strict=True)
    def test_function():
        ...

This will make ``XPASS`` ("unexpectedly passing") results from this test fail the test suite.

You can change the default value of the ``strict`` parameter using the
``xfail_strict`` ini option:

.. code-block:: ini

    [pytest]
    xfail_strict=true
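
An explicit ``strict`` argument on an individual marker takes precedence over the
ini default, so a known flaky test can still opt out; a brief sketch:

.. code-block:: python

    import pytest


    @pytest.mark.xfail(strict=False)  # overrides xfail_strict=true for this test only
    def test_flaky_feature():
        ...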

``reason`` parameter
~~~~~~~~~~~~~~~~~~~~

As with skipif_ you can also mark your expectation of a failure
on a particular platform:

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.xfail(sys.version_info >= (3, 6), reason="python3.6 api changes")
    def test_function():
        ...

``raises`` parameter
~~~~~~~~~~~~~~~~~~~~

If you want to be more specific as to why the test is failing, you can specify
a single exception, or a tuple of exceptions, in the ``raises`` argument:

.. code-block:: python

    import pytest


    @pytest.mark.xfail(raises=RuntimeError)
    def test_function():
        ...

Then the test will be reported as a regular failure if it fails with an
exception not mentioned in ``raises``.

``run`` parameter
~~~~~~~~~~~~~~~~~

If a test should be marked as xfail and reported as such but should not even be
executed, set the ``run`` parameter to ``False``:

.. code-block:: python

    import pytest


    @pytest.mark.xfail(run=False)
    def test_function():
        ...

This is especially useful for xfailing tests that crash the interpreter and should be
investigated later.

Ignoring xfail
~~~~~~~~~~~~~~

By specifying on the command line:

.. code-block:: bash

    pytest --runxfail

you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all. This also causes ``pytest.xfail`` to have no effect.

Examples
~~~~~~~~

Here is a simple test file demonstrating the various usages:

.. literalinclude:: example/xfail_demo.py

Running it with the report-on-xfail option gives this output:

.. code-block:: pytest

    example $ pytest -rx xfail_demo.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR/example
    collected 7 items

    xfail_demo.py xxxxxxx                                                [100%]

    ========================= short test summary info ==========================
    XFAIL xfail_demo.py::test_hello
    XFAIL xfail_demo.py::test_hello2
      reason: [NOTRUN]
    XFAIL xfail_demo.py::test_hello3
      condition: hasattr(os, 'sep')
    XFAIL xfail_demo.py::test_hello4
      bug 110
    XFAIL xfail_demo.py::test_hello5
      condition: pytest.__version__[0] != "17"
    XFAIL xfail_demo.py::test_hello6
      reason: reason
    XFAIL xfail_demo.py::test_hello7
    ============================ 7 xfailed in 0.12s ============================

.. _`skip/xfail with parametrize`:

Skip/xfail with parametrize
---------------------------

It is possible to apply markers like skip and xfail to individual
test instances when using parametrize:

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.parametrize(
        ("n", "expected"),
        [
            (1, 2),
            pytest.param(1, 0, marks=pytest.mark.xfail),
            pytest.param(1, 3, marks=pytest.mark.xfail(reason="some bug")),
            (2, 3),
            (3, 4),
            (4, 5),
            pytest.param(
                10, 11, marks=pytest.mark.skipif(sys.version_info >= (3, 0), reason="py2k")
            ),
        ],
    )
    def test_increment(n, expected):
        assert n + 1 == expected
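
``pytest.param`` also accepts a list for ``marks``, so a single test instance can
carry several markers at once; a brief sketch with illustrative conditions:

.. code-block:: python

    import sys

    import pytest


    @pytest.mark.parametrize(
        "value",
        [
            0,
            pytest.param(
                1,
                marks=[
                    pytest.mark.xfail(reason="known rounding bug"),  # illustrative
                    pytest.mark.skipif(sys.platform == "win32", reason="POSIX only"),
                ],
            ),
        ],
    )
    def test_value(value):
        assert value >= 0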