.. _`skip and xfail`:
.. _skipping:

Skip and xfail: dealing with tests that cannot succeed
=====================================================================

You can mark test functions that cannot be run on certain platforms
or that you expect to fail so pytest can deal with them accordingly and
present a summary of the test session, while keeping the test suite *green*.

A **skip** means that you expect your test to pass only if some conditions are met,
otherwise pytest should skip running the test altogether. Common examples are skipping
windows-only tests on non-windows platforms, or skipping tests that depend on an external
resource which is not available at the moment (for example a database).

An **xfail** means that you expect a test to fail for some reason.
A common example is a test for a feature not yet implemented, or a bug not yet fixed.

``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
information about skipped/xfailed tests is not shown by default to avoid
cluttering the output. You can use the ``-r`` option to see details
corresponding to the "short" letters shown in the test progress::

    pytest -rxs  # show extra info on skips and xfails

(See :ref:`how to change command line options defaults`)

.. _skipif:
.. _`condition booleans`:

Skipping test functions
-----------------------

.. versionadded:: 2.9

The simplest way to skip a test function is to mark it with the ``skip`` decorator
which may be passed an optional ``reason``:

.. code-block:: python

    @pytest.mark.skip(reason="no way of currently testing this")
    def test_the_unknown():
        ...

Alternatively, it is also possible to skip imperatively during test execution or setup
by calling the ``pytest.skip(reason)`` function:

.. code-block:: python

    def test_function():
        if not valid_config():
            pytest.skip("unsupported configuration")

The imperative method is useful when it is not possible to evaluate the skip condition
during import time.
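
For example, whether an external tool is installed can only be probed at run
time; here is a minimal sketch (the ``psql`` probe and the
``db_client_available`` helper are illustrative assumptions, not pytest API):

```python
import shutil

import pytest


# Hypothetical helper, not pytest API: whether the client binary exists is
# only known at run time, so a collection-time ``skipif`` cannot decide it.
def db_client_available():
    return shutil.which("psql") is not None


def test_query():
    if not db_client_available():
        pytest.skip("database client not installed")
    # ... talk to the database here ...
```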

``skipif``
~~~~~~~~~~

.. versionadded:: 2.0

If you wish to skip something conditionally then you can use ``skipif`` instead.
Here is an example of marking a test function to be skipped
when run on an interpreter older than Python 3.3::

    import sys
    @pytest.mark.skipif(sys.version_info < (3,3),
                        reason="requires python3.3")
    def test_function():
        ...

If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``.

You can share ``skipif`` markers between modules. Consider this test module::

    # content of test_mymodule.py
    import mymodule
    minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
                                    reason="at least mymodule-1.1 required")

    @minversion
    def test_function():
        ...

You can import the marker and reuse it in another test module::

    # test_myothermodule.py
    from test_mymodule import minversion

    @minversion
    def test_anotherfunction():
        ...

For larger test suites it's usually a good idea to have one file
where you define the markers which you then consistently apply
throughout your test suite.
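
Such a shared file might look like this (a sketch; the ``markers.py`` name
and the marker names are hypothetical choices, not a pytest convention):

```python
# content of markers.py -- a hypothetical shared-markers module
import sys

import pytest

# skip unless we are running on Windows
win_only = pytest.mark.skipif(sys.platform != "win32",
                              reason="windows-only test")

# skip on interpreters older than Python 3.3
py33 = pytest.mark.skipif(sys.version_info < (3, 3),
                          reason="requires python3.3")
```

Test modules can then import and apply these markers just like ``minversion``
above.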

Alternatively, you can use :ref:`condition strings
<string conditions>` instead of booleans, but they can't be shared between modules easily
so they are supported mainly for backward compatibility reasons.

Skip all test functions of a class or module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use the ``skipif`` marker (as any other marker) on classes::

    @pytest.mark.skipif(sys.platform == 'win32',
                        reason="does not run on windows")
    class TestPosixCalls(object):

        def test_function(self):
            "will not be setup or run under 'win32' platform"

If the condition is ``True``, this marker will produce a skip result for
each of the test methods of that class.

If you want to skip all test functions of a module, you may use
the ``pytestmark`` name on the global level:

.. code-block:: python

    # test_module.py
    pytestmark = pytest.mark.skipif(...)

If multiple ``skipif`` decorators are applied to a test function, it
will be skipped if any of the skip conditions is true.
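
For example, stacking two markers means the test runs only when neither
condition holds (the conditions below are illustrative):

```python
import sys

import pytest


# The two conditions combine as a logical OR: skipped on Windows *or* on
# interpreters older than Python 3.3.
@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
@pytest.mark.skipif(sys.version_info < (3, 3), reason="requires python3.3")
def test_combined():
    ...
```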

.. _`whole class- or module level`: mark.html#scoped-marking

Skipping on a missing import dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use the following helper at module level
or within a test or test setup function::

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test. You can also skip based on the
version number of a library::

    docutils = pytest.importorskip("docutils", minversion="0.3")

The version will be read from the specified
module's ``__version__`` attribute.
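
As a small sketch, ``pytest.importorskip`` returns the imported module, so
you can use it like a regular import (here with the stdlib ``json`` module so
the import always succeeds; with a genuinely missing name the assignment line
would end in a skip outcome instead):

```python
import pytest

# "json" is always importable, so this simply returns the module object;
# for a missing module the call raises pytest's internal skip outcome
# rather than ImportError.
json = pytest.importorskip("json")


def test_roundtrip():
    assert json.loads(json.dumps({"answer": 42})) == {"answer": 42}
```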

Summary
~~~~~~~

Here's a quick guide on how to skip tests in a module in different situations:

1. Skip all tests in a module unconditionally:

   .. code-block:: python

      pytestmark = pytest.mark.skip('all tests still WIP')

2. Skip all tests in a module based on some condition:

   .. code-block:: python

      pytestmark = pytest.mark.skipif(sys.platform == 'win32',
                                      reason='tests for linux only')

3. Skip all tests in a module if some import is missing:

   .. code-block:: python

      pexpect = pytest.importorskip('pexpect')

.. _xfail:

XFail: mark test functions as expected to fail
----------------------------------------------
|
2009-10-15 22:18:57 +08:00
|
|
|
|
2009-10-23 02:57:21 +08:00
|
|
|
You can use the ``xfail`` marker to indicate that you
|
2016-02-15 06:45:55 +08:00
|
|
|
expect a test to fail::
|
2009-10-15 22:18:57 +08:00
|
|
|
|
2010-11-18 05:12:16 +08:00
|
|
|
@pytest.mark.xfail
|
2009-10-23 02:57:21 +08:00
|
|
|
def test_function():
|
2009-10-15 22:18:57 +08:00
|
|
|
...
|
|
|
|
|
2009-10-23 02:57:21 +08:00
|
|
|
This test will be run but no traceback will be reported
|
2009-10-15 22:18:57 +08:00
|
|
|
when it fails. Instead terminal reporting will list it in the
|
2016-02-15 06:45:55 +08:00
|
|
|
"expected to fail" (``XFAIL``) or "unexpectedly passing" (``XPASS``) sections.

Alternatively, you can also mark a test as ``XFAIL`` from within a test or setup function
imperatively:

.. code-block:: python

    def test_function():
        if not valid_config():
            pytest.xfail("failing configuration (but should work)")

This will unconditionally make ``test_function`` ``XFAIL``. Note that, unlike with the
marker, no other code is executed after the ``pytest.xfail`` call. That's because it is
implemented internally by raising a known exception.
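
That implementation detail can be observed directly; a minimal sketch (the
``probe`` helper is purely illustrative):

```python
import pytest


def probe():
    ran_after_xfail = []
    try:
        pytest.xfail("demo")
        ran_after_xfail.append("unreachable")  # never executes
    except BaseException:
        pass  # pytest.xfail raised its internal outcome exception
    return ran_after_xfail
```

Since the outcome exception propagates immediately, the list stays empty.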

Here's the signature of the ``xfail`` **marker** (not the function), using Python 3 keyword-only
arguments syntax:

.. code-block:: python

    def xfail(condition=None, *, reason=None, raises=None, run=True, strict=False):
        ...

``strict`` parameter
~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.9

Both ``XFAIL`` and ``XPASS`` don't fail the test suite, unless the ``strict`` keyword-only
parameter is passed as ``True``:

.. code-block:: python

    @pytest.mark.xfail(strict=True)
    def test_function():
        ...
|
2010-11-21 04:35:55 +08:00
|
|
|
|
2016-02-15 06:45:55 +08:00
|
|
|
|
|
|
|
This will make ``XPASS`` ("unexpectedly passing") results from this test to fail the test suite.
|
|
|
|
|
|
|
|
You can change the default value of the ``strict`` parameter using the
|
|
|
|
``xfail_strict`` ini option:
|
|
|
|
|
|
|
|
.. code-block:: ini
|
|
|
|
|
|
|
|
[pytest]
|
|
|
|
xfail_strict=true

``reason`` parameter
~~~~~~~~~~~~~~~~~~~~

As with skipif_ you can also mark your expectation of a failure
on a particular platform::

    @pytest.mark.xfail(sys.version_info >= (3,3),
                       reason="python3.3 api changes")
    def test_function():
        ...

``raises`` parameter
~~~~~~~~~~~~~~~~~~~~

If you want to be more specific as to why the test is failing, you can specify
a single exception, or a list of exceptions, in the ``raises`` argument.

.. code-block:: python

    @pytest.mark.xfail(raises=RuntimeError)
    def test_function():
        ...

Then the test will be reported as a regular failure if it fails with an
exception not mentioned in ``raises``.

``run`` parameter
~~~~~~~~~~~~~~~~~

If a test should be marked as xfail and reported as such but should not even
be executed, set the ``run`` parameter to ``False``:

.. code-block:: python

    @pytest.mark.xfail(run=False)
    def test_function():
        ...

This is especially useful for xfailing tests that are crashing the interpreter and should be
investigated later.
|
2016-02-15 06:45:55 +08:00
|
|
|
|
|
|
|
|
2017-06-01 06:51:47 +08:00
|
|
|
Ignoring xfail
|
|
|
|
~~~~~~~~~~~~~~
|
2016-02-15 06:45:55 +08:00
|
|
|
|
|
|
|
By specifying on the commandline::
|
|
|
|
|
|
|
|
pytest --runxfail
|
|
|
|
|
|
|
|
you can force the running and reporting of an ``xfail`` marked test
|
2017-06-01 06:51:47 +08:00
|
|
|
as if it weren't marked at all. This also causes ``pytest.xfail`` to produce no effect.

Examples
~~~~~~~~

Here is a simple test file with several usages:

.. literalinclude:: example/xfail_demo.py

Running it with the report-on-xfail option gives this output::

    example $ pytest -rx xfail_demo.py
    ======= test session starts ========
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR/example, inifile:
    collected 7 items

    xfail_demo.py xxxxxxx

    ======= short test summary info ========
    XFAIL xfail_demo.py::test_hello
    XFAIL xfail_demo.py::test_hello2
      reason: [NOTRUN]
    XFAIL xfail_demo.py::test_hello3
      condition: hasattr(os, 'sep')
    XFAIL xfail_demo.py::test_hello4
      bug 110
    XFAIL xfail_demo.py::test_hello5
      condition: pytest.__version__[0] != "17"
    XFAIL xfail_demo.py::test_hello6
      reason: reason
    XFAIL xfail_demo.py::test_hello7

    ======= 7 xfailed in 0.12 seconds ========

.. _`skip/xfail with parametrize`:

Skip/xfail with parametrize
---------------------------

It is possible to apply markers like skip and xfail to individual
test instances when using parametrize:

.. code-block:: python

    import sys
    import pytest

    @pytest.mark.parametrize(("n", "expected"), [
        (1, 2),
        pytest.param(1, 0, marks=pytest.mark.xfail),
        pytest.param(1, 3, marks=pytest.mark.xfail(reason="some bug")),
        (2, 3),
        (3, 4),
        (4, 5),
        pytest.param(10, 11, marks=pytest.mark.skipif(sys.version_info >= (3, 0), reason="py2k")),
    ])
    def test_increment(n, expected):
        assert n + 1 == expected

.. _string conditions:

Conditions as strings instead of booleans
-----------------------------------------

Prior to pytest-2.4 the only way to specify skipif/xfail conditions was
to use strings::
2009-10-15 22:18:57 +08:00
|
|
|
|
2013-05-08 00:40:26 +08:00
|
|
|
import sys
|
|
|
|
@pytest.mark.skipif("sys.version_info >= (3,3)")
|
2009-10-15 22:18:57 +08:00
|
|
|
def test_function():
|
2013-05-08 00:40:26 +08:00
|
|
|
...
|
|
|
|
|
2013-07-07 00:54:24 +08:00
|
|
|
During test function setup the skipif condition is evaluated by calling
|
|
|
|
``eval('sys.version_info >= (3,0)', namespace)``. The namespace contains
|
2013-05-08 00:40:26 +08:00
|
|
|
all the module globals, and ``os`` and ``sys`` as a minimum.
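
A minimal sketch of that evaluation (simplified: the real namespace also
includes the pytest ``config`` object and the test module's globals):

```python
import os
import sys

# A namespace like the one pytest builds for condition strings.
namespace = {"os": os, "sys": sys}
result = eval("sys.version_info >= (3,3)", namespace)
```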

Since pytest-2.4 `condition booleans`_ are considered preferable
because markers can then be freely imported between test modules.
With strings you need to import not only the marker but all variables
used by the marker, which violates encapsulation.

The reason for specifying the condition as a string was that ``pytest`` can
report a summary of skip conditions based purely on the condition string.
With conditions as booleans you are required to specify a ``reason`` string.

Note that string conditions will remain fully supported and you are free
to use them if you have no need for cross-importing markers.

The evaluation of a condition string in ``pytest.mark.skipif(conditionstring)``
or ``pytest.mark.xfail(conditionstring)`` takes place in a namespace
dictionary which is constructed as follows:

* the namespace is initialized by putting the ``sys`` and ``os`` modules
  and the pytest ``config`` object into it.

* updated with the module globals of the test function for which the
  expression is applied.

The pytest ``config`` object allows you to skip based on a test
configuration value which you might have added::

    @pytest.mark.skipif("not config.getvalue('db')")
    def test_function(...):
        ...

The equivalent with "boolean conditions" is::

    @pytest.mark.skipif(not pytest.config.getvalue("db"),
                        reason="--db was not specified")
    def test_function(...):
        pass

.. note::

    You cannot use ``pytest.config.getvalue()`` in code
    imported before pytest's argument parsing takes place. For example,
    ``conftest.py`` files are imported before command line parsing and thus
    ``config.getvalue()`` will not execute correctly.