.. _`cache_provider`:
.. _cache:

Cache: working with cross-testrun state
=======================================
.. versionadded:: 2.8

Usage
---------

The plugin provides two command line options to rerun failures from the
last ``pytest`` invocation:
* ``--lf``, ``--last-failed`` - to only re-run the failures.
* ``--ff``, ``--failed-first`` - to run the failures first and then the rest of
  the tests.

For cleanup (usually not needed), a ``--cache-clear`` option allows removing
all cross-session cache contents ahead of a test run.

Other plugins may access the `config.cache`_ object to set/get
**json encodable** values between ``pytest`` invocations.
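
As a minimal sketch (the ``pytest_sessionfinish`` hook is standard; the key
name ``example/last-run`` is purely illustrative), a ``conftest.py`` could
record when a session last finished::

    # content of conftest.py -- illustrative sketch
    import datetime

    def pytest_sessionfinish(session):
        # any json-encodable value can be stored under a slash-separated key
        session.config.cache.set("example/last-run",
                                 datetime.datetime.now().isoformat())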

.. note::

    This plugin is enabled by default, but can be disabled if needed: see
    :ref:`cmdunregister` (the internal name for this plugin is
    ``cacheprovider``).
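
For example, the plugin can be disabled for a single run with::

    pytest -p no:cacheprovider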

Rerunning only failures or failures first
-----------------------------------------------

First, let's create 50 test invocations of which only 2 fail::

    # content of test_50.py
    import pytest

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")

If you run this for the first time you will see two failures::

    $ pytest -q
    .................F.......F........................                  [100%]
    ================================= FAILURES =================================
    _______________________________ test_num[17] _______________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    _______________________________ test_num[25] _______________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    2 failed, 48 passed in 0.12 seconds

If you then run it with ``--lf``::

    $ pytest --lf
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 50 items / 48 deselected
    run-last-failure: rerun previous 2 failures

    test_50.py FF                                                        [100%]

    ================================= FAILURES =================================
    _______________________________ test_num[17] _______________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    _______________________________ test_num[25] _______________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    ================= 2 failed, 48 deselected in 0.12 seconds ==================

You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").

Now, if you run with the ``--ff`` option, all tests will be run but the
previous failures will be executed first (as can be seen from the series
of ``FF`` and dots)::

    $ pytest --ff
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 50 items
    run-last-failure: rerun previous 2 failures first

    test_50.py FF................................................        [100%]

    ================================= FAILURES =================================
    _______________________________ test_num[17] _______________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    _______________________________ test_num[25] _______________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    =================== 2 failed, 48 passed in 0.12 seconds ====================

New ``--nf``, ``--new-first`` options: run new tests first, followed by the rest
of the tests; in both cases, tests are also sorted by file modification time,
with more recent files coming first.
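
For example::

    pytest --nf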

Behavior when no tests failed in the last run
---------------------------------------------

When no tests failed in the last run, or when no cached ``lastfailed`` data was
found, ``pytest`` can be configured either to run all of the tests or no tests,
using the ``--last-failed-no-failures`` option, which takes one of the following values::

    pytest --last-failed --last-failed-no-failures all    # run all tests (default behavior)
    pytest --last-failed --last-failed-no-failures none   # run no tests and exit

.. _`config.cache`:

The new config.cache object
--------------------------------

.. regendoc:wipe

Plugins or conftest.py support code can get a cached value using the
pytest ``config`` object. Here is a basic example plugin which
implements a :ref:`fixture` that re-uses previously created state
across pytest invocations::

    # content of test_caching.py
    import pytest
    import time

    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            time.sleep(9*0.6) # expensive computation :)
            val = 42
            request.config.cache.set("example/value", val)
        return val

    def test_function(mydata):
        assert mydata == 23

If you run this command once, it will take a while because
of the sleep::

    $ pytest -q
    F                                                                    [100%]
    ================================= FAILURES =================================
    ______________________________ test_function _______________________________

    mydata = 42

        def test_function(mydata):
    >       assert mydata == 23
    E       assert 42 == 23

    test_caching.py:14: AssertionError
    1 failed in 0.12 seconds

If you run it a second time the value will be retrieved from
the cache and this will be quick::

    $ pytest -q
    F                                                                    [100%]
    ================================= FAILURES =================================
    ______________________________ test_function _______________________________

    mydata = 42

        def test_function(mydata):
    >       assert mydata == 23
    E       assert 42 == 23

    test_caching.py:14: AssertionError
    1 failed in 0.12 seconds

See the :ref:`cache-api` for more details.
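
Beyond ``get``/``set``, the cache also offers ``makedir`` for storing larger
artifacts as plain files. A minimal sketch, assuming pytest 3.x where
``makedir`` returns a ``py.path.local`` object (the directory and file names
are purely illustrative)::

    # content of test_blob.py -- illustrative sketch
    import pytest

    @pytest.fixture
    def large_blob(request):
        directory = request.config.cache.makedir("example_blobs")
        path = directory.join("blob.txt")    # py.path.local API
        if not path.check():                 # compute only on a cold cache
            path.write("expensive data")
        return path.read()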

Inspecting Cache content
-------------------------------

You can always peek at the content of the cache using the
``--cache-show`` command line option::

    $ pytest --cache-show
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    cachedir: $REGENDOC_TMPDIR/.pytest_cache
    ------------------------------- cache values -------------------------------
    cache/lastfailed contains:
      {'test_caching.py::test_function': True}
    cache/nodeids contains:
      ['test_caching.py::test_function']
    example/value contains:
      42

    ======================= no tests ran in 0.12 seconds =======================

Clearing Cache content
-------------------------------

You can instruct pytest to clear all cache files and values
by adding the ``--cache-clear`` option like this::

    pytest --cache-clear

This is recommended for invocations from Continuous Integration
servers where isolation and correctness are more important
than speed.