cache: working with cross-testrun state
=======================================

.. versionadded:: 2.8

.. warning::

    The functionality of this core plugin was previously distributed
    as a third party plugin named ``pytest-cache``.  The core plugin
    is compatible regarding command line options and API usage except that you
    can only store/receive data between test runs that is json-serializable.

Usage
---------

Plugins can access the `config.cache`_ object,
which helps share **json encodable** values between ``py.test`` invocations.

The plugin provides two options to rerun failures, namely:

* ``--lf`` (last failures) - to only re-run the failures.
* ``--ff`` (failures first) - to run the failures first and then the rest of
  the tests.

For cleanup (usually not needed), a ``--clearcache`` option allows removing
all cross-session cache contents ahead of a test run.
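
For quick reference, the options described above are used like this::

    py.test --lf          # rerun only the tests that failed last time
    py.test --ff          # run all tests, but last-time failures first
    py.test --clearcache  # remove all cached state before the test run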

Rerunning only failures or failures first
-----------------------------------------------

First, let's create 50 test invocations of which only 2 fail::

    # content of test_50.py
    import pytest

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")

If you run this for the first time you will see two failures::

    $ py.test -q
    .................F.......F........................
    =================================== FAILURES ===================================
    _________________________________ test_num[17] _________________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17,25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    _________________________________ test_num[25] _________________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17,25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed

If you then run it with ``--lf`` you will run only the two failing tests
from the last run::

    $ py.test --lf
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    run-last-failure: rerun last 2 failures
    plugins: cache
    collected 50 items

    test_50.py FF

    =================================== FAILURES ===================================
    _________________________________ test_num[17] _________________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17,25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    _________________________________ test_num[25] _________________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17,25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    =================== 2 failed, 48 deselected in 0.02 seconds ====================

The last line indicates that 48 tests have not been run.

If you run with the ``--ff`` option, all tests will be run but the previous
failures will be executed first (as can be seen from the series of ``FF`` and
dots)::

    $ py.test --ff
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    run-last-failure: rerun last 2 failures first
    plugins: cache
    collected 50 items

    test_50.py FF................................................

    =================================== FAILURES ===================================
    _________________________________ test_num[17] _________________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17,25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    _________________________________ test_num[25] _________________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17,25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:6: Failed
    ===================== 2 failed, 48 passed in 0.07 seconds ======================

.. _`config.cache`:

The new config.cache object
--------------------------------

.. regendoc:wipe

Plugins or conftest.py support code can get a cached value
using the pytest ``config`` object.  Here is a basic example
plugin which implements a `funcarg <http://pytest.org/latest/funcargs.html>`_
which re-uses previously created state across py.test invocations::

    # content of test_caching.py
    import time

    def pytest_funcarg__mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            time.sleep(9*0.6) # expensive computation :)
            val = 42
            request.config.cache.set("example/value", val)
        return val

    def test_function(mydata):
        assert mydata == 23

If you run this command once, it will take a while because
of the sleep::

    $ py.test -q
    F
    =================================== FAILURES ===================================
    ________________________________ test_function _________________________________

    mydata = 42

        def test_function(mydata):
    >       assert mydata == 23
    E       assert 42 == 23

    test_caching.py:12: AssertionError

If you run it a second time the value will be retrieved from
the cache and the run will be quick::

    $ py.test -q
    F
    =================================== FAILURES ===================================
    ________________________________ test_function _________________________________

    mydata = 42

        def test_function(mydata):
    >       assert mydata == 23
    E       assert 42 == 23

    test_caching.py:12: AssertionError
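
The example above uses the older ``pytest_funcarg__`` naming convention for
funcargs.  The same caching logic can also be written with the
``@pytest.fixture`` decorator; the following is only a sketch of that
alternative spelling, not part of the example output above::

    # content of conftest.py - a fixture-based sketch of the same caching idea
    import time
    import pytest

    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            time.sleep(9*0.6) # expensive computation :)
            val = 42
            request.config.cache.set("example/value", val)
        return val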

See the `cache-api`_ for more details.


Inspecting Cache content
-------------------------------

You can always peek at the content of the cache using the
``--cache`` command line option::

    $ py.test --cache
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    plugins: cache
    cachedir: /tmp/doc-exec-6/.cache
    --------------------------------- cache values ---------------------------------
    example/value contains:
      42
    cache/lastfailed contains:
      set(['test_caching.py::test_function'])

    =============================== in 0.01 seconds ===============================

Clearing Cache content
-------------------------------

You can instruct pytest to clear all cache files and values
by adding the ``--clearcache`` option like this::

    py.test --clearcache

This is recommended for invocations from Continuous Integration
servers where isolation and correctness are more important
than speed.


.. _`cache-api`:

config.cache API
========================================

The ``config.cache`` object allows other plugins,
including ``conftest.py`` files,
to safely and flexibly store and retrieve values across
test runs because the ``config`` object is available
in many places.

Under the hood, the cache plugin uses the simple
``dumps``/``loads`` API of the ``json`` stdlib module.
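
Because values go through a json round-trip, only json-serializable data
(numbers, strings, booleans, ``None``, lists and dicts of these) can be
stored.  The sketch below illustrates this with a hypothetical
``cached_settings`` fixture; the key and values are examples only::

    # content of conftest.py - a sketch of what kinds of values the cache holds
    import pytest

    @pytest.fixture
    def cached_settings(request):
        cache = request.config.cache
        settings = cache.get("example/settings", None)
        if settings is None:
            # dicts, lists, strings and numbers survive the json round-trip
            settings = {"retries": 3, "hosts": ["alpha", "beta"]}
            cache.set("example/settings", settings)
        # non-json values such as sets would need converting first,
        # e.g. cache.set("example/ids", sorted(some_set))
        return settings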

.. currentmodule:: _pytest.cacheprovider

.. automethod:: Cache.get
.. automethod:: Cache.set
.. automethod:: Cache.makedir
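
For larger artifacts that do not fit the json value store, ``Cache.makedir``
can provide a directory that is kept across test runs.  The following is only
a sketch; the fixture name, cache directory name and file contents are
illustrative::

    # content of conftest.py - a sketch using Cache.makedir for file artifacts
    import pytest

    @pytest.fixture
    def dataset_dir(request):
        # returns a directory unique to the given name, kept inside the
        # cache directory between test runs
        directory = request.config.cache.makedir("example-datasets")
        datafile = directory.join("numbers.json")
        if not datafile.check():
            datafile.write("[1, 2, 3]")  # pretend this is expensive to produce
        return directory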