Basic patterns and examples
==========================================================

.. _request example:

Pass different values to a test function, depending on command line options
----------------------------------------------------------------------------

.. regendoc:wipe

Suppose we want to write a test that depends on a command line option.
Here is a basic pattern to achieve this:

.. code-block:: python

    # content of test_sample.py
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
        assert 0  # to see what was printed

For this to work we need to add a command line option and
provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`:

.. code-block:: python

    # content of conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
            help="my option: type1 or type2")

    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")

Let's run this without supplying our new option::

    $ pytest -q test_sample.py
    F                                                                    [100%]
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type1'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print("first")
            elif cmdopt == "type2":
                print("second")
    >       assert 0  # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    --------------------------- Captured stdout call ---------------------------
    first
    1 failed in 0.12 seconds

And now with supplying a command line option::

    $ pytest -q --cmdopt=type2
    F                                                                    [100%]
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type2'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print("first")
            elif cmdopt == "type2":
                print("second")
    >       assert 0  # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    --------------------------- Captured stdout call ---------------------------
    second
    1 failed in 0.12 seconds

You can see that the command line option arrived in our test. This
completes the basic pattern. However, you often rather want to process
command line options outside of the test, and instead pass in different or
more complex objects.
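
For example, rather than handing the raw string to each test, the fixture can translate the option into a richer object once. A minimal sketch, assuming a hypothetical ``TypeConfig`` class (not part of the example above), built on the ``--cmdopt`` option:

```python
# content of conftest.py (sketch, builds on the --cmdopt option above)
import pytest


class TypeConfig(object):
    """Hypothetical object derived from the raw --cmdopt string."""

    def __init__(self, name):
        self.name = name
        # e.g. derive extra settings from the option value
        self.retries = 3 if name == "type2" else 1


@pytest.fixture
def typeconfig(request):
    # tests receive a ready-to-use object instead of a plain string
    return TypeConfig(request.config.getoption("--cmdopt"))
```

A test asking for ``typeconfig`` then works with attributes such as ``typeconfig.retries`` instead of re-parsing the option value itself.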

Dynamically adding command line options
--------------------------------------------------------------

.. regendoc:wipe

Through :confval:`addopts` you can statically add command line
options for your project. You can also dynamically modify
the command line arguments before they get processed:

.. code-block:: python

    # content of conftest.py
    import sys

    def pytest_load_initial_conftests(args):
        if 'xdist' in sys.modules:  # pytest-xdist plugin
            import multiprocessing
            num = max(multiprocessing.cpu_count() // 2, 1)
            args[:] = ["-n", str(num)] + args

If you have the `xdist plugin <https://pypi.org/project/pytest-xdist/>`_ installed
you will now always perform test runs using a number
of subprocesses close to your CPU count. Running in an empty
directory with the above conftest.py::

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 0 items

    ======================= no tests ran in 0.12 seconds =======================

.. _`excontrolskip`:

Control skipping of tests according to command line option
--------------------------------------------------------------

.. regendoc:wipe

Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``pytest.mark.slow`` marked tests:

.. code-block:: python

    # content of conftest.py

    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         default=False, help="run slow tests")

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            # --runslow given in cli: do not skip slow tests
            return
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

We can now write a test module like this:
|
2010-11-21 04:35:55 +08:00
|
|
|
|
2016-08-23 10:35:41 +08:00
|
|
|
.. code-block:: python
|
2010-11-21 04:35:55 +08:00
|
|
|
|
2016-08-23 10:35:41 +08:00
|
|
|
# content of test_module.py
|
2010-11-21 04:35:55 +08:00
|
|
|
import pytest
|
|
|
|
|
|
|
|
|
2015-10-01 03:53:34 +08:00
|
|
|
def test_func_fast():
|
|
|
|
pass
|
|
|
|
|
|
|
|
|
2017-08-09 10:51:07 +08:00
|
|
|
@pytest.mark.slow
|
2010-11-21 04:35:55 +08:00
|
|
|
def test_func_slow():
|
|
|
|
pass
|
|
|
|
|
|
|
|
and when running it will see a skipped "slow" test::

    $ pytest -rs    # "-rs" means report details on the little 's'
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items

    test_module.py .s                                                    [100%]

    ========================= short test summary info ==========================
    SKIP [1] test_module.py:8: need --runslow option to run

    =================== 1 passed, 1 skipped in 0.12 seconds ====================

Or run it including the ``slow`` marked test::

    $ pytest --runslow
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items

    test_module.py ..                                                    [100%]

    ========================= 2 passed in 0.12 seconds =========================

Writing well integrated assertion helpers
--------------------------------------------------

.. regendoc:wipe

If you have a test helper function called from a test you can
call ``pytest.fail`` to fail the test with a certain message.
The test support function will not show up in the traceback if you
set the ``__tracebackhide__`` option somewhere in the helper function.
Example:

.. code-block:: python

    # content of test_checkconfig.py
    import pytest

    def checkconfig(x):
        __tracebackhide__ = True
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % (x,))

    def test_something():
        checkconfig(42)

The ``__tracebackhide__`` setting influences how ``pytest`` shows
tracebacks: the ``checkconfig`` function will not be shown
unless the ``--full-trace`` command line option is specified.
Let's run our little function::

    $ pytest -q test_checkconfig.py
    F                                                                    [100%]
    ================================= FAILURES =================================
    ______________________________ test_something ______________________________

        def test_something():
    >       checkconfig(42)
    E       Failed: not configured: 42

    test_checkconfig.py:8: Failed
    1 failed in 0.12 seconds

If you only want to hide certain exceptions, you can set ``__tracebackhide__``
to a callable which gets the ``ExceptionInfo`` object. You can for example use
this to make sure unexpected exception types aren't hidden:

.. code-block:: python

    import operator
    import pytest

    class ConfigException(Exception):
        pass

    def checkconfig(x):
        __tracebackhide__ = operator.methodcaller('errisinstance', ConfigException)
        if not hasattr(x, "config"):
            raise ConfigException("not configured: %s" % (x,))

    def test_something():
        checkconfig(42)

This will avoid hiding the exception traceback on unrelated exceptions (i.e.
bugs in assertion helpers).

Detect if running from within a pytest run
--------------------------------------------------------------

.. regendoc:wipe

Usually it is a bad idea to make application code
behave differently if called from a test. But if you
absolutely must find out if your application code is
running from a test you can do something like this:

.. code-block:: python

    # content of conftest.py

    def pytest_configure(config):
        import sys
        sys._called_from_test = True

    def pytest_unconfigure(config):
        import sys
        del sys._called_from_test

and then check for the ``sys._called_from_test`` flag:

.. code-block:: python

    if hasattr(sys, '_called_from_test'):
        # called from within a test run
        ...
    else:
        # called "normally"
        ...

accordingly in your application. It's also a good idea
to use your own application module rather than ``sys``
for handling the flag.
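
One way to follow that advice is a module-level flag in your own package. The sketch below uses a stand-in module named ``mylib`` (a hypothetical application package, registered in ``sys.modules`` here only so the example is self-contained):

```python
import sys
import types

# stand-in for your real application package, e.g. a ``mylib/__init__.py``
# containing just:  called_from_test = False
mylib = types.ModuleType("mylib")
mylib.called_from_test = False
sys.modules["mylib"] = mylib


# content of conftest.py (sketch)
def pytest_configure(config):
    import mylib
    mylib.called_from_test = True


def pytest_unconfigure(config):
    import mylib
    mylib.called_from_test = False
```

Application code then checks ``mylib.called_from_test`` instead of poking a private attribute onto ``sys``.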

Adding info to test report header
--------------------------------------------------------------

.. regendoc:wipe

It's easy to present extra information in a ``pytest`` run:

.. code-block:: python

    # content of conftest.py

    def pytest_report_header(config):
        return "project deps: mylib-1.1"

which will add the string to the test header accordingly::

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    project deps: mylib-1.1
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 0 items

    ======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe

It is also possible to return a list of strings which will be considered as several
lines of information. You may consider ``config.getoption('verbose')`` in order to
display more information if applicable:

.. code-block:: python

    # content of conftest.py

    def pytest_report_header(config):
        if config.getoption('verbose') > 0:
            return ["info1: did you know that ...", "did you?"]

which will add info only when run with ``-v``::

    $ pytest -v
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
    cachedir: .pytest_cache
    info1: did you know that ...
    did you?
    rootdir: $REGENDOC_TMPDIR, inifile:
    collecting ... collected 0 items

    ======================= no tests ran in 0.12 seconds =======================

and nothing when run plainly::

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 0 items

    ======================= no tests ran in 0.12 seconds =======================

profiling test duration
--------------------------

.. regendoc:wipe

.. versionadded:: 2.2

If you have a slow running large test suite you might want to find
out which tests are the slowest. Let's make an artificial test suite:

.. code-block:: python

    # content of test_some_are_slow.py
    import time


    def test_funcfast():
        time.sleep(0.1)


    def test_funcslow1():
        time.sleep(0.2)


    def test_funcslow2():
        time.sleep(0.3)

Now we can profile which test functions execute the slowest::

    $ pytest --durations=3
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 3 items

    test_some_are_slow.py ...                                            [100%]

    ========================= slowest 3 test durations =========================
    0.30s call     test_some_are_slow.py::test_funcslow2
    0.20s call     test_some_are_slow.py::test_funcslow1
    0.11s call     test_some_are_slow.py::test_funcfast
    ========================= 3 passed in 0.12 seconds =========================

incremental testing - test steps
---------------------------------------------------

.. regendoc:wipe

Sometimes you may have a testing situation which consists of a series
of test steps. If one step fails it makes no sense to execute further
steps as they are all expected to fail anyway and their tracebacks
add no insight. Here is a simple ``conftest.py`` file which introduces
an ``incremental`` marker which is to be used on classes:

.. code-block:: python

    # content of conftest.py

    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                parent = item.parent
                parent._previousfailed = item

    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)

These two hook implementations work together to abort incremental-marked
tests in a class. Here is a test module example:

.. code-block:: python

    # content of test_step.py

    import pytest

    @pytest.mark.incremental
    class TestUserHandling(object):
        def test_login(self):
            pass
        def test_modification(self):
            assert 0
        def test_deletion(self):
            pass

    def test_normal():
        pass

If we run this::

    $ pytest -rx
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items

    test_step.py .Fx.                                                    [100%]

    ================================= FAILURES =================================
    ____________________ TestUserHandling.test_modification ____________________

    self = <test_step.TestUserHandling object at 0xdeadbeef>

        def test_modification(self):
    >       assert 0
    E       assert 0

    test_step.py:9: AssertionError
    ========================= short test summary info ==========================
    XFAIL test_step.py::TestUserHandling::()::test_deletion
      reason: previous test failed (test_modification)
    ============== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ===============

We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".

Package/Directory-level fixtures (setups)
-------------------------------------------------------

If you have nested test directories, you can have per-directory fixture scopes
by placing fixture functions in a ``conftest.py`` file in that directory.
You can use all types of fixtures including :ref:`autouse fixtures
<autouse fixtures>` which are the equivalent of xUnit's setup/teardown
concept. It's however recommended to have explicit fixture references in your
tests or test classes rather than relying on implicitly executing
setup/teardown functions, especially if they are far away from the actual tests.

Here is an example for making a ``db`` fixture available in a directory:

.. code-block:: python

    # content of a/conftest.py
    import pytest

    class DB(object):
        pass

    @pytest.fixture(scope="session")
    def db():
        return DB()

and then a test module in that directory:

.. code-block:: python

    # content of a/test_db.py
    def test_a1(db):
        assert 0, db  # to show value

another test module:

.. code-block:: python

    # content of a/test_db2.py
    def test_a2(db):
        assert 0, db  # to show value

and then a module in a sister directory which will not see
the ``db`` fixture:

.. code-block:: python

    # content of b/test_error.py
    def test_root(db):  # no db here, will error out
        pass

We can run this::

    $ pytest
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 7 items

    test_step.py .Fx.                                                    [ 57%]
    a/test_db.py F                                                       [ 71%]
    a/test_db2.py F                                                      [ 85%]
    b/test_error.py E                                                    [100%]

    ================================== ERRORS ==================================
    _______________________ ERROR at setup of test_root ________________________
    file $REGENDOC_TMPDIR/b/test_error.py, line 1
      def test_root(db):  # no db here, will error out
    E       fixture 'db' not found
    >       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
    >       use 'pytest --fixtures [testpath]' for help on them.

    $REGENDOC_TMPDIR/b/test_error.py:1
    ================================= FAILURES =================================
    ____________________ TestUserHandling.test_modification ____________________

    self = <test_step.TestUserHandling object at 0xdeadbeef>

        def test_modification(self):
    >       assert 0
    E       assert 0

    test_step.py:9: AssertionError
    _________________________________ test_a1 __________________________________

    db = <conftest.DB object at 0xdeadbeef>

        def test_a1(db):
    >       assert 0, db  # to show value
    E       AssertionError: <conftest.DB object at 0xdeadbeef>
    E       assert 0

    a/test_db.py:2: AssertionError
    _________________________________ test_a2 __________________________________

    db = <conftest.DB object at 0xdeadbeef>

        def test_a2(db):
    >       assert 0, db  # to show value
    E       AssertionError: <conftest.DB object at 0xdeadbeef>
    E       assert 0

    a/test_db2.py:2: AssertionError
    ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========

The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
also define a ``db`` fixture in that sister directory's ``conftest.py`` file.
Note that each fixture is only instantiated if there is a test actually needing
it (unless you use "autouse" fixtures, which are always executed ahead of the first
test executing).
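
If the ``b`` directory should get its own database object, a ``conftest.py`` there can define a sibling fixture. A minimal sketch mirroring ``a/conftest.py`` above (``DB`` is the same stand-in class):

```python
# content of b/conftest.py (sketch)
import pytest


class DB(object):
    """Stand-in database object; independent of the instance used in ``a``."""
    pass


@pytest.fixture(scope="session")
def db():
    # session scope caches per fixture definition, so tests under ``b``
    # share this instance but never the one created in a/conftest.py
    return DB()
```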

post-process test reports / failures
---------------------------------------

If you want to postprocess test reports and need access to the executing
environment you can implement a hook that gets called when the test
"report" object is about to be created. Here we write out all failing
test calls and also access a fixture (if it was used by the test) in
case you want to query/look at it during your post processing. In our
case we just write some information out to a ``failures`` file:

.. code-block:: python

    # content of conftest.py

    import pytest
    import os.path

    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        # execute all other hooks to obtain the report object
        outcome = yield
        rep = outcome.get_result()

        # we only look at actual failing test calls, not setup/teardown
        if rep.when == "call" and rep.failed:
            mode = "a" if os.path.exists("failures") else "w"
            with open("failures", mode) as f:
                # let's also access a fixture for the fun of it
                if "tmpdir" in item.fixturenames:
                    extra = " (%s)" % item.funcargs["tmpdir"]
                else:
                    extra = ""

                f.write(rep.nodeid + extra + "\n")

if you then have failing tests:

.. code-block:: python

    # content of test_module.py
    def test_fail1(tmpdir):
        assert 0
    def test_fail2():
        assert 0

and run them::
|
|
|
|
|
2016-06-21 22:16:57 +08:00
|
|
|
    $ pytest test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items

    test_module.py FF                                                    [100%]

    ================================= FAILURES =================================
    ________________________________ test_fail1 ________________________________

    tmpdir = local('PYTEST_TMPDIR/test_fail10')

        def test_fail1(tmpdir):
    >       assert 0
    E       assert 0

    test_module.py:2: AssertionError
    ________________________________ test_fail2 ________________________________

        def test_fail2():
    >       assert 0
    E       assert 0

    test_module.py:4: AssertionError
    ========================= 2 failed in 0.12 seconds =========================

you will have a "failures" file which contains the failing test ids::

    $ cat failures
    test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)
    test_module.py::test_fail2

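The ids in this file can drive later automation. As a small illustrative sketch
(assuming the format written by the ``conftest.py`` hook above, and ids that do
not themselves contain ``" ("``), the bare test ids can be recovered by dropping
the optional tmpdir suffix:

.. code-block:: python

    def failed_nodeids(lines):
        """Recover bare test ids from lines written by the hook above.

        Each line is '<nodeid>', optionally followed by ' (<tmpdir path>)'.
        """
        return [line.split(" (")[0].strip() for line in lines if line.strip()]

    sample = [
        "test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)",
        "test_module.py::test_fail2",
    ]
    print(failed_nodeids(sample))

The recovered ids can then be passed back to ``pytest`` on the command line to
re-run only the failing tests.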
Making test result information available in fixtures
-----------------------------------------------------------

.. regendoc:wipe

If you want to make test result reports available in fixture finalizers,
here is a little example implemented via a local plugin:

.. code-block:: python

    # content of conftest.py

    import pytest

    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        # execute all other hooks to obtain the report object
        outcome = yield
        rep = outcome.get_result()

        # set a report attribute for each phase of a call, which can
        # be "setup", "call", "teardown"
        setattr(item, "rep_" + rep.when, rep)

    @pytest.fixture
    def something(request):
        yield
        # request.node is an "item" because we use the default
        # "function" scope
        if request.node.rep_setup.failed:
            print("setting up a test failed!", request.node.nodeid)
        elif request.node.rep_setup.passed:
            if request.node.rep_call.failed:
                print("executing test failed", request.node.nodeid)

if you then have failing tests:

.. code-block:: python

    # content of test_module.py

    import pytest

    @pytest.fixture
    def other():
        assert 0

    def test_setup_fails(something, other):
        pass

    def test_call_fails(something):
        assert 0

    def test_fail2():
        assert 0

and run it::

    $ pytest -s test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 3 items

    test_module.py Esetting up a test failed! test_module.py::test_setup_fails
    Fexecuting test failed test_module.py::test_call_fails
    F

    ================================== ERRORS ==================================
    ____________________ ERROR at setup of test_setup_fails ____________________

        @pytest.fixture
        def other():
    >       assert 0
    E       assert 0

    test_module.py:6: AssertionError
    ================================= FAILURES =================================
    _____________________________ test_call_fails ______________________________

    something = None

        def test_call_fails(something):
    >       assert 0
    E       assert 0

    test_module.py:12: AssertionError
    ________________________________ test_fail2 ________________________________

        def test_fail2():
    >       assert 0
    E       assert 0

    test_module.py:15: AssertionError
    ==================== 2 failed, 1 error in 0.12 seconds =====================

You'll see that the fixture finalizers could use the precise reporting
information.

.. _pytest current test env:

``PYTEST_CURRENT_TEST`` environment variable
--------------------------------------------

.. versionadded:: 3.2

Sometimes a test session might get stuck and there might be no easy way to figure out
which test got stuck, for example if pytest was run in quiet mode (``-q``) or you don't have access to the console
output. This is particularly a problem if the problem happens only sporadically, the famous "flaky" kind of tests.

``pytest`` sets a ``PYTEST_CURRENT_TEST`` environment variable when running tests, which can be inspected
by process monitoring utilities or libraries like `psutil <https://pypi.org/project/psutil/>`_ to discover which
test got stuck if necessary:

.. code-block:: python

    import psutil

    for pid in psutil.pids():
        environ = psutil.Process(pid).environ()
        if 'PYTEST_CURRENT_TEST' in environ:
            print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')

During the test session pytest will set ``PYTEST_CURRENT_TEST`` to the current test
:ref:`nodeid <nodeids>` and the current stage, which can be ``setup``, ``call``
and ``teardown``.

For example, when running a single test function named ``test_foo`` from ``foo_module.py``,
``PYTEST_CURRENT_TEST`` will be set to:

#. ``foo_module.py::test_foo (setup)``
#. ``foo_module.py::test_foo (call)``
#. ``foo_module.py::test_foo (teardown)``

In that order.

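For quick interactive debugging it can help to split such a value into its
nodeid and stage parts. A best-effort sketch only (the format is informal, so
this is not suitable for scripting or automation; ``split_current_test`` is a
hypothetical helper, not a pytest API):

.. code-block:: python

    def split_current_test(value):
        """Split 'foo_module.py::test_foo (call)' into (nodeid, stage)."""
        # the stage is the last space-separated token, wrapped in parentheses
        nodeid, _, stage = value.rpartition(" ")
        return nodeid, stage.strip("()")

    print(split_current_test("foo_module.py::test_foo (call)"))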
.. note::

    The contents of ``PYTEST_CURRENT_TEST`` are meant to be human readable and the actual format
    can be changed between releases (even bug fixes), so it shouldn't be relied on for scripting
    or automation.

Freezing pytest
---------------

If you freeze your application using a tool like
`PyInstaller <https://pyinstaller.readthedocs.io>`_
in order to distribute it to your end-users, it is a good idea to also package
your test runner and run your tests using the frozen application. This way packaging
errors such as dependencies not being included into the executable can be detected early
while also allowing you to send test files to users so they can run them on their
machines, which can be useful to obtain more information about a hard-to-reproduce bug.

Fortunately recent ``PyInstaller`` releases already have a custom hook
for pytest, but if you are using another tool to freeze executables
such as ``cx_freeze`` or ``py2exe``, you can use ``pytest.freeze_includes()``
to obtain the full list of internal pytest modules. How to configure the tools
to find the internal modules varies from tool to tool, however.
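For instance, a ``cx_Freeze`` setup script could pass the result of
``pytest.freeze_includes()`` to the ``includes`` option of ``build_exe``
(a sketch only; the names and options shown are illustrative and depend on
your project and ``cx_Freeze`` version):

.. code-block:: python

    # hypothetical setup.py for cx_Freeze
    import pytest
    from cx_Freeze import setup, Executable

    setup(
        name="app_main",
        executables=[Executable("app_main.py")],
        # make pytest's internal modules available in the frozen executable
        options={"build_exe": {"includes": pytest.freeze_includes()}},
    )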

Instead of freezing the pytest runner as a separate executable, you can make
your frozen program work as the pytest runner by some clever
argument handling during program startup. This allows you to
have a single executable, which is usually more convenient.
Please note that the mechanism for plugin discovery used by pytest
(setuptools entry points) doesn't work with frozen executables, so pytest
can't find any third party plugins automatically. To include third party plugins
like ``pytest-timeout`` they must be imported explicitly and passed on to ``pytest.main``.

.. code-block:: python

    # contents of app_main.py
    import sys
    import pytest_timeout  # Third party plugin

    if len(sys.argv) > 1 and sys.argv[1] == '--pytest':
        import pytest
        sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
    else:
        # normal application execution: at this point argv can be parsed
        # by your argument-parsing library of choice as usual
        ...

This allows you to execute tests using the frozen
application with standard ``pytest`` command-line options::

    ./app_main --pytest --verbose --tb=long --junitxml=results.xml test-suite/