.. highlightlang:: python

Basic patterns and examples
==========================================================

Pass different values to a test function, depending on command line options
----------------------------------------------------------------------------

.. regendoc:wipe

Suppose we want to write a test that depends on a command line option.
Here is a basic pattern to achieve this::

    # content of test_sample.py
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
        assert 0 # to see what was printed

For this to work we need to add a command line option and
provide the ``cmdopt`` through a :ref:`function argument <funcarg>` factory::

    # content of conftest.py
    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
            help="my option: type1 or type2")

    def pytest_funcarg__cmdopt(request):
        return request.config.option.cmdopt

Let's run this without supplying our new command line option::

    $ py.test -q test_sample.py
    collecting ... collected 1 items
    F
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type1'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print("first")
            elif cmdopt == "type2":
                print("second")
    >       assert 0 # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    ----------------------------- Captured stdout ------------------------------
    first
    1 failed in 0.02 seconds

And now with supplying a command line option::

    $ py.test -q --cmdopt=type2
    collecting ... collected 1 items
    F
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type2'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print("first")
            elif cmdopt == "type2":
                print("second")
    >       assert 0 # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    ----------------------------- Captured stdout ------------------------------
    second
    1 failed in 0.02 seconds

Ok, this completes the basic pattern. However, one often wants
to process command line options outside of the test and instead
pass in different or more complex objects. See the
next example or refer to :ref:`mysetup` for more information
on real-life examples.

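To make that concrete, here is a minimal sketch of the idea; the ``Setup``
class and the ``setup`` funcarg name are hypothetical illustrations, not
part of pytest::

    # content of conftest.py - sketch of passing a richer object to tests
    class Setup:
        def __init__(self, cmdopt):
            # interpret the raw option value once, up front
            self.cmdopt = cmdopt
            self.use_fast_path = (cmdopt == "type1")

    def pytest_funcarg__setup(request):
        # tests that accept a "setup" argument receive the configured object
        return Setup(request.config.option.cmdopt)

A test would then accept ``setup`` instead of ``cmdopt`` and work with the
pre-processed attributes.
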
Dynamically adding command line options
--------------------------------------------------------------

.. regendoc:wipe

Through :confval:`addopts` you can statically add command line
options for your project (see the sketch at the end of this section).
You can also dynamically modify the command line arguments
before they get processed::

    # content of conftest.py
    import sys
    def pytest_cmdline_preparse(args):
        if 'xdist' in sys.modules: # pytest-xdist plugin
            import multiprocessing
            num = max(multiprocessing.cpu_count() // 2, 1)
            args[:] = ["-n", str(num)] + args

If you have the :ref:`xdist plugin <xdist>` installed
you will now always perform test runs using a number
of subprocesses based on the number of your CPUs. Running in an empty
directory with the above conftest.py::

    $ py.test
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3
    gw0 I
    gw0 [0]

    scheduling tests via LoadScheduling

    ============================= in 0.48 seconds =============================

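To illustrate the static variant mentioned above, here is a minimal sketch
of an :confval:`addopts` setting in an ini file; the concrete option values
are placeholders only, not something this example requires::

    # content of pytest.ini
    [pytest]
    addopts = -v -rs
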
.. _`excontrolskip`:

Control skipping of tests according to command line option
--------------------------------------------------------------

.. regendoc:wipe

Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``slow`` marked tests::

    # content of conftest.py

    import pytest
    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")

We can now write a test module like this::

    # content of test_module.py

    import pytest
    slow = pytest.mark.slow

    def test_func_fast():
        pass

    @slow
    def test_func_slow():
        pass

and when running it we will see a skipped ``slow`` test::

    $ py.test -rs    # "-rs" means report details on the little 's'
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3
    collecting ... collected 2 items

    test_module.py .s
    ========================= short test summary info ==========================
    SKIP [1] /Users/hpk/tmp/doc-exec-172/conftest.py:9: need --runslow option to run

    =================== 1 passed, 1 skipped in 0.02 seconds ====================

Or run it including the ``slow`` marked test::

    $ py.test --runslow
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3
    collecting ... collected 2 items

    test_module.py ..

    ========================= 2 passed in 0.02 seconds =========================

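As an aside, the ``slow = pytest.mark.slow`` alias above is only a
convenience; applying the marker directly has the same effect::

    # equivalent marking without the alias
    import pytest

    @pytest.mark.slow
    def test_func_slow():
        pass
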
Writing well integrated assertion helpers
--------------------------------------------------

.. regendoc:wipe

If you have a test helper function called from a test you can
call ``pytest.fail`` to fail a test with a certain message.
The test support function will not show up in the traceback if you
set the ``__tracebackhide__`` option somewhere in the helper function.
Example::

    # content of test_checkconfig.py
    import pytest
    def checkconfig(x):
        __tracebackhide__ = True
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % (x,))

    def test_something():
        checkconfig(42)

The ``__tracebackhide__`` setting influences how py.test shows
tracebacks: the ``checkconfig`` function will not be shown
unless the ``--fulltrace`` command line option is specified.
Let's run our little function::

    $ py.test -q test_checkconfig.py
    collecting ... collected 1 items
    F
    ================================= FAILURES =================================
    ______________________________ test_something ______________________________

        def test_something():
    >       checkconfig(42)
    E       Failed: not configured: 42

    test_checkconfig.py:8: Failed
    1 failed in 0.02 seconds

Detect if running from within a py.test run
--------------------------------------------------------------

.. regendoc:wipe

Usually it is a bad idea to make application code
behave differently if called from a test. But if you
absolutely must find out if your application code is
running from a test you can do something like this::

    # content of conftest.py
    import sys

    def pytest_configure(config):
        sys._called_from_test = True

    def pytest_unconfigure(config):
        del sys._called_from_test

and then check for the ``sys._called_from_test`` flag::

    if hasattr(sys, '_called_from_test'):
        # called from within a test run
        pass
    else:
        # called "normally"
        pass

and act accordingly in your application. It's also a good idea
to use your own application module rather than ``sys``
for handling the flag.

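A minimal sketch of that suggestion, assuming your application has an
importable module (``myapp`` below is a hypothetical name), could look
like this::

    # content of conftest.py - sketch using a hypothetical application module
    import myapp  # hypothetical: your own application package

    def pytest_configure(config):
        myapp._called_from_test = True

    def pytest_unconfigure(config):
        myapp._called_from_test = False

Application code can then check ``myapp._called_from_test`` without
touching ``sys``.
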
Adding info to test report header
--------------------------------------------------------------

.. regendoc:wipe

It's easy to present extra information in a py.test run::

    # content of conftest.py

    def pytest_report_header(config):
        return "project deps: mylib-1.1"

which will add the string to the test header accordingly::

    $ py.test
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3
    project deps: mylib-1.1
    collecting ... collected 0 items

    ============================= in 0.00 seconds =============================

.. regendoc:wipe

You can also return a list of strings which will be considered as several
lines of information. You can of course also make the amount of reporting
information depend on e.g. the value of ``config.option.verbose`` so that
you present more information appropriately::

    # content of conftest.py

    def pytest_report_header(config):
        if config.option.verbose > 0:
            return ["info1: did you know that ...", "did you?"]

which will add info only when run with ``-v``::

    $ py.test -v
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
    info1: did you know that ...
    did you?
    collecting ... collected 0 items

    ============================= in 0.00 seconds =============================

and nothing when run plainly::

    $ py.test
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3
    collecting ... collected 0 items

    ============================= in 0.00 seconds =============================

Profiling test durations
--------------------------

.. regendoc:wipe

.. versionadded:: 2.2

If you have a large, slow-running test suite you might want to find
out which tests are the slowest. Let's make an artificial test suite::

    # content of test_some_are_slow.py

    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)

Now we can profile which test functions execute the slowest::

    $ py.test --durations=3