Looping on the failing test set
-----------------------------------------

``py.test --looponfailing`` (implemented through the external
`pytest-xdist`_ plugin) allows you to run a test suite,
remember all failures and then loop over the failing set
of tests until they all pass.  It re-starts the test run
whenever it detects file changes in your project.
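For example, assuming your tests live in a ``testing`` directory,
you could keep such a loop running in a terminal while you edit code::

    py.test --looponfailing testing/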

select tests by keyword / test name search
-----------------------------------------------------

.. _`selection by keyword`:

You can selectively run tests by specifying a keyword
on the command line.  Examples::

    py.test -k test_simple
    py.test -k "-test_simple"

will run all tests matching (or not matching) the
"test_simple" keyword.  Note that you need to quote
the keyword if "-" would otherwise be recognized as the
start of a command line option.  Lastly, you may use::

    py.test -k "test_simple:"

which will run all tests after the expression has *matched once*, i.e.
all tests that are seen after a test that matches the "test_simple"
keyword.

By default, all filename parts and
class/function names of a test function are put into the set
of keywords for a given test.  You can specify additional
keywords like this:

.. sourcecode:: python

    @py.test.mark.webtest
    def test_send_http():
        ...

and then use those keywords to select tests.  See the `pytest_keyword`_
plugin for more information.
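For instance, given the marker above, you could run only the
web-related tests with::

    py.test -k webtest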
|
||
|
|
||
|
.. _`pytest_keyword`: plugin/mark.html
|
||
|
skip or expect-to-fail a test
|
||
|
-------------------------------------------
|
||
|
|
||
|
py.test has a dedicated `skipping plugin`_ that allows to define
|
||
|
|
||
|
* define "skip" outcomes indicating a platform or a
|
||
|
dependency mismatch.
|
||
|
|
||
|
* "xfail" outcomes indicating an "expected failure" either with
|
||
|
with or without running a test.
|
||
|
|
||
|
* skip and xfail outcomes can be applied at module, class or method
|
||
|
level or even only for certain argument sets of a parametrized function.
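A minimal sketch of how such outcomes can be declared (the condition
string and the test bodies are hypothetical; the string is evaluated
by the skipping plugin, not by your module):

.. sourcecode:: python

    import py

    @py.test.mark.skipif("sys.platform == 'win32'")
    def test_posix_only():
        ...  # skipped on Windows: platform mismatch

    @py.test.mark.xfail
    def test_known_bug():
        assert 0  # expected to fail; reported as "xfail", not as an error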

.. _`skipping plugin`: plugin/skipping.html
.. _`funcargs mechanism`: funcargs.html
.. _`doctest.py`: http://docs.python.org/library/doctest.html
.. _`xUnit style setup`: xunit_setup.html
.. _`pytest_nose`: plugin/nose.html


no-boilerplate testing
----------------------------------

.. _`autocollect`:

automatic Python test discovery
+++++++++++++++++++++++++++++++++++

By default, all Python modules with a ``test_*.py``
filename are inspected for tests:

* functions with a name beginning with ``test_``
* classes with a name beginning with ``Test`` and ``test``-prefixed methods
* ``unittest.TestCase`` subclasses
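For illustration, a hypothetical ``test_example.py`` containing one of
each kind of collected item:

.. sourcecode:: python

    import unittest

    def test_function():             # collected: name starts with "test_"
        assert True

    class TestClass:                 # collected: name starts with "Test"
        def test_method(self):       # collected: "test"-prefixed method
            assert True

    class ExampleCase(unittest.TestCase):  # collected: TestCase subclass
        def test_unittest_style(self):
            self.assertTrue(True)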

parametrizing test functions and functional testing
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

py.test offers the unique `funcargs mechanism`_ for setting up
and passing project-specific objects to Python test functions.
Test parametrization works by calling the same test
function with different argument values.  Writing fixtures
through the funcarg mechanism makes your test and setup code
more efficient and more readable.  This is especially true
for functional tests which might depend on command line
options and a setup that needs to be shared across
a whole test run.
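A minimal sketch of both ideas, assuming the classic
``pytest_funcarg__NAME`` factory hook and ``metafunc.addcall``
parametrization API of the `funcargs mechanism`_ (the ``accounts``
and ``number`` names are hypothetical):

.. sourcecode:: python

    # conftest.py
    def pytest_funcarg__accounts(request):
        # called for each test that names "accounts" as an argument
        return ["alice", "bob"]

    def pytest_generate_tests(metafunc):
        # run each test that accepts "number" once per value
        if "number" in metafunc.funcargnames:
            for i in (1, 2, 3):
                metafunc.addcall(funcargs=dict(number=i))

    # test_accounts.py
    def test_accounts_nonempty(accounts):
        assert len(accounts) == 2

    def test_number_positive(number):
        assert number > 0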

per-test capturing of output, including subprocesses
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

By default, ``py.test`` captures all writes to stdout/stderr.
Output from ``print`` statements as well as from subprocesses
is captured_.  When a test fails, the associated captured outputs
are shown.  This allows you to put debugging print statements in
your code without being overwhelmed by all the output that might
be generated by tests that do not fail.
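For example (Python 2 syntax; the helper is a hypothetical stand-in
for real application code):

.. sourcecode:: python

    def read_status():
        print "diagnostic: reading status"    # captured by py.test
        return "ready"

    def test_status():
        assert read_status() == "ready"       # output shown only on failure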

.. _captured: plugin/capture.html

assert with the ``assert`` statement
+++++++++++++++++++++++++++++++++++++++++++++

``py.test`` allows you to use the standard Python
``assert`` statement for verifying expectations
and values in Python tests.  For example, you can
write the following in your tests::

    assert hasattr(x, 'attribute')

to state that your object has a certain ``attribute``.  In case this
assertion fails you will see the value of ``x``.  Intermediate
values are computed by executing the assert expression a second time.
If you execute code with side effects, e.g. read from a file like this::

    assert f.read() != '...'

then you may get a warning from py.test if that assertion
first failed and then succeeded on re-evaluation.
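For example, a failing comparison shows the values involved::

    def test_answer():
        x = 42
        assert x == 43   # the failure report displays the value of x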

asserting expected exceptions
+++++++++++++++++++++++++++++++++++++++

In order to write assertions about exceptions, you can use
``py.test.raises`` as a context manager like this:

.. sourcecode:: python

    with py.test.raises(ZeroDivisionError):
        1 / 0

and if you need to have access to the actual exception info you may use:

.. sourcecode:: python

    with py.test.raises(RuntimeError) as excinfo:
        def f():
            f()
        f()

    # do checks related to excinfo.type, excinfo.value, excinfo.traceback
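For instance, a concrete (hypothetical) set of such checks:

.. sourcecode:: python

    with py.test.raises(ValueError) as excinfo:
        int("not a number")
    assert excinfo.type is ValueError
    assert "not a number" in str(excinfo.value)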

If you want to write test code that works on Python 2.4 as well,
you may also use two other ways to test for an expected exception:

.. sourcecode:: python

    py.test.raises(ExpectedException, func, *args, **kwargs)
    py.test.raises(ExpectedException, "func(*args, **kwargs)")

The first form calls ``func`` with the given arguments, the second
evaluates the given expression string; both assert that the given
``ExpectedException`` is raised.  The reporter will
provide you with helpful output in case of failures such as *no
exception* or *wrong exception*.

information-rich tracebacks, PDB introspection
+++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. _`example tracebacks`: http://paste.pocoo.org/show/134814/

A lot of care is taken to present useful failure information
and in particular nice and concise Python tracebacks.  This
is especially useful if you need to regularly look at failures
from nightly runs, i.e. are detached from the actual test
running session.  Here are `example tracebacks`_ for a number of failing
test functions.  You can modify traceback printing styles through the
command line.  Using the ``--pdb`` option you can automatically activate
the `Python debugger`_ (PDB) when a test fails.
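For example (assuming the ``--tb=short`` traceback-style option), you
might run::

    py.test --tb=short --pdb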