Looping on the failing test set
-----------------------------------------

``py.test --looponfailing`` (implemented through the external
`pytest-xdist`_ plugin) allows you to run a test suite,
memorize all failures, and then loop over the failing set
of tests until they all pass. It re-starts the test run
whenever it detects file changes in your project.
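
For example, with the plugin installed you might run::

    py.test --looponfailing

py.test then runs the suite, remembers the failures, and keeps
re-running just the failing tests on every file change until the
whole set passes again.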

select tests by keyword / test name search
-----------------------------------------------------

.. _`selection by keyword`:

You can selectively run tests by specifying a keyword
on the command line. Examples::

    py.test -k test_simple
    py.test -k "-test_simple"

will run all tests matching (or not matching) the
"test_simple" keyword. Note that you need to quote
the keyword if "-" would otherwise be recognized as
the start of a command line option. Lastly, you may use::

    py.test -k "test_simple:"

which will run all tests after the expression has *matched once*, i.e.
all tests that are seen after a test that matches the "test_simple"
keyword.

By default, all filename parts and the class/function names
of a test function are put into its set of keywords. You can
specify additional keywords like this:

.. sourcecode:: python

    import py

    @py.test.mark.webtest
    def test_send_http():
        ...

and then use those keywords to select tests. See the `pytest_keyword`_
plugin for more information.
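
With the marker in place you can then run, for example::

    py.test -k webtest

to select only the tests carrying the ``webtest`` keyword.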

.. _`pytest_keyword`: plugin/mark.html

skip or expect-to-fail a test
-------------------------------------------

py.test has a dedicated `skipping plugin`_ that allows you to:

* define "skip" outcomes indicating a platform or
  dependency mismatch,

* define "xfail" outcomes indicating an "expected failure",
  either with or without running the test,

* apply skip and xfail outcomes at module, class or method
  level, or even only for certain argument sets of a
  parametrized function.
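
As a minimal sketch (the condition string and test names below are
just examples; the skipping plugin documents the exact arguments):

.. sourcecode:: python

    import py

    # the condition is a string which py.test evaluates for you
    @py.test.mark.skipif("sys.platform == 'win32'")
    def test_posix_behaviour():
        ...

    # a failing run of this test is reported as "xfail"
    # instead of as a regular failure
    @py.test.mark.xfail
    def test_known_bug():
        assert 0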

.. _`skipping plugin`: plugin/skipping.html
.. _`funcargs mechanism`: funcargs.html
.. _`doctest.py`: http://docs.python.org/library/doctest.html
.. _`xUnit style setup`: xunit_setup.html
.. _`pytest_nose`: plugin/nose.html

no-boilerplate testing
----------------------------------

.. _`autocollect`:

automatic Python test discovery
+++++++++++++++++++++++++++++++++++

By default, all Python modules with a ``test_*.py``
filename are inspected for tests:

* functions with a name beginning with ``test_``
* classes with a name beginning with ``Test`` and their ``test``-prefixed methods
* ``unittest.TestCase`` subclasses
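
For example, a file named ``test_sample.py`` (a hypothetical name
following the convention above) is collected without any
registration code:

.. sourcecode:: python

    # content of test_sample.py -- discovered automatically
    def test_addition():
        assert 1 + 1 == 2

    class TestNumbers:
        def test_comparison(self):
            assert 2 > 1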

parametrizing test functions and functional testing
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

py.test offers the unique `funcargs mechanism`_ for setting up
and passing project-specific objects to Python test functions.
Test parametrization happens by triggering a call to the same test
function with different argument values. Writing fixtures
through the funcarg mechanism makes your test and setup code
more efficient and more readable. This is especially true
for functional tests, which might depend on command line
options and on a setup that needs to be shared across
a whole test run.
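
As a minimal sketch (the names ``mysetup`` and ``number`` are
hypothetical; see the funcargs documentation for the full protocol):

.. sourcecode:: python

    # conftest.py: provide a project-specific object to any test
    # that accepts an argument named "mysetup"
    def pytest_funcarg__mysetup(request):
        return {"db_url": "sqlite://"}

    # call tests accepting a "number" argument once per value
    def pytest_generate_tests(metafunc):
        if "number" in metafunc.funcargnames:
            for n in (1, 2, 3):
                metafunc.addcall(funcargs=dict(number=n))

    # test_app.py
    def test_uses_setup(mysetup):
        assert "db_url" in mysetup

    def test_number(number):
        assert number in (1, 2, 3)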

per-test capturing of output, including subprocesses
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

By default, ``py.test`` captures all writes to stdout/stderr.
Output from ``print`` statements as well as from subprocesses
is captured_. When a test fails, the associated captured output is shown.
This allows you to put debugging print statements in your code without
being overwhelmed by all the output that might be generated by tests
that do not fail.
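
For example, the ``print`` output of this deliberately failing test
(a hypothetical example) only shows up in the failure report:

.. sourcecode:: python

    def test_capturing_demo():
        print("debugging detail, visible only because the test fails")
        assert False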

.. _captured: plugin/capture.html

information-rich tracebacks, PDB introspection
+++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. _`example tracebacks`: http://paste.pocoo.org/show/134814/

A lot of care is taken to present useful failure information
and, in particular, nice and concise Python tracebacks. This
is especially useful if you regularly need to look at failures
from nightly runs, i.e. detached from the actual test
session. Here are `example tracebacks`_ for a number of failing
test functions. You can modify the traceback printing style through
the command line. Using the ``--pdb`` option you can automatically
activate a PDB `Python debugger`_ when a test fails.
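
For example (``--tb=short`` is one of the available styles)::

    py.test --tb=short   # a shorter traceback format
    py.test --pdb        # drop into the PDB debugger on failures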