==================================================
py.test feature overview
==================================================

.. contents::
   :local:
   :depth: 1

mature command line testing tool
====================================================

py.test is a command line tool to collect, run and report on automated
tests.  It runs well on Linux, Windows and OS X and on Python versions
2.4 through 3.1.  It is used in many projects, ranging from suites
running tens of thousands of tests to a few inlined tests in a command
line script.  As of version 1.2 you can also generate a no-dependency,
py.test-equivalent standalone script that you can distribute along
with your application.

extensive easy plugin system
======================================================

.. _`surprisingly easy`: http://bruynooghe.blogspot.com/2009/12/skipping-slow-test-by-default-in-pytest.html

py.test delegates almost all aspects of its operation to plugins_.
It is `surprisingly easy`_ to add command line options or other kinds
of add-ons and customizations.  This can be done per-project, e.g. in
a ``conftest.py`` file (see the sketch below), or by distributing a
global plugin.  One can thus modify or add aspects for purposes such as:

* reporting extensions
* customizing collection and execution of tests
* running and managing non-python tests
* managing domain-specific test state setup
* adding non-python tests into the run, e.g. driven by data files

.. _`plugins`: plugin/index.html

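As an illustration of how little is involved, a per-project
``conftest.py`` can itself act as a plugin.  The sketch below (the
``--runslow`` option and the ``slow`` keyword are made-up names chosen
for illustration, not part of py.test) adds a command line option and
skips marked tests unless it is given::

    # conftest.py -- picked up automatically for tests in this project
    import py

    def pytest_addoption(parser):
        # register a project-specific command line option
        parser.addoption("--runslow", action="store_true",
                         help="also run tests carrying the 'slow' keyword")

    def pytest_runtest_setup(item):
        # skip 'slow' tests unless --runslow was given on the command line
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            py.test.skip("need --runslow option to run")

A test would then opt in with the ``@py.test.mark.slow`` decorator.
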
distributing tests to your CPUs and SSH accounts
==========================================================

.. _`pytest-xdist`: plugin/xdist.html

Through the use of the separately released `pytest-xdist`_ plugin you
can seamlessly distribute runs to multiple CPUs or remote computers
through SSH and sockets.  This plugin also offers a ``--looponfailing``
mode which will continuously re-run only failing tests in a subprocess.

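For illustration, with the plugin installed, invocations along these
lines distribute a test run (the host name is a placeholder; see the
`pytest-xdist`_ documentation for the authoritative option list)::

    py.test -n 4                            # send tests to 4 local subprocesses
    py.test --dist=load --tx ssh=user@host  # distribute tests to an SSH account
    py.test --looponfailing                 # keep re-running the failing tests
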
supports several testing practices and methods
==================================================================

py.test supports many testing methods conventionally used in
the Python community.  It runs traditional `unittest.py`_ and
`doctest.py`_ tests, and supports `xUnit style setup`_ as well as
nose_-specific setups and test suites.  It offers a minimal,
no-boilerplate model for configuring and deploying tests written
as simple Python functions or methods.  It also integrates
`coverage testing with figleaf`_ and `JavaScript unit- and
functional testing`_.

.. _`JavaScript unit- and functional testing`: plugin/oejskit.html
.. _`coverage testing with figleaf`: plugin/figleaf.html

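For example, xUnit style setup uses plain module-level hook functions;
the log file and test below are made up for illustration::

    def setup_module(module):
        # runs once before the first test in this module
        module.logfile = open("test-run.log", "w")

    def teardown_module(module):
        # runs once after the last test in this module
        module.logfile.close()

    def test_logfile_is_open():
        assert not logfile.closed
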
integrates well with CI systems
====================================================

py.test can produce JUnitXML style output as well as formatted
"resultlog" files that can easily be postprocessed by Continuous
Integration systems such as Hudson or Buildbot.  It also provides
command line options to control test configuration lookup behaviour
or to ignore certain tests or directories.

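For example, report files are requested on the command line (the file
and directory names below are placeholders)::

    py.test --junitxml=result.xml    # JUnitXML output, e.g. for Hudson
    py.test --resultlog=result.log   # plain-text result log, e.g. for Buildbot
    py.test --ignore=doc/examples    # leave out a directory during collection
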
no-boilerplate test functions with Python
===================================================

.. _`autocollect`:

automatic Python test discovery
------------------------------------

By default, all python modules with a ``test_*.py``
filename are inspected for tests (see the example below):

* functions with a name beginning with ``test_``
* classes with a name starting with ``Test`` and ``test``-prefixed methods
* ``unittest.TestCase`` subclasses

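For illustration, all tests in a (hypothetical) file named
``test_sample.py`` like the following are discovered without any
registration::

    # test_sample.py
    import unittest

    def test_function():                 # test_-prefixed function
        assert 2 + 2 == 4

    class TestClass:                     # Test-prefixed class
        def test_method(self):           # test-prefixed method
            assert "py.test".startswith("py")

    class TestCaseExample(unittest.TestCase):  # unittest.TestCase subclass
        def test_unittest_style(self):
            self.assertEqual(1, 1)
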
parametrizing test functions and advanced functional testing
--------------------------------------------------------------

py.test offers the unique `funcargs mechanism`_ for setting up
and passing project-specific objects to Python test functions.
Test parametrization happens by triggering a call to the same test
function with different argument values.  Writing fixtures with the
funcarg mechanism makes your test and setup code more efficient and
more readable.  This is especially true for functional tests which
might depend on command line options and a setup that needs to be
shared across a whole test run.

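A minimal sketch of both ideas follows; the ``myresource`` and
``number`` names are invented for illustration (see the `funcargs
mechanism`_ documentation for the full API)::

    # in conftest.py or the test module itself
    def pytest_funcarg__myresource(request):
        # build the project-specific object handed to each requesting test
        return {"db_url": "sqlite://", "debug": True}

    def test_connect(myresource):
        # py.test calls the factory above and passes its result in
        assert myresource["debug"]

    def pytest_generate_tests(metafunc):
        # parametrize: call tests accepting "number" once per value
        if "number" in metafunc.funcargnames:
            for value in (1, 2, 3):
                metafunc.addcall(funcargs=dict(number=value))

    def test_positive(number):
        assert number > 0
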
per-test capturing of output, including subprocesses
----------------------------------------------------

By default, ``py.test`` captures all writes to stdout/stderr.
Output from ``print`` statements as well as from subprocesses
is captured_.  When a test fails, the associated captured output
is shown.  This allows you to put debugging print statements in
your code without being overwhelmed by all the output that might
be generated by tests that do not fail.

.. _captured: plugin/capture.html

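For example, the debugging output of this (made-up) test, written in
Python 2 ``print`` syntax, only appears in the report if its assertion
fails::

    def test_compute():
        intermediate = 6 * 7
        print "intermediate value:", intermediate   # shown only on failure
        assert intermediate == 42
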
assert with the ``assert`` statement
----------------------------------------

``py.test`` allows you to use the standard Python ``assert``
statement for verifying expectations and values in Python tests.
For example, you can write the following in your tests::

    assert hasattr(x, 'attribute')

to state that your object has a certain ``attribute``.  In case this
assertion fails you will see the value of ``x``.  Intermediate
values are computed by executing the assert expression a second time.
If you execute code with side effects, e.g. read from a file like this::

    assert f.read() != '...'

then you may get a warning from py.test if that assertion
first failed and then succeeded.

asserting expected exceptions
----------------------------------------

In order to write assertions about exceptions, you use
one of two forms::

    py.test.raises(Exception, func, *args, **kwargs)
    py.test.raises(Exception, "func(*args, **kwargs)")

both of which execute the specified function with args and kwargs and
assert that the given ``Exception`` is raised.  The reporter will
provide you with helpful output in case of failures such as *no
exception* or *wrong exception*.

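A concrete usage of the first form, asserting that ``int()`` rejects
a non-numeric string::

    import py

    def test_invalid_literal():
        # passes because int('x1') raises ValueError
        py.test.raises(ValueError, int, 'x1')
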
information-rich tracebacks, PDB introspection
-------------------------------------------------------

.. _`example tracebacks`: http://paste.pocoo.org/show/134814/

A lot of care is taken to present useful failure information
and in particular nice and concise Python tracebacks.  This
is especially useful if you need to regularly look at failures
from nightly runs, i.e. are detached from the actual test
running session.  Here are `example tracebacks`_ for a number of
failing test functions.  You can modify traceback printing styles
through the command line.  Using the ``--pdb`` option you can
automatically activate a PDB `Python debugger`_ when a test fails.

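Typical invocations look like this (run ``py.test -h`` for the full,
authoritative option list)::

    py.test --tb=short   # shorter traceback format
    py.test --tb=no      # no tracebacks at all
    py.test --pdb        # drop into the Python debugger on failures
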
advanced skipping of tests
======================================

py.test has `advanced support for skipping tests`_ or expecting
test failures on certain platforms.  Apart from minimal py.test-style
tests, unittest- and nose-style tests can also make use of this
feature.

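For instance, a test can declare a platform-specific skip with a mark;
the condition string below is evaluated by the skipping plugin, and
the test body is a made-up example::

    import os
    import py

    @py.test.mark.skipif("sys.platform == 'win32'")
    def test_posix_fork_available():
        # only runs on non-Windows platforms
        assert hasattr(os, "fork")
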
.. _`advanced support for skipping tests`: plugin/skipping.html
.. _`funcargs mechanism`: funcargs.html
.. _`unittest.py`: http://docs.python.org/library/unittest.html
.. _`doctest.py`: http://docs.python.org/library/doctest.html
.. _`xUnit style setup`: xunit_setup.html
.. _`pytest_nose`: plugin/nose.html

advanced test selection and running modes
=========================================================

.. _`selection by keyword`:

``py.test --looponfailing`` (implemented through the external
`pytest-xdist`_ plugin) allows you to run a test suite,
memorize all failures and then loop over the failing set
of tests until they all pass.  It will re-start running
the tests when it detects file changes in your project.

You can selectively run tests by specifying a keyword
on the command line.  Examples::

    py.test -k test_simple
    py.test -k "-test_simple"

will run all tests matching (or not matching) the
"test_simple" keyword.  Note that you need to quote
the keyword if "-" is recognized as an indicator
for a command line option.  Lastly, you may use::

    py.test -k "test_simple:"

which will run all tests after the expression has *matched once*, i.e.
all tests that are seen after a test that matches the "test_simple"
keyword.

By default, all filename parts and class/function names of a test
function are put into the set of keywords for a given test.  You can
specify additional keywords like this::

    @py.test.mark.webtest
    def test_send_http():
        ...

and then use those keywords to select tests.  See the `pytest_keyword`_
plugin for more information.

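With the mark above in place, the custom keyword selects those tests
on the command line::

    py.test -k webtest
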
.. _`pytest_keyword`: plugin/mark.html

.. _`reStructured Text`: http://docutils.sourceforge.net
.. _`Python debugger`: http://docs.python.org/lib/module-pdb.html
.. _nose: http://somethingaboutorange.com/mrl/projects/nose/