==================================================
py.test Features
==================================================

py.test is an extensible tool for running all kinds
of tests on one or more machines. It supports a variety
of testing methods for your Python application and modules,
including unit, functional, integration and doc-testing.

It is used in projects that run more than 10000 tests
daily as well as in single-python-module projects.

py.test presents a clean and powerful command line interface
and generally strives to make testing a fun effort.

py.test 1.0 works across linux, windows and osx
and on Python 2.3 - Python 2.6.

More detailed feature list:

.. contents::
    :depth: 1

automatically collects and executes tests
===============================================

py.test discovers tests automatically by inspecting specified
directories or files. By default, it collects all python
modules with a leading ``test_`` or trailing ``_test`` filename.
From each test module, every function with a leading ``test_``
and every class with a leading ``Test`` name is collected.

.. _`collection process`: extend.html#collection-process

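For illustration, a file named ``test_example.py`` (a made-up name
following the ``test_`` convention) is picked up automatically, and both
the plain function and the methods of the ``Test``-prefixed class become
test items::

    # content of test_example.py
    def test_upper():
        assert "pytest".upper() == "PYTEST"

    class TestNumbers:
        def test_addition(self):
            assert 1 + 1 == 2
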
funcargs and xUnit style setups
===================================================

py.test provides powerful means for managing test
state and fixtures. Apart from the `traditional
xUnit style setup`_ for unittests it features the
simple and powerful `funcargs mechanism`_ for handling
both complex and simple test scenarios.

.. _`funcargs mechanism`: funcargs.html
.. _`traditional xUnit style setup`: xunit_setup.html

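As a rough sketch only (the fixture and argument names are invented here,
and the ``setup_module`` / ``pytest_funcarg__`` spellings are those
described on the linked pages), an xUnit-style module setup and a funcarg
provider can live side by side in a test module::

    # xUnit style: run once before any test in this module
    def setup_module(module):
        module.samples = [1, 2, 3]

    def test_xunit_setup():
        assert len(samples) == 3

    # funcarg style: the provider supplies the "numbers" argument by name
    def pytest_funcarg__numbers(request):
        return [1, 2, 3]

    def test_funcarg(numbers):
        assert sum(numbers) == 6
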
load-balance tests to multiple CPUs
===================================

For large test suites you can distribute your
tests to multiple CPUs by issuing for example::

    py.test -n 3

Read more on `distributed testing`_.

.. _`distributed testing`: dist.html

Distribute tests across machines
===================================

py.test supports the sending of tests to
remote ssh-accounts or socket servers.
It can `ad-hoc run your tests on multiple
platforms in a single test run`_. Ad-hoc
means that there are **no installation
requirements whatsoever** on the remote side.

.. _`ad-hoc run your tests on multiple platforms in a single test run`: dist.html#atonce

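For example (the hostnames and the exact option spellings below are
illustrative; see the `distributed testing`_ page for the authoritative
invocation), a run against an ssh account and a socket server might look
like this::

    py.test --dist=each \
            --tx ssh=user@example.com \
            --tx socket=192.168.1.102:8888 \
            --rsyncdir mypkg

The ad-hoc mechanism transfers (rsyncs) your code to the remote side for
the duration of the run, which is why nothing needs to be installed there.
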
extensive debugging support
===================================

testing starts immediately
--------------------------

Testing starts as soon as the first ``test item``
is collected. The collection process is iterative
and does not need to complete before your first
test items are executed.

support for modules containing tests
--------------------------------------

As ``py.test`` operates as a separate command line
tool you can easily have a command line utility and
some tests in the same file.

debug with the ``print`` statement
----------------------------------

By default, ``py.test`` catches text written to stdout/stderr during
the execution of each individual test. This output is only
displayed, however, if the test fails; you will not see it
otherwise. This allows you to put debugging print statements in your
code without being overwhelmed by all the output that might be
generated by tests that do not fail.

Each failing test that produced output during the running of the test
function will have its output displayed in the ``recorded stdout`` section.

During setup and teardown ("fixture") capturing is performed separately, so
you will only see this output if the actual fixture functions fail.

The catching of stdout/stderr output can be disabled using the
``--nocapture`` or ``-s`` option to the ``py.test`` tool. Any output will
in this case be displayed as soon as it is generated.

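A minimal illustration (both test functions are made up for this example):
the print output of the passing test is swallowed, while the failing test's
output appears in its ``recorded stdout`` section::

    def test_quiet():
        print "you will not see this"       # captured, test passes
        assert True

    def test_noisy():
        x = 42
        print "debugging value:", x         # shown because the test fails
        assert x == 43
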
test execution order
--------------------------------

Tests usually run in the order in which they appear in the files.
However, tests should not rely on running one after another, as
this prevents more advanced usages: running tests in a distributed
or selective way, or in "looponfailing" mode,
will cause them to run in random order.

assert with the ``assert`` statement
----------------------------------------

``py.test`` allows you to use the standard python
``assert`` statement for verifying expectations
and values in Python tests. For example, you can
write the following in your tests::

    assert hasattr(x, 'attribute')

to state that your object has a certain ``attribute``. In case this
assertion fails you will see the value of ``x``. Intermediate
values are computed by executing the assert expression a second time.
If you execute code with side effects, e.g. read from a file like this::

    assert f.read() != '...'

then you may get a warning from pytest if that assertion
first failed and then succeeded.

asserting expected exceptions
----------------------------------------

In order to write assertions about exceptions, you use
one of two forms::

    py.test.raises(Exception, func, *args, **kwargs)
    py.test.raises(Exception, "func(*args, **kwargs)")

both of which execute the given function with args and kwargs and
assert that the given ``Exception`` is raised. The reporter will
provide you with helpful output in case of failures such as *no
exception* or *wrong exception*.

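A concrete sketch, using the builtin ``int`` purely for illustration::

    import py

    def test_conversion_error():
        # int() raises ValueError for a non-numeric string
        py.test.raises(ValueError, int, "not-a-number")

    def test_conversion_error_string_form():
        # the string form is evaluated and must raise the exception
        py.test.raises(ValueError, "int('not-a-number')")
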
useful tracebacks, recursion detection
--------------------------------------

A lot of care is taken to present nice tracebacks in case of test
failure. Try::

    py.test py/doc/example/pytest/failure_demo.py

to see a variety of tracebacks, each representing a different
failure situation.

``py.test`` uses the same order for presenting tracebacks as Python
itself: the oldest function call is shown first, and the most recent call is
shown last. A ``py.test`` reported traceback starts with your
failing test function. If the maximum recursion depth has been
exceeded during the running of a test, for instance because of
infinite recursion, ``py.test`` will indicate where in the
code the recursion was taking place. You can inhibit
traceback "cutting" magic by supplying ``--fulltrace``.

There is also the possibility of using ``--tb=short`` to get regular CPython
tracebacks. Or you can use ``--tb=no`` to not show any tracebacks at all.

no inheritance requirement
--------------------------

Test classes are recognized by their leading ``Test`` name. Unlike
``unittest.py``, you don't need to inherit from some base class to make
them be found by the test runner. Besides being easier, it also allows
you to write test classes that subclass from application level
classes.

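For instance, a test class may derive from one of your own application
classes (``Calculator`` below is a made-up example) and still be collected,
because only the ``Test`` prefix matters::

    class Calculator(object):          # imagined application-level class
        def add(self, a, b):
            return a + b

    class TestCalculator(Calculator):  # collected because of the Test prefix
        def test_add(self):
            assert self.add(2, 3) == 5
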
testing for deprecated APIs
------------------------------

In your tests you can use ``py.test.deprecated_call(func, *args, **kwargs)``
to test that a particular function call triggers a DeprecationWarning.
This is useful for testing the phasing out of old APIs in your projects.

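A small sketch (the ``old_api`` function is invented for this example)::

    import warnings
    import py

    def old_api():
        # an application function that has been deprecated
        warnings.warn("use new_api() instead", DeprecationWarning)
        return 42

    def test_old_api_is_deprecated():
        py.test.deprecated_call(old_api)
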
advanced test selection / skipping
=========================================================

dynamically skipping tests
-------------------------------

If you want to skip tests you can use ``py.test.skip`` within
test or setup functions. Example::

    py.test.skip("message")

You can also use a helper to skip on a failing import::

    docutils = py.test.importorskip("docutils")

or to skip if a library does not have the right version::

    docutils = py.test.importorskip("docutils", minversion="0.3")

The version will be read from the module's ``__version__`` attribute.

.. _`selection by keyword`:

selecting/unselecting tests by keyword
---------------------------------------------

Pytest's keyword mechanism provides a powerful way to
group and selectively run tests in your test code base.
You can selectively run tests by specifying a keyword
on the command line. Examples::

    py.test -k test_simple
    py.test -k "-test_simple"

will run all tests matching (or not matching) the
"test_simple" keyword. Note that you need to quote
the keyword if "-" is recognized as an indicator
for a command line option. Lastly, you may use::

    py.test -k "test_simple:"

which will run all tests after the expression has *matched once*, i.e.
all tests that are seen after a test that matches the "test_simple"
keyword.

By default, all filename parts and
class/function names of a test function are put into the set
of keywords for a given test. You may specify additional
keywords like this::

    @py.test.mark(webtest=True)
    def test_send_http():
        ...

disabling a test class
----------------------

If you want to disable a complete test class you
can set the class-level attribute ``disabled``.
For example, in order to avoid running some tests on Win32::

    class TestPosixOnly:
        disabled = sys.platform == 'win32'

        def test_xxx(self):
            ...

.. _`test generators`: funcargs.html#test-generators

.. _`generative tests`:

generative tests: yielding parametrized tests
====================================================

Deprecated since 1.0 in favour of `test generators`_.

*Generative tests* are test methods that are *generator functions* which
``yield`` callables and their arguments. This is useful for running a
test function multiple times against different parameters. Example::

    def test_generative():
        for x in (42, 17, 49):
            yield check, x

    def check(arg):
        assert arg % 7 == 0   # second generated test fails!

Note that ``test_generative()`` will cause three tests
to get run, namely ``check(42)``, ``check(17)`` and ``check(49)``,
of which the middle one will obviously fail.

To make it easier to distinguish the generated tests it is possible
to specify an explicit name for them, for example::

    def test_generative():
        for x in (42, 17, 49):
            yield "case %d" % x, check, x

easy to extend
=========================================

Since 1.0 py.test has advanced `extension mechanisms`_.
One can easily modify or add aspects for
purposes such as:

* reporting extensions
* customizing collection and execution of tests
* running non-python tests
* managing custom test state setup

.. _`extension mechanisms`: extend.html

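As a rough sketch of what such an extension can look like (the hook name
``pytest_runtest_setup`` is assumed from the extension documentation, and
the skipping condition is invented for this example), a per-project
``conftest.py`` can hook into test execution::

    # conftest.py -- picked up automatically for tests below this directory
    import sys
    import py

    def pytest_runtest_setup(item):
        # invented example: skip every test of this project on win32
        if sys.platform == "win32":
            py.test.skip("these tests are posix-only")
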
.. _`reStructured Text`: http://docutils.sourceforge.net
.. _`Python debugger`: http://docs.python.org/lib/module-pdb.html

.. _nose: http://somethingaboutorange.com/mrl/projects/nose/