[svn r63200] new docs about testing

refactoring of documentation
new entry page

--HG--
branch : trunk
This commit is contained in:
hpk 2009-03-22 01:38:43 +01:00
parent 9bfdb42273
commit 887a837600
10 changed files with 600 additions and 737 deletions


@@ -1,6 +1,8 @@
py lib contact and communication
===================================
- You may also subscribe to `tetamap`_, where Holger Krekel
  often posts about testing and py.test related news.
- **#pylib on irc.freenode.net**: you are welcome
  to lurk or ask questions in this IRC channel, it also tracks py lib commits.
@@ -20,6 +22,8 @@ py lib contact and communication
.. _`get an account`:
.. _tetamap: http://tetamap.wordpress.com
get an account on codespeak
---------------------------


@@ -131,28 +131,6 @@ conftest.py configurations::

    py.test --traceconfig
adding custom options
+++++++++++++++++++++++
To register a project-specific command line option
you may have the following code within a ``conftest.py`` file::

    import py
    Option = py.test.config.Option
    option = py.test.config.addoptions("pypy options",
        Option('-V', '--view', action="store_true", dest="view", default=False,
               help="view translation tests' flow graphs with Pygame"),
    )

and you can then access ``option.view`` like this::

    if option.view:
        print "view this!"

The option will be available if you type ``py.test -h``.
Note that you may only register upper-case short
options; ``py.test`` reserves all lower-case
short options for its own cross-project usage.
customizing the collecting and running process
-----------------------------------------------
@@ -245,7 +223,7 @@ useful for calling application test machinery with different
parameter sets but counting each of the calls as a separate
test.
.. _`generative tests`: test-features.html#generative-tests
The other extension possibility is about
specifying a custom test ``Item`` class which


@@ -1,28 +1,23 @@
Test configuration
========================
test options and values
-----------------------------
You can see all available command line options by running::

    py.test -h

py.test will look up values of options in this order:

* option value supplied at command line
* content of environment variable ``PYTEST_OPTION_NAME=...``
* ``name = ...`` setting in the nearest ``conftest.py`` file.

The name of an option usually is the one you find
in the long form of the option, i.e. the name
behind the ``--`` double-dash.
IOW, you can set default values for options per project, per
home directory, per shell session or per test-run.

py/doc/test-dist.txt Normal file

@@ -0,0 +1,107 @@
.. _`distribute tests across machines`:

``py.test`` can ad-hoc distribute test runs to multiple CPUs or remote
machines. This allows you to speed up development or to use special resources
of remote machines. Before running tests remotely, ``py.test`` efficiently
synchronizes your program source code to the remote place. All test results
are reported back and displayed to your local test session. You may
specify different Python versions and interpreters.
Speed up test runs by sending tests to multiple CPUs
----------------------------------------------------------
To send tests to multiple CPUs, type::

    py.test -n NUM
Especially for longer running tests or tests requiring
a lot of IO this can lead to considerable speed ups.
Test on a different python interpreter
----------------------------------------------------------
To send tests to a python2.4 process, you may type::

    py.test --tx popen//python=python2.4
This will start a subprocess which is run with the "python2.4"
Python interpreter, found in your system binary lookup path.
.. For convenience you may prepend ``3*`` to create three sub processes.
Sending tests to remote SSH accounts
------------------------------------
Suppose you have a package ``mypkg`` which contains some
tests that you can successfully run locally. And you
have an ssh-reachable machine ``myhost``. Then
you can ad-hoc distribute your tests by typing::

    py.test -d --tx ssh=myhost --rsyncdir mypkg mypkg

This will synchronize your ``mypkg`` package directory
to a remote ssh account and then locally collect tests
and send them to remote places for execution.
You can specify multiple ``--rsyncdir`` directories
to be sent to the remote side.
Sending tests to remote Socket Servers
----------------------------------------
Download the single-module `socketserver.py`_ Python program
and run it on the specified hosts like this::

    python socketserver.py

It will tell you that it starts listening. On your local
machine you can now direct a test run at this socket host
with something like this::

    py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
no remote installation requirements
++++++++++++++++++++++++++++++++++++++++++++
Synchronisation and running of tests only requires
a bare Python installation on the remote side. No
special software needs to be installed - this is realized
through the *zero installation* principle of the underlying
`py.execnet`_ mechanisms.
.. _`socketserver.py`: ../execnet/script/socketserver.py
.. _`py.execnet`: execnet.html
Differences from local tests
----------------------------
* test order is rather random (instead of in file order)
* the test process may hang due to network problems
* you may not reference files outside of rsynced directory structures
Specifying test exec environments in a conftest.py
-------------------------------------------------------------
Instead of specifying command line options, you can
put option values in a ``conftest.py`` file like this::

    pytest_option_tx = ['ssh=myhost//python=python2.5', 'popen//python=python2.5']
    pytest_option_dist = True

Any commandline ``--tx`` specifications will add to the list of available execution
environments.
Specifying "rsync" dirs in a conftest.py
-------------------------------------------------------------
In your ``mypkg/conftest.py`` you may specify directories to synchronise
or to exclude::

    rsyncdirs = ['.', '../plugins']
    rsyncignore = ['_cache']
These directory specifications are relative to the directory
where the ``conftest.py`` is found.

py/doc/test-examples.txt Normal file

@@ -0,0 +1,54 @@
Working Examples
================
managing state at module, class and method level
------------------------------------------------------------
Here is a working example for what goes on when you setup modules,
classes and methods::

    # [[from py/documentation/example/pytest/test_setup_flow_example.py]]
    def setup_module(module):
        module.TestStateFullThing.classcount = 0

    class TestStateFullThing:
        def setup_class(cls):
            cls.classcount += 1

        def teardown_class(cls):
            cls.classcount -= 1

        def setup_method(self, method):
            self.id = eval(method.func_name[5:])

        def test_42(self):
            assert self.classcount == 1
            assert self.id == 42

        def test_23(self):
            assert self.classcount == 1
            assert self.id == 23

    def teardown_module(module):
        assert module.TestStateFullThing.classcount == 0
For this example the control flow happens as follows::

    import test_setup_flow_example
    setup_module(test_setup_flow_example)
        setup_class(TestStateFullThing)
        instance = TestStateFullThing()
        setup_method(instance, instance.test_42)
        instance.test_42()
        setup_method(instance, instance.test_23)
        instance.test_23()
        teardown_class(TestStateFullThing)
    teardown_module(test_setup_flow_example)
Note that ``setup_class(TestStateFullThing)`` is called and not
``TestStateFullThing.setup_class()`` which would require you
to insert ``setup_class = classmethod(setup_class)`` to make
your setup function callable. Did we mention that laziness
is a virtue?

py/doc/test-ext.txt Normal file

@@ -0,0 +1,61 @@
Learning by examples
------------------------
XXX
adding custom options
+++++++++++++++++++++++
py.test supports adding of standard optparse_ options.
A plugin may implement the ``pytest_addoption`` hook for registering
custom options::

    class ConftestPlugin:
        def pytest_addoption(self, parser):
            parser.addoption("-M", "--myopt", action="store",
                help="specify string to set myopt")

        def pytest_configure(self, config):
            if config.option.myopt:
                ... # do action based on option value

.. _optparse: http://docs.python.org/library/optparse.html
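Such options build on the standard library's optparse machinery. Here is a standalone sketch of what the registration amounts to, using plain optparse rather than the actual plugin parser::

```python
from optparse import OptionParser

parser = OptionParser()
# the same registration a pytest_addoption hook would perform
parser.add_option("-M", "--myopt", action="store",
                  help="specify string to set myopt")

# parse an example command line
options, args = parser.parse_args(["-M", "hello", "test_sample.py"])
```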
Setting default values for test options
-----------------------------------------
You can see all available command line options by running::

    py.test -h

py.test will look up values of options in this order:

* option value supplied at command line
* content of environment variable ``PYTEST_OPTION_NAME=...``
* ``name = ...`` setting in the nearest ``conftest.py`` file.

The name of an option usually is the one you find
in the long form of the option, i.e. the name
behind the ``--`` double-dash.
IOW, you can set default values for options per project, per
home directory, per shell session or per test-run.
Plugin methods
----------------------------------
A Plugin class may implement the following attributes and methods:
XXX
_`pytest event`:
Pytest Events
-------------------
XXX

py/doc/test-features.txt Normal file

@@ -0,0 +1,246 @@
Basic Features of ``py.test``
=============================
automatic collection of tests on all levels
-------------------------------------------
The automated test collection process walks the current
directory (or the directory given as a command line argument)
and all its subdirectories and collects python modules with a
leading ``test_`` or trailing ``_test`` filename. From each
test module every function with a leading ``test_`` or class with
a leading ``Test`` name is collected. The collecting process can
be customized at directory, module or class level. (see
`collection process`_ for some implementation details).
.. _`generative tests`:
.. _`collection process`: impl-test.html#collection-process
assert with the ``assert`` statement
------------------------------------
``py.test`` allows you to use the standard python
``assert`` statement for verifying expectations
and values in Python tests. For example, you can
write the following in your tests::

    assert hasattr(x, 'attribute')

to state that your object has a certain ``attribute``. In case this
assertion fails you will see the value of ``x``. Intermediate
values are computed by executing the assert expression a second time.
If you execute code with side effects, e.g. read from a file like this::

    assert f.read() != '...'

then you may get a warning from pytest if that assertion
first failed and then succeeded.
asserting expected exceptions
----------------------------------------------
In order to write assertions about exceptions, you use
one of two forms::

    py.test.raises(Exception, func, *args, **kwargs)
    py.test.raises(Exception, "func(*args, **kwargs)")

both of which execute the given function with args and kwargs and
assert that the given ``Exception`` is raised. The reporter will
provide you with helpful output in case of failures such as *no
exception* or *wrong exception*.
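The first calling form can be approximated in a few lines of plain Python. This is a simplified sketch (``ExceptionFailure`` is a made-up name), not py.test's implementation::

```python
class ExceptionFailure(AssertionError):
    """Raised when the expected exception did not occur."""

def raises(expected, func, *args, **kwargs):
    # run func and verify it raises the expected exception;
    # a *wrong* exception simply propagates to the caller
    try:
        func(*args, **kwargs)
    except expected as exc:
        return exc  # returned so callers can inspect it
    raise ExceptionFailure("DID NOT RAISE %r" % (expected,))

exc = raises(ZeroDivisionError, lambda: 1 / 0)
```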
dynamically skipping tests
----------------------------------------
If you want to skip tests you can use ``py.test.skip`` within
test or setup functions. Example::

    py.test.skip("message")

You can also use a helper to skip on a failing import::

    docutils = py.test.importorskip("docutils")

or to skip if a library does not have the right version::

    docutils = py.test.importorskip("docutils", minversion="0.3")

The version will be read from the module's ``__version__`` attribute.
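A rough model of what such a helper does, with a naive string-based version comparison (the real helper is more careful) and a made-up ``Skipped`` exception::

```python
class Skipped(Exception):
    """Signals that the requiring test should be skipped."""

def importorskip(modname, minversion=None):
    # import the module or raise Skipped instead of failing
    try:
        mod = __import__(modname)
    except ImportError:
        raise Skipped("could not import %r" % modname)
    # naive string comparison of __version__; real code parses versions
    if minversion is not None and getattr(mod, "__version__", "") < minversion:
        raise Skipped("%s too old, need >= %s" % (modname, minversion))
    return mod

json = importorskip("json")
```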
generative tests: yielding more tests
-------------------------------------
*Generative tests* are test methods that are *generator functions* which
``yield`` callables and their arguments. This is most useful for running a
test function multiple times against different parameters. Example::

    def test_generative():
        for x in (42,17,49):
            yield check, x

    def check(arg):
        assert arg % 7 == 0 # second generated test fails!
Note that ``test_generative()`` will cause three tests
to get run, notably ``check(42)``, ``check(17)`` and ``check(49)``
of which the middle one will obviously fail.
To make it easier to distinguish the generated tests it is possible to specify an explicit name for them, for example::

    def test_generative():
        for x in (42,17,49):
            yield "case %d" % x, check, x
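Since a generative test is just a generator function, you can see what the collector works with by iterating it by hand:

```python
def check(arg):
    assert arg % 7 == 0

def test_generative():
    # yields (name, callable, argument) triples
    for x in (42, 17, 49):
        yield "case %d" % x, check, x

# consume the generator the way a collector would
results = []
for name, func, arg in test_generative():
    try:
        func(arg)
        results.append((name, "passed"))
    except AssertionError:
        results.append((name, "failed"))
```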
.. _`selection by keyword`:
selecting/unselecting tests by keyword
---------------------------------------------
Pytest's keyword mechanism provides a powerful way to
group and selectively run tests in your test code base.
You can selectively run tests by specifying a keyword
on the command line. Examples::

    py.test -k test_simple
    py.test -k "-test_simple"

will run all tests matching (or not matching) the
"test_simple" keyword. Note that you need to quote
the keyword if "-" is recognized as an indicator
for a commandline option. Lastly, you may use::

    py.test -k "test_simple:"

which will run all tests after the expression has *matched once*, i.e.
all tests that are seen after a test that matches the "test_simple"
keyword.
By default, all filename parts and
class/function names of a test function are put into the set
of keywords for a given test. You may specify additional
keywords like this::

    @py.test.mark(webtest=True)
    def test_send_http():
        ...
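The selection logic can be modeled roughly like this. It is a simplified sketch: the real keyword derivation and the ``:`` syntax are richer, and both function names here are hypothetical::

```python
def keywords_for(filepath, name, extra=()):
    # filename parts plus the test name form the keyword set
    parts = filepath.replace("/", " ").replace(".", " ").split()
    return set(parts) | {name} | set(extra)

def selected(keywords, expression):
    # a leading "-" unselects tests matching the keyword
    if expression.startswith("-"):
        return expression[1:] not in keywords
    return expression in keywords

kw = keywords_for("tests/test_web.py", "test_send_http", extra=["webtest"])
```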
testing with multiple python versions / executables
---------------------------------------------------
With ``--tx EXECUTABLE`` you can specify a python
executable (e.g. ``python2.2``) with which the tests
will be executed.
testing starts immediately
--------------------------
Testing starts as soon as the first ``test item``
is collected. The collection process is iterative
and does not need to complete before your first
test items are executed.
support for modules containing tests
--------------------------------------
As ``py.test`` operates as a separate cmdline
tool you can easily have a command line utility and
some tests in the same file.
debug with the ``print`` statement
----------------------------------
By default, ``py.test`` catches text written to stdout/stderr during
the execution of each individual test. This output will only be
displayed however if the test fails; you will not see it
otherwise. This allows you to put debugging print statements in your
code without being overwhelmed by all the output that might be
generated by tests that do not fail.
Each failing test that produced output during the running of the test
will have its output displayed in the ``recorded stdout`` section.
The catching of stdout/stderr output can be disabled using the
``--nocapture`` option to the ``py.test`` tool. Any output will
in this case be displayed as soon as it is generated.
test execution order
--------------------------------
Tests usually run in the order in which they appear in the files.
However, tests should not rely on running one after another, as
this prevents more advanced usages: running tests
distributedly or selectively, or in "looponfailing" mode,
will cause them to run in random order.
useful tracebacks, recursion detection
--------------------------------------
A lot of care is taken to present nice tracebacks in case of test
failure. Try::

    py.test py/doc/example/pytest/failure_demo.py

to see a variety of 17 tracebacks, each tailored to a different
failure situation.
``py.test`` uses the same order for presenting tracebacks as Python
itself: the oldest function call is shown first, and the most recent call is
shown last. A ``py.test`` reported traceback starts with your
failing test function. If the maximum recursion depth has been
exceeded during the running of a test, for instance because of
infinite recursion, ``py.test`` will indicate where in the
code the recursion was taking place. You can inhibit
traceback "cutting" magic by supplying ``--fulltrace``.
There is also the possibility of using ``--tb=short`` to get regular CPython
tracebacks. Or you can use ``--tb=no`` to not show any tracebacks at all.
no inheritance requirement
--------------------------
Test classes are recognized by their leading ``Test`` name. Unlike
``unittest.py``, you don't need to inherit from some base class to make
them be found by the test runner. Besides being easier, it also allows
you to write test classes that subclass from application level
classes.
disabling a test class
----------------------
If you want to disable a complete test class you
can set the class-level attribute ``disabled``.
For example, in order to avoid running some tests on Win32::

    class TestPosixOnly:
        disabled = sys.platform == 'win32'

        def test_xxx(self):
            ...
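A collector honoring such an attribute reduces to a simple filter; a minimal sketch, where ``collectable`` is a hypothetical helper, not py.test API::

```python
import sys

class TestPosixOnly:
    disabled = sys.platform == 'win32'
    def test_one(self):
        pass

class TestAlways:
    def test_two(self):
        pass

def collectable(cls):
    # skip classes whose 'disabled' attribute evaluates true
    return not getattr(cls, "disabled", False)

classes = [c for c in (TestPosixOnly, TestAlways) if collectable(c)]
```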
testing for deprecated APIs
------------------------------
In your tests you can use ``py.test.deprecated_call(func, *args, **kwargs)``
to test that a particular function call triggers a DeprecationWarning.
This is useful for testing phasing out of old APIs in your projects.
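A minimal stand-in for such a helper can be built on the standard ``warnings`` module. This is a sketch, not py.test's implementation::

```python
import warnings

def deprecated_call(func, *args, **kwargs):
    # call func and assert it triggered a DeprecationWarning
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = func(*args, **kwargs)
    assert any(issubclass(w.category, DeprecationWarning) for w in caught), \
        "did not produce DeprecationWarning"
    return result

def old_api():
    warnings.warn("use new_api() instead", DeprecationWarning)
    return 42

value = deprecated_call(old_api)
```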
doctest support
-------------------
If you want to integrate doctests, ``py.test`` now by default
picks up files matching the ``test_*.txt`` or ``*_test.txt``
patterns and processes them as text files containing doctests.
This is an experimental feature and likely to change
its implementation.
.. _`reStructured Text`: http://docutils.sourceforge.net
.. _`Python debugger`: http://docs.python.org/lib/module-pdb.html
.. _nose: http://somethingaboutorange.com/mrl/projects/nose/


@@ -1,67 +1,56 @@
pytest plugins
==================
Many of py.test's features are implemented as plugins.

Available plugins
-----------------------
py.test has a number of default plugins. You can see which
ones by specifying ``--trace=config``.

* adding reporting facilities, examples:

  pytest_terminal: default reporter for writing info to terminals
  pytest_resultlog: log test results in machine-readable form to a file
  pytest_eventlog: log all internal pytest events to a file

* marking and reporting tests specially

  pytest_xfail: "expected to fail" test marker

* funcargs for advanced uses

  pytest_tmpdir: provide temporary directories to test functions
  pytest_plugintester: generic apichecks, support for functional plugin tests
  pytest_pytester: support for testing py.test runs

* extending test execution, e.g.

  pytest_apigen: tracing values of function/method calls when running tests

Loading plugins and specifying dependencies
---------------------------------------------------------
py.test loads and configures plugins at tool startup:

* by reading the ``PYTEST_PLUGINS`` environment variable
  and importing the comma-separated list of plugin names.
* by loading all plugins specified via one or more ``-p name``
  command line options.
* by loading all plugins specified via a ``pytest_plugins``
  variable in ``conftest.py`` files or test modules.

example: ensure a plugin is loaded
++++++++++++++++++++++++++++++++++++
If you create a ``conftest.py`` file with the following content::

    pytest_plugins = "pytest_myextension",

then all tests in that directory and below it will run with
an instantiated "pytest_myextension". Here is how instantiation
takes place:

* the module ``pytest_myextension`` will be imported
  and its contained ``ExtensionPlugin`` class will
  be instantiated. A plugin module may specify its
  dependencies via another ``pytest_plugins`` definition.
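The instantiation step can be pictured with a toy plugin manager. All names here are illustrative, not py.test internals::

```python
class PluginManager:
    """Tiny sketch of plugin registration and a configure hook call."""
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def configure(self, config):
        # call pytest_configure on every plugin that defines it
        for plugin in self.plugins:
            hook = getattr(plugin, "pytest_configure", None)
            if hook is not None:
                hook(config)

class MyExtensionPlugin:
    def __init__(self):
        self.configured = False

    def pytest_configure(self, config):
        self.configured = True

pm = PluginManager()
plugin = MyExtensionPlugin()
pm.register(plugin)
pm.configure(config={})
```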


@@ -0,0 +1,58 @@
.. _`setuptools installation`: http://pypi.python.org/pypi/setuptools

Installing py.test
-------------------------------
This document assumes basic python knowledge. If you have a
`setuptools installation`_, install ``py.test`` by typing::

    easy_install -U py

For alternative installation methods please see the download_ page.
You should now have a ``py.test`` command line tool and can
look at its documented cmdline options via this command::

    py.test -h
Writing and running a test
---------------------------
``py.test`` is the command line tool to run tests.
Let's write a first test module by putting the following
test function into a ``test_sample.py`` file::

    # content of test_sample.py
    def test_answer():
        assert 42 == 43

Now you can run the test by passing it as an argument::

    py.test test_sample.py

What happens here? ``py.test`` looks for functions and
methods in the module that start with ``test_``. It then
executes those tests. Assertions about test outcomes are
done via the standard ``assert`` statement.
You can also use ``py.test`` to run all tests in a directory structure by
invoking it without any arguments::

    py.test

This will automatically collect any Python module whose filename
starts with ``test_`` or ends with ``_test``, from the current directory and any
subdirectories, and run its tests. Each
Python test module is inspected for test methods starting with ``test_``.
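The collection step can be mimicked in a few lines. This is an illustrative sketch of the naming convention only, not py.test's collector::

```python
import types

def collect_test_functions(module):
    # gather every function in the module whose name starts with "test_"
    return [obj for name, obj in vars(module).items()
            if name.startswith("test_") and isinstance(obj, types.FunctionType)]

# build a module object standing in for test_sample.py
test_sample = types.ModuleType("test_sample")
exec("def test_answer():\n    assert 42 == 43\n", test_sample.__dict__)

outcomes = {}
for func in collect_test_functions(test_sample):
    try:
        func()
        outcomes[func.__name__] = "passed"
    except AssertionError:
        outcomes[func.__name__] = "failed"
```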
.. Organising your tests
.. ---------------------------
Please refer to `features`_ for a walk through the basic features.
.. _download: download.html
.. _features: test-features.html


@@ -1,651 +1,22 @@
*py.test* is a tool for:

* rapidly writing unit- and functional tests in Python
* writing tests for non-python code and data
* receiving useful reports on test failures
* distributing tests to multiple CPUs and remote environments

quickstart_: for getting started immediately.

features_: a walk through basic features and usage.

plugins_: using available plugins.

extend_: writing plugins and advanced configuration.

`distributed testing`_: how to distribute test runs to other machines and platforms.

.. _quickstart: test-quickstart.html
.. _features: test-features.html
.. _plugins: test-plugins.html
.. _extend: test-ext.html
.. _`distributed testing`: test-dist.html

================================
The ``py.test`` tool and library
================================
.. contents::
.. sectnum::
This document is about the *usage* of the ``py.test`` testing tool. There is
also a document describing the `implementation and the extending of py.test`_.
.. _`implementation and the extending of py.test`: impl-test.html
starting point: ``py.test`` command line tool
=============================================
We presume you have done an installation as per the
download_ page after which you should be able to execute the
'py.test' tool from a command line shell.
``py.test`` is the command line tool to run tests. You can supply it
with a Python test file (or directory) by passing it as an argument::

    py.test test_sample.py
``py.test`` looks for any functions and methods in the module that
start with ``test_`` and will then run those methods. Assertions
about test outcomes are done via the standard ``assert`` statement.
This means you can write tests without any boilerplate::

    # content of test_sample.py
    def test_answer():
        assert 42 == 43
You may have test functions and test methods; there is no
need to subclass or to put tests into a class.
You can also use ``py.test`` to run all tests in a directory structure by
invoking it without any arguments::
py.test
This will automatically collect any Python module whose filename
starts with ``test_`` or ends with ``_test``, from the current directory and any
subdirectories, and run its tests. Each
Python test module is inspected for test methods starting with ``test_``.
.. _download: download.html
.. _features:
Basic Features of ``py.test``
=============================
assert with the ``assert`` statement
------------------------------------
Writing assertions is very simple and this is one of py.test's
most noticeable features, as you can use the ``assert``
statement with arbitrary expressions. For example you can
write the following in your tests::

    assert hasattr(x, 'attribute')
to state that your object has a certain ``attribute``. In case this
assertion fails the test ``reporter`` will provide you with a very
helpful analysis and a clean traceback.
how to write assertions about exceptions
----------------------------------------
In order to write assertions about exceptions, you use
one of two forms::

    py.test.raises(Exception, func, *args, **kwargs)
    py.test.raises(Exception, "func(*args, **kwargs)")

both of which execute the given function with args and kwargs and
assert that the given ``Exception`` is raised. The reporter will
provide you with helpful output in case of failures such as *no
exception* or *wrong exception*.
Skipping tests
----------------------------------------
If you want to skip tests you can use ``py.test.skip`` within
test or setup functions. Example::

    py.test.skip("message")

You can also use a helper to skip on a failing import::

    docutils = py.test.importorskip("docutils")

or to skip if the library does not have the right version::

    docutils = py.test.importorskip("docutils", minversion="0.3")
automatic collection of tests on all levels
-------------------------------------------
The automated test collection process walks the current
directory (or the directory given as a command line argument)
and all its subdirectories and collects python modules with a
leading ``test_`` or trailing ``_test`` filename. From each
test module every function with a leading ``test_`` or class with
a leading ``Test`` name is collected. The collecting process can
be customized at directory, module or class level. (see
`collection process`_ for some implementation details).
.. _`generative tests`:
.. _`collection process`: impl-test.html#collection-process
generative tests: yielding more tests
-------------------------------------
*Generative tests* are test methods that are *generator functions* which
``yield`` callables and their arguments. This is most useful for running a
test function multiple times against different parameters.
Example::

    def test_generative():
        for x in (42,17,49):
            yield check, x

    def check(arg):
        assert arg % 7 == 0 # second generated test fails!

Note that ``test_generative()`` will cause three tests
to get run, notably ``check(42)``, ``check(17)`` and ``check(49)``
of which the middle one will obviously fail.
To make it easier to distinguish the generated tests it is possible to specify an explicit name for them, for example::

    def test_generative():
        for x in (42,17,49):
            yield "case %d" % x, check, x
.. _`selection by keyword`:
selecting/unselecting tests by keyword
---------------------------------------------
Pytest's keyword mechanism provides a powerful way to
group and selectively run tests in your test code base.
You can selectively run tests by specifying a keyword
on the command line. Examples::

    py.test -k test_simple
    py.test -k "-test_simple"

will run all tests matching (or not matching) the
"test_simple" keyword. Note that you need to quote
the keyword if "-" is recognized as an indicator
for a commandline option. Lastly, you may use::

    py.test -k "test_simple:"

which will run all tests after the expression has *matched once*, i.e.
all tests that are seen after a test that matches the "test_simple"
keyword.
By default, all filename parts and
class/function names of a test function are put into the set
of keywords for a given test. You may specify additional
keywords like this::

    @py.test.mark(webtest=True)
    def test_send_http():
        ...
testing with multiple python versions / executables
---------------------------------------------------
With ``--exec=EXECUTABLE`` you can specify a python
executable (e.g. ``python2.2``) with which the tests
will be executed.
testing starts immediately
--------------------------
Testing starts as soon as the first ``test item``
is collected. The collection process is iterative
and does not need to complete before your first
test items are executed.
no interference with cmdline utilities
--------------------------------------
As ``py.test`` mainly operates as a separate cmdline
tool you can easily have a command line utility and
some tests in the same file.
debug with the ``print`` statement
----------------------------------
By default, ``py.test`` catches text written to stdout/stderr during
the execution of each individual test. This output will only be
displayed however if the test fails; you will not see it
otherwise. This allows you to put debugging print statements in your
code without being overwhelmed by all the output that might be
generated by tests that do not fail.
Each failing test that produced output during the running of the test
will have its output displayed in the ``recorded stdout`` section.
The catching of stdout/stderr output can be disabled using the
``--nocapture`` option to the ``py.test`` tool. Any output will
in this case be displayed as soon as it is generated.
test execution order
--------------------------------
Tests usually run in the order in which they appear in the files.
However, tests should not rely on running one after another, as
this prevents more advanced usages: running tests
distributed across machines, selectively, or in "looponfailing" mode
will cause them to run in a random order.
useful tracebacks, recursion detection
--------------------------------------
A lot of care is taken to present nice tracebacks in case of test
failure. Try::
py.test py/documentation/example/pytest/failure_demo.py
to see a variety of 17 tracebacks, each tailored to a different
failure situation.
``py.test`` uses the same order for presenting tracebacks as Python
itself: the oldest function call is shown first, and the most recent call is
shown last. A ``py.test`` reported traceback starts with your
failing test function. If the maximum recursion depth has been
exceeded during the running of a test, for instance because of
infinite recursion, ``py.test`` will indicate where in the
code the recursion was taking place. You can inhibit
traceback "cutting" magic by supplying ``--fulltrace``.
You can also use ``--tb=short`` to get regular CPython
tracebacks, or ``--tb=no`` to suppress tracebacks entirely.
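This oldest-call-first ordering is the same one the standard library's ``traceback`` module produces; a small illustration:

```python
import traceback

def inner():
    raise ValueError("boom")

def outer():
    inner()

try:
    outer()
except ValueError as exc:
    # frames are listed oldest call first, most recent call last
    frames = [f.name for f in traceback.extract_tb(exc.__traceback__)]

assert frames[-2:] == ["outer", "inner"]
```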
no inheritance requirement
--------------------------
Test classes are recognized by their leading ``Test`` name. Unlike
``unittest.py``, you don't need to inherit from some base class to make
them be found by the test runner. Besides being easier, it also allows
you to write test classes that subclass from application level
classes.
disabling a test class
----------------------
If you want to disable a complete test class you
can set the class-level attribute ``disabled``.
For example, in order to avoid running some tests on Win32::
class TestEgSomePosixStuff:
disabled = sys.platform == 'win32'
def test_xxx(self):
...
testing for deprecated APIs
------------------------------
In your tests you can use ``py.test.deprecated_call(func, *args, **kwargs)``
to test that a particular function call triggers a DeprecationWarning.
This is useful for testing phasing out of old APIs in your projects.
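A simplified stand-in built on the standard ``warnings`` module shows the idea (a sketch of the behaviour, not py.test's implementation):

```python
import warnings

def old_api():
    warnings.warn("old_api is deprecated, use new_api", DeprecationWarning)
    return 42

def deprecated_call(func, *args, **kwargs):
    # record all warnings raised while calling func
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = func(*args, **kwargs)
    if not any(issubclass(w.category, DeprecationWarning) for w in caught):
        raise AssertionError("%r did not trigger a DeprecationWarning" % func)
    return result

assert deprecated_call(old_api) == 42
```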
Managing test state across test modules, classes and methods
------------------------------------------------------------
Often you want to create some files, database connections or other
state in order to run tests in a certain environment. With
``py.test`` there are three scopes for which you can provide hooks to
manage such state. Again, ``py.test`` will detect these hooks in
modules on a name basis. The following module-level hooks will
automatically be called by the session::
    def setup_module(module):
        """ set up any state specific to the execution
        of the given module.
        """

    def teardown_module(module):
        """ tear down any state that was previously set up
        with a setup_module method.
        """
The following hooks are available for test classes::
    def setup_class(cls):
        """ set up any state specific to the execution
        of the given class (which usually contains tests).
        """

    def teardown_class(cls):
        """ tear down any state that was previously set up
        with a call to setup_class.
        """

    def setup_method(self, method):
        """ set up any state tied to the execution of the given
        method in a class. setup_method is invoked for every
        test method of a class.
        """

    def teardown_method(self, method):
        """ tear down any state that was previously set up
        with a setup_method call.
        """
The last two hooks, ``setup_method`` and ``teardown_method``, are
equivalent to ``setUp`` and ``tearDown`` in the Python standard
library's ``unittest`` module.
All setup/teardown methods are optional. You could have a
``setup_module`` but no ``teardown_module`` and the other way round.
Note that while the test session guarantees that for every ``setup`` a
corresponding ``teardown`` will be invoked (if it exists), it does
*not* guarantee that any ``setup`` is called only once. For
example, the session might decide to call the ``setup_module`` /
``teardown_module`` pair more than once during the execution of a test
module.
Experimental doctest support
------------------------------------------------------------
If you want to integrate doctests, ``py.test`` now by default
picks up files matching the ``test_*.txt`` or ``*_test.txt``
patterns and processes them as text files containing doctests.
This is an experimental feature and likely to change
its implementation.
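Such text files are processed much like the standard library's ``doctest`` module would process them; as an illustration of the underlying mechanism (``test_example.txt`` is an assumed file name):

```python
import doctest

# contents a file like test_example.txt might have:
content = """
>>> 2 + 3
5
>>> "py".upper()
'PY'
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(content, {}, "test_example.txt", "test_example.txt", 0)
result = doctest.DocTestRunner(verbose=False).run(test)
assert result.failed == 0 and result.attempted == 2
```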
Working Examples
================
Example for managing state at module, class and method level
------------------------------------------------------------
Here is a working example for what goes on when you setup modules,
classes and methods::
# [[from py/documentation/example/pytest/test_setup_flow_example.py]]
def setup_module(module):
module.TestStateFullThing.classcount = 0
class TestStateFullThing:
def setup_class(cls):
cls.classcount += 1
def teardown_class(cls):
cls.classcount -= 1
def setup_method(self, method):
self.id = eval(method.func_name[5:])
def test_42(self):
assert self.classcount == 1
assert self.id == 42
def test_23(self):
assert self.classcount == 1
assert self.id == 23
def teardown_module(module):
assert module.TestStateFullThing.classcount == 0
For this example the control flow happens as follows::
import test_setup_flow_example
setup_module(test_setup_flow_example)
setup_class(TestStateFullThing)
instance = TestStateFullThing()
setup_method(instance, instance.test_42)
instance.test_42()
setup_method(instance, instance.test_23)
instance.test_23()
teardown_class(TestStateFullThing)
teardown_module(test_setup_flow_example)
Note that ``setup_class(TestStateFullThing)`` is called and not
``TestStateFullThing.setup_class()`` which would require you
to insert ``setup_class = classmethod(setup_class)`` to make
your setup function callable. Did we mention that laziness
is a virtue?
Some ``py.test`` command-line options
=====================================
Regular options
---------------
``-v, --verbose``
Increase verbosity. This shows a test per line while running and also
shows the traceback after interrupting the test run with Ctrl-C.
``-x, --exitfirst``
exit instantly on the first error or the first failed test.
``-s, --nocapture``
disable catching of sys.stdout/stderr output.
``-k KEYWORD``
    only run test items matching the given keyword expression. You can
    also use ``-k -KEYWORD`` to exclude tests from being run. The keyword
    is matched against the filename, test class name and method name.
``-l, --showlocals``
show locals in tracebacks: for every frame in the traceback, show the values
of the local variables.
``--pdb``
    drop into pdb (the `Python debugger`_) on exceptions. When you quit
    the debugger, the next test is run. This implies ``-s``.
``--tb=TBSTYLE``
    traceback verbosity: ``long`` is the default, ``short`` gives the
    normal Python tracebacks, ``no`` omits tracebacks completely.
``--fulltrace``
Don't cut any tracebacks. The default is to leave out frames if an infinite
recursion is detected.
``--nomagic``
Refrain from using magic as much as possible. This can be useful if you are
suspicious that ``py.test`` somehow interferes with your program in
unintended ways (if this is the case, please contact us!).
``--collectonly``
Only collect tests, don't execute them.
``--traceconfig``
trace considerations of conftest.py files. Useful when you have various
conftest.py files around and are unsure about their interaction.
``-f, --looponfailing``
    Loop on the failing test set. Use this when you are trying to fix a
    number of failing tests: first all tests are run. If some of them
    fail, only those are run repeatedly afterwards. Each repetition
    starts as soon as a file below the directory you started testing in
    changes. If a previously failing test now passes, it is removed
    from the test set.
``--exec=EXECUTABLE``
Python executable to run the tests with. Useful for testing on different
versions of Python.
experimental options
--------------------
**Note**: these options could change in the future.
``-d, --dist``
ad-hoc `distribute tests across machines`_ (requires conftest settings)
``-w, --startserver``
starts local web server for displaying test progress.
``-r, --runbrowser``
Run browser (implies --startserver).
``--boxed``
Use boxed tests: run each test in an external process. Very useful for testing
things that occasionally segfault (since normally the segfault then would
stop the whole test process).
``--rest``
`reStructured Text`_ output reporting.
.. _`reStructured Text`: http://docutils.sourceforge.net
.. _`Python debugger`: http://docs.python.org/lib/module-pdb.html
.. _`distribute tests across machines`:
Automated Distributed Testing
==================================
If you have a project with a large number of tests, and you have
machines accessible through SSH, ``py.test`` can distribute
tests across the machines. It does not require any particular
installation on the remote machine sides as it uses `py.execnet`_
mechanisms to distribute execution. Using distributed testing
can speed up your development process considerably and it
may also be useful where you need to use a remote server
that has more resources (e.g. RAM/diskspace) than your
local machine.
*WARNING*: support for distributed testing is experimental,
its mechanics and configuration options may change without
prior notice. Particularly, not all reporting features
of the in-process py.test have been integrated into
the distributed testing approach.
Requirements
------------
Local requirements:
* ssh client
* python
requirements for remote machines:
* ssh daemon running
* ssh keys setup to allow login without a password
* python
* a Unix-like machine (the implementation relies on ``os.fork``)
How to use it
-----------------------
When you issue ``py.test -d`` then your computer becomes
the distributor of tests ("master") and will start collecting
and distributing tests to several machines. The machines
need to be specified in a ``conftest.py`` file.
At start up, the master connects to each node using `py.execnet.SshGateway`_
and *rsyncs* all specified python packages to all nodes.
Then the master collects all of the tests and immediately sends test item
descriptions to its connected nodes. Each node has a local queue of tests
to run and begins to execute the tests, following the setup and teardown
semantics. Tests are distributed at function and method level.
When a test run on a node is completed it reports back the result
to the master.
The master can run one of three reporters to process the events
from the testing nodes: command line, ReST output, and an AJAX-based web interface.
.. _`py.execnet`: execnet.html
.. _`py.execnet.SshGateway`: execnet.html
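The master/node queue model described above can be sketched with in-process threads (a toy illustration only; the real mechanism runs nodes on remote machines via py.execnet):

```python
import queue
import threading

def node(tasks, results):
    # each node pulls test items from its local queue of tests
    # and reports each outcome back to the master
    while True:
        item = tasks.get()
        if item is None:  # sentinel: no more tests
            break
        results.put((item, "passed"))

tasks, results = queue.Queue(), queue.Queue()
nodes = [threading.Thread(target=node, args=(tasks, results)) for _ in range(3)]
for t in nodes:
    t.start()

# the master collects test items and sends them to the nodes
for test_id in ["test_a", "test_b", "test_c", "test_d"]:
    tasks.put(test_id)
for _ in nodes:
    tasks.put(None)
for t in nodes:
    t.join()

outcomes = dict(results.get() for _ in range(results.qsize()))
assert outcomes == {t: "passed" for t in ["test_a", "test_b", "test_c", "test_d"]}
```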
Differences from local tests
----------------------------
* Test order is rather random (instead of in file order).
* The test process may hang due to network problems.
* You may not reference files outside of the rsynced directory structures.
Configuration
-------------
You must create a conftest.py in any parent directory above your tests.
The options that you need to specify in that conftest.py file are:
* ``dist_hosts``: a required list of host specifications
* ``dist_rsync_roots``: a list of relative locations to copy to the
  remote machines
* ``dist_rsync_ignore``: a list of relative locations to exclude from
  rsyncing
* ``dist_remotepython``: the remote python executable to run
* ``dist_nicelevel``: process priority of the remote nodes
* ``dist_boxed``: run each single test in a separate process
  (allowing it to survive segfaults, for example)
* ``dist_taskspernode``: maximum number of tasks queued to a remote node
Sample configuration::
dist_hosts = ['localhost', 'user@someserver:/tmp/somedir']
dist_rsync_roots = ['../pypy', '../py']
dist_remotepython = 'python2.4'
dist_nicelevel = 10
dist_boxed = False
dist_maxwait = 100
dist_taskspernode = 10
To use the browser-based reporter (with a nice AJAX interface) you have to tell
``py.test`` to run a small server locally using the ``-w`` or ``--startserver``
command line options. Afterwards you can point your browser to localhost:8000
to see the progress of the testing.
Development Notes
-----------------
Changing the behavior of the web based reporter requires `pypy`_ since the
javascript is actually generated from RPython source.
.. _`pypy`: http://codespeak.net/pypy
Future/Planned Features of py.test
==================================
integrating various test methods
-------------------------------------------
There are various conftest.py's out there
that do html-reports, ad-hoc distribute tests
to windows machines or other fun stuff.
These approaches should be offered natively
by py.test at some point (requires refactorings).
In addition, performing special checks such
as w3c-conformance tests or ReST checks
should be offered from mainline py.test.
more distributed testing
-----------------------------------------
We'd like to generalize and extend our ad-hoc
distributed testing approach to allow for running
on multiple platforms simultaneously and selectively.
The web reporter should learn to deal with driving
complex multi-platform test runs and providing
useful introspection and interactive debugging hooks.
move to report event based architecture
--------------------------------------------
To facilitate writing of custom reporters
py.test is to learn to generate reporting events
at all levels which a reporter can choose to
interpret and present. The distributed testing
approach already uses such an approach and
we'd like to unify this with the default
in-process py.test mode.
see what other tools do currently (nose, etc.)
----------------------------------------------------
There are various tools out there, among them
the nose_ clone. It's about time to look again
at these and other tools, integrate interesting
features and maybe collaborate on some issues.
.. _nose: http://somethingaboutorange.com/mrl/projects/nose/