incorporate typo/grammar fixes from Laura and respond to a number of issues she raised in comments.

Also fixed links and some other bits and pieces.
holger krekel 2011-03-03 23:40:38 +01:00
parent 070c73ff2f
commit fadd1a2313
35 changed files with 379 additions and 391 deletions


@@ -29,9 +29,8 @@ Changes between 2.0.1 and 2.0.2
   a newer py lib version the py.path.local() implementation acknowledges
   this.
-- fixed typos in the docs (thanks Victor, Brianna) and particular
-  thanks to Laura who also revieved the documentation which
-  lead to some improvements.
+- fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular
+  thanks to Laura Creighton who also revieved parts of the documentation.
 Changes between 2.0.0 and 2.0.1
 ----------------------------------------------


@@ -192,18 +192,16 @@ class CaptureManager:
         return rep
 def pytest_funcarg__capsys(request):
-    """captures writes to sys.stdout/sys.stderr and makes
-    them available successively via a ``capsys.readouterr()`` method
-    which returns a ``(out, err)`` tuple of captured snapshot strings.
+    """enables capturing of writes to sys.stdout/sys.stderr and makes
+    captured output available via ``capsys.readouterr()`` method calls
+    which return a ``(out, err)`` tuple.
     """
     return CaptureFuncarg(py.io.StdCapture)
 def pytest_funcarg__capfd(request):
-    """captures writes to file descriptors 1 and 2 and makes
-    snapshotted ``(out, err)`` string tuples available
-    via the ``capsys.readouterr()`` method. If the underlying
-    platform does not have ``os.dup`` (e.g. Jython) tests using
-    this funcarg will automatically skip.
+    """enables capturing of writes to file descriptors 1 and 2 and makes
+    captured output available via ``capsys.readouterr()`` method calls
+    which return a ``(out, err)`` tuple.
     """
     if not hasattr(os, 'dup'):
         py.test.skip("capfd funcarg needs os.dup")
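To illustrate the ``capsys`` behaviour documented in the new docstring, here is a hypothetical usage sketch (not part of the commit; the function and test names are invented):

```python
# content of test_output.py - a hypothetical example
def greet(name):
    print("hello %s" % name)

def test_greet(capsys):
    greet("world")
    out, err = capsys.readouterr()  # snapshot of the output captured so far
    assert out == "hello world\n"
    assert err == ""
```

The ``capfd`` funcarg is used the same way, but captures at the file descriptor level, so it also sees output from subprocesses.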


@@ -14,8 +14,8 @@ def pytest_funcarg__monkeypatch(request):
     monkeypatch.delenv(name, value, raising=True)
     monkeypatch.syspath_prepend(path)
-    All modifications will be undone when the requesting
-    test function finished its execution. The ``raising``
+    All modifications will be undone after the requesting
+    test function has finished. The ``raising``
     parameter determines if a KeyError or AttributeError
     will be raised if the set/deletion operation has no target.
     """
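A hypothetical sketch of the ``monkeypatch`` funcarg in use (names are invented for illustration):

```python
# content of test_env.py - a hypothetical example
import os

def get_workdir():
    return os.environ["WORKDIR"]

def test_get_workdir(monkeypatch):
    # the modification is undone after the test function has finished
    monkeypatch.setenv("WORKDIR", "/tmp/work")
    assert get_workdir() == "/tmp/work"
```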


@@ -8,6 +8,9 @@ def pytest_funcarg__recwarn(request):
     * ``pop(category=None)``: return last warning matching the category.
     * ``clear()``: clear list of warnings
+
+    See http://docs.python.org/library/warnings.html for information
+    on warning categories.
     """
     if sys.version_info >= (2,7):
         import warnings
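A hypothetical sketch of the ``recwarn`` funcarg, asserting that a warning of a given category was issued (``legacy()`` is invented for illustration):

```python
# content of test_warnings.py - a hypothetical example
import warnings

def legacy():
    warnings.warn("use new_api() instead", DeprecationWarning)

def test_legacy_warns(recwarn):
    legacy()
    w = recwarn.pop(DeprecationWarning)  # last warning matching the category
    assert issubclass(w.category, DeprecationWarning)
    assert "new_api" in str(w.message)
```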


@@ -59,7 +59,7 @@ def pytest_unconfigure(config):
 def pytest_funcarg__tmpdir(request):
     """return a temporary directory path object
-    unique to each test function invocation, which is
+    unique to each test function invocation,
     created as a sub directory of the base temporary
     directory. The returned object is a `py.path.local`_
     path object.
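A hypothetical sketch of the ``tmpdir`` funcarg; the ``py.path.local`` object provides ``join``/``write``/``read`` helpers:

```python
# content of test_tmpdir.py - a hypothetical example
def test_create_file(tmpdir):
    p = tmpdir.join("hello.txt")  # a py.path.local object
    p.write("content")
    assert p.read() == "content"
```

Each test function invocation gets a fresh sub directory, so tests cannot interfere with each other's files.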


@@ -1,15 +1,15 @@
-Writing and reporting of assertions in tests
-============================================
+The writing and reporting of assertions in tests
+==================================================
 .. _`assert with the assert statement`:
 assert with the ``assert`` statement
 ---------------------------------------------------------
-``py.test`` allows to use the standard python ``assert`` for verifying
+``py.test`` allows you to use the standard python ``assert`` for verifying
 expectations and values in Python tests. For example, you can write the
-following in your tests::
+following::
     # content of test_assert1.py
     def f():
@@ -18,12 +18,12 @@ following in your tests::
     def test_function():
         assert f() == 4
-to state that your object has a certain ``attribute``. In case this
+to assert that your object returns a certain value. If this
 assertion fails you will see the value of ``x``::
     $ py.test test_assert1.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 1 items
     test_assert1.py F
@@ -37,30 +37,33 @@ assertion fails you will see the value of ``x``::
     E        +  where 3 = f()
     test_assert1.py:5: AssertionError
-    ========================= 1 failed in 0.02 seconds =========================
+    ========================= 1 failed in 0.03 seconds =========================
 Reporting details about the failing assertion is achieved by re-evaluating
-the assert expression and recording intermediate values.
+the assert expression and recording the intermediate values.
 Note: If evaluating the assert expression has side effects you may get a
 warning that the intermediate values could not be determined safely. A
-common example for this issue is reading from a file and comparing in one
-line::
+common example of this issue is an assertion which reads from a file::
     assert f.read() != '...'
-This might fail but when re-interpretation comes along it might pass. You can
-rewrite this (and any other expression with side effects) easily, though::
+If this assertion fails then the re-evaluation will probably succeed!
+This is because ``f.read()`` will return an empty string when it is
+called the second time during the re-evaluation. However, it is
+easy to rewrite the assertion and avoid any trouble::
     content = f.read()
     assert content != '...'
 assertions about expected exceptions
 ------------------------------------------
 In order to write assertions about raised exceptions, you can use
 ``pytest.raises`` as a context manager like this::
+    import pytest
     with pytest.raises(ZeroDivisionError):
         1 / 0
@@ -105,7 +108,7 @@ if you run this module::
     $ py.test test_assert2.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 1 items
     test_assert2.py F
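The side-effect pitfall described in the rewritten paragraph above can be reproduced with a plain file-like object (a sketch using ``io.StringIO`` in place of a real file):

```python
import io

f = io.StringIO("real content")
assert f.read() != '...'   # consumes the stream as a side effect
assert f.read() == ''      # a second read (the re-evaluation) sees nothing

# the safe pattern from the docs: read once, assert on the stored value
f = io.StringIO("real content")
content = f.read()
assert content != '...'
```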


@@ -12,7 +12,7 @@ You can always use an interactive Python prompt and type::
     import pytest
     help(pytest)
-to get an overview on available globally available helpers.
+to get an overview on the globally available helpers.
 .. automodule:: pytest
     :members:
@@ -27,20 +27,18 @@ You can ask for available builtin or project-custom
 pytestconfig
     the pytest config object with access to command line opts.
 capsys
-    captures writes to sys.stdout/sys.stderr and makes
-    them available successively via a ``capsys.readouterr()`` method
-    which returns a ``(out, err)`` tuple of captured snapshot strings.
+    enables capturing of writes to sys.stdout/sys.stderr and makes
+    captured output available via ``capsys.readouterr()`` method calls
+    which return a ``(out, err)`` tuple.
 capfd
-    captures writes to file descriptors 1 and 2 and makes
-    snapshotted ``(out, err)`` string tuples available
-    via the ``capsys.readouterr()`` method. If the underlying
-    platform does not have ``os.dup`` (e.g. Jython) tests using
-    this funcarg will automatically skip.
+    enables capturing of writes to file descriptors 1 and 2 and makes
+    captured output available via ``capsys.readouterr()`` method calls
+    which return a ``(out, err)`` tuple.
 tmpdir
     return a temporary directory path object
-    unique to each test function invocation, which is
+    unique to each test function invocation,
     created as a sub directory of the base temporary
     directory. The returned object is a `py.path.local`_
     path object.
@@ -57,8 +55,8 @@ You can ask for available builtin or project-custom
     monkeypatch.delenv(name, value, raising=True)
     monkeypatch.syspath_prepend(path)
-    All modifications will be undone when the requesting
-    test function finished its execution. The ``raising``
+    All modifications will be undone after the requesting
+    test function has finished. The ``raising``
     parameter determines if a KeyError or AttributeError
     will be raised if the set/deletion operation has no target.
@@ -68,3 +66,6 @@ You can ask for available builtin or project-custom
     * ``pop(category=None)``: return last warning matching the category.
     * ``clear()``: clear list of warnings
+
+    See http://docs.python.org/library/warnings.html for information
+    on warning categories.


@ -1,16 +1,44 @@
.. _`captures`: .. _`captures`:
Capturing of stdout/stderr output Capturing of the stdout/stderr output
========================================================= =========================================================
By default ``stdout`` and ``stderr`` output is captured separately for Default stdout/stderr/stdin capturing behaviour
setup and test execution code. If a test or a setup method fails its ---------------------------------------------------------
according output will usually be shown along with the failure traceback.
In addition, ``stdin`` is set to a "null" object which will fail all During test execution any output sent to ``stdout`` and ``stderr`` is
attempts to read from it. This is important if some code paths in captured. If a test or a setup method fails its according captured
test otherwise might lead to waiting for input - which is usually output will usually be shown along with the failure traceback.
not desired when running automated tests.
In addition, ``stdin`` is set to a "null" object which will
fail on attempts to read from it because it is rarely desired
to wait for interactive input when running automated tests.
By default capturing is done by intercepting writes to low level
file descriptors. This allows to capture output from simple
print statements as well as output from a subprocess started by
a test.
Setting capturing methods or disabling capturing
-------------------------------------------------
There are two ways in which ``py.test`` can perform capturing:
* file descriptor (FD) level capturing (default): All writes going to the
operating system file descriptors 1 and 2 will be captured.
* ``sys`` level capturing: Only writes to Python files ``sys.stdout``
and ``sys.stderr`` will be captured. No capturing of writes to
filedescriptors is performed.
.. _`disable capturing`:
You can influence output capturing mechanisms from the command line::
py.test -s # disable all capturing
py.test --capture=sys # replace sys.stdout/stderr with in-mem files
py.test --capture=fd # also point filedescriptors 1 and 2 to temp file
.. _printdebugging: .. _printdebugging:
@ -36,7 +64,7 @@ of the failing function and hide the other one::
$ py.test $ py.test
=========================== test session starts ============================ =========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1 platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
collecting ... collected 2 items collecting ... collected 2 items
test_module.py .F test_module.py .F
@ -50,33 +78,9 @@ of the failing function and hide the other one::
test_module.py:9: AssertionError test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------ ----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x2897d70> setting up <function test_func2 at 0x21031b8>
==================== 1 failed, 1 passed in 0.02 seconds ==================== ==================== 1 failed, 1 passed in 0.02 seconds ====================
Setting capturing methods or disabling capturing
-------------------------------------------------
There are two ways in which ``py.test`` can perform capturing:
* ``fd`` level capturing (default): All writes going to the operating
system file descriptors 1 and 2 will be captured, for example writes such
as ``os.write(1, 'hello')``. Capturing on ``fd``-level also includes
**output from subprocesses**.
* ``sys`` level capturing: The ``sys.stdout`` and ``sys.stderr`` will
will be replaced with in-memory files and the ``print`` builtin or
output from code like ``sys.stderr.write(...)`` will be captured with
this method.
.. _`disable capturing`:
You can influence output capturing mechanisms from the command line::
py.test -s # disable all capturing
py.test --capture=sys # replace sys.stdout/stderr with in-mem files
py.test --capture=fd # also point filedescriptors 1 and 2 to temp file
Accessing captured output from a test function Accessing captured output from a test function
--------------------------------------------------- ---------------------------------------------------
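The doc section above describes accessing captured output from within a test. A hypothetical sketch: ``readouterr()`` returns the output captured since the previous call, so a test can take several snapshots:

```python
# content of test_snapshots.py - a hypothetical example
def test_snapshots(capsys):
    print("first")
    out, err = capsys.readouterr()
    assert out == "first\n"
    print("second")
    out, err = capsys.readouterr()  # only output since the previous call
    assert out == "second\n"
```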


@@ -4,20 +4,20 @@ basic test configuration
 Command line options and configuration file settings
 -----------------------------------------------------------------
-You can get help on options and ini-config values by running::
+You can get help on command line options and values in INI-style
+configurations files by using the general help option::
     py.test -h   # prints options _and_ config file settings
 This will display command line and configuration file settings
 which were registered by installed plugins.
-how test configuration is read from setup/tox ini-files
---------------------------------------------------------
-py.test searched for the first matching ini-style configuration file
+How test configuration is read from configuration INI-files
+-------------------------------------------------------------
+py.test searches for the first matching ini-style configuration file
 in the directories of command line argument and the directories above.
-It looks for filenames in this order::
+It looks for file basenames in this order::
     pytest.ini
     tox.ini
@@ -44,29 +44,27 @@ is used to start the search.
 .. _`how to change command line options defaults`:
 .. _`adding default options`:
-how to change command line options defaults
+How to change command line options defaults
 ------------------------------------------------
-py.test provides a simple way to set some default
-command line options. For example, if you want
-to always see detailed info on skipped and xfailed
-tests, as well as have terser "dot progress output",
-you can add this to your root directory::
+It can be tedious to type the same series of command line options
+every time you use py.test . For example, if you always want to see
+detailed info on skipped and xfailed tests, as well as have terser "dot"
+progress output, you can write it into a configuration file::
     # content of pytest.ini
     # (or tox.ini or setup.cfg)
     [pytest]
     addopts = -rsxX -q
-From now on, running ``py.test`` will implicitly add
-the specified options.
+From now on, running ``py.test`` will add the specified options.
 builtin configuration file options
 ----------------------------------------------
 .. confval:: minversion
-    specifies a minimal pytest version needed for running tests.
+    specifies a minimal pytest version required for running tests.
         minversion = 2.1  # will fail if we run with pytest-2.0
@@ -97,14 +95,14 @@ builtin configuration file options
         [!seq] matches any char not in seq
     Default patterns are ``.* _* CVS {args}``. Setting a ``norecurse``
-    replaces the default. Here is a customizing example for avoiding
-    a different set of directories::
+    replaces the default. Here is an example of how to avoid
+    certain directories::
         # content of setup.cfg
         [pytest]
         norecursedirs = .svn _build tmp*
-    This would tell py.test to not recurse into typical subversion or
+    This would tell py.test to not look into typical subversion or
     sphinx-build directories or into any ``tmp`` prefixed directory.
 .. confval:: python_files


@@ -22,7 +22,7 @@ download and unpack a TAR file::
     http://pypi.python.org/pypi/pytest/
-activating a checkout with setuptools
+Activating a checkout with setuptools
 --------------------------------------------
 With a working Distribute_ or setuptools_ installation you can type::


@@ -1,5 +1,5 @@
-doctest integration for modules and test files.
+doctest integration for modules and test files
 =========================================================
 By default all files matching the ``test*.txt`` pattern will
@@ -44,7 +44,7 @@ then you can just invoke ``py.test`` without command line options::
     $ py.test
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 1 items
     mymodule.py .
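The ``mymodule.py`` in the output above would contain a docstring doctest of this kind (a hypothetical sketch, the function name is invented):

```python
# content of mymodule.py - a hypothetical example
def something():
    """a doctest in a docstring

    >>> something()
    42
    """
    return 42
```

With doctest collection enabled in the configuration, ``py.test`` collects the docstring example and runs it as a test.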


@@ -26,7 +26,7 @@ Let's write a simple test function using a ``mysetup`` funcarg::
 To run this test py.test needs to find and call a factory to
 obtain the required ``mysetup`` function argument. To make
 an according factory findable we write down a specifically named factory
-method in a :ref:`local plugin`::
+method in a :ref:`local plugin <localplugin>` ::
     # content of conftest.py
     from myapp import MyApp
@@ -49,7 +49,7 @@ You can now run the test::
     $ py.test test_sample.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 1 items
     test_sample.py F
@@ -57,7 +57,7 @@ You can now run the test::
     ================================= FAILURES =================================
     _______________________________ test_answer ________________________________
-    mysetup = <conftest.MySetup instance at 0x2526440>
+    mysetup = <conftest.MySetup instance at 0x1e2db90>
     def test_answer(mysetup):
         app = mysetup.myapp()
@@ -122,12 +122,12 @@ Running it yields::
     $ py.test test_ssh.py -rs
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 1 items
     test_ssh.py s
     ========================= short test summary info ==========================
-    SKIP [1] /tmp/doc-exec-166/conftest.py:22: specify ssh host with --ssh
+    SKIP [1] /tmp/doc-exec-35/conftest.py:22: specify ssh host with --ssh
     ======================== 1 skipped in 0.02 seconds =========================
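The "specifically named factory" mentioned in this doc section follows the ``pytest_funcarg__NAME`` convention. A hypothetical sketch, with ``MyApp``/``MySetup`` standing in for the tutorial's application objects:

```python
# content of conftest.py - a hypothetical sketch of the factory pattern
class MyApp:
    def question(self):
        return 6 * 7

class MySetup:
    def myapp(self):
        return MyApp()

def pytest_funcarg__mysetup(request):
    # py.test looks up this specifically named function to provide
    # the ``mysetup`` argument to test functions that request it
    return MySetup()
```

A test module then simply accepts the funcarg by name, e.g. ``def test_answer(mysetup): ...``.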


@@ -27,7 +27,7 @@ now execute the test specification::
     nonpython $ py.test test_simple.yml
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 2 items
     test_simple.yml .F
@@ -37,7 +37,7 @@ now execute the test specification::
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ==================== 1 failed, 1 passed in 0.06 seconds ====================
+    ==================== 1 failed, 1 passed in 0.36 seconds ====================
@@ -56,7 +56,7 @@ reporting in ``verbose`` mode::
     nonpython $ py.test -v
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1 -- /home/hpk/venv/0/bin/python
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2 -- /home/hpk/venv/0/bin/python
     collecting ... collected 2 items
     test_simple.yml:1: usecase: ok PASSED
@@ -67,7 +67,7 @@ reporting in ``verbose`` mode::
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ==================== 1 failed, 1 passed in 0.06 seconds ====================
+    ==================== 1 failed, 1 passed in 0.08 seconds ====================
 While developing your custom test collection and execution it's also
 interesting to just look at the collection tree::


@@ -125,7 +125,7 @@ And then when we run the test::
     ================================= FAILURES =================================
     __________________________ test_db_initialized[1] __________________________
-    db = <conftest.DB2 instance at 0x1a5b488>
+    db = <conftest.DB2 instance at 0x1873170>
     def test_db_initialized(db):
         # a dummy test
@@ -179,7 +179,7 @@ the respective settings::
     ================================= FAILURES =================================
     __________________________ test_db_initialized[1] __________________________
-    db = <conftest.DB2 instance at 0xf81c20>
+    db = <conftest.DB2 instance at 0x2cc75f0>
     def test_db_initialized(db):
         # a dummy test
@@ -190,7 +190,7 @@ the respective settings::
     test_backends.py:6: Failed
     _________________________ TestClass.test_equals[0] _________________________
-    self = <test_parametrize.TestClass instance at 0xf93050>, a = 1, b = 2
+    self = <test_parametrize.TestClass instance at 0x2cd3050>, a = 1, b = 2
     def test_equals(self, a, b):
     >       assert a == b
@@ -199,7 +199,7 @@ the respective settings::
     test_parametrize.py:17: AssertionError
     ______________________ TestClass.test_zerodivision[1] ______________________
-    self = <test_parametrize.TestClass instance at 0xf93098>, a = 3, b = 2
+    self = <test_parametrize.TestClass instance at 0x2cd3998>, a = 3, b = 2
     def test_zerodivision(self, a, b):
     >       pytest.raises(ZeroDivisionError, "a/b")
@@ -247,7 +247,7 @@ Running it gives similar results as before::
     ================================= FAILURES =================================
     _________________________ TestClass.test_equals[0] _________________________
-    self = <test_parametrize2.TestClass instance at 0x27e15a8>, a = 1, b = 2
+    self = <test_parametrize2.TestClass instance at 0x10b29e0>, a = 1, b = 2
     @params([dict(a=1, b=2), dict(a=3, b=3), ])
     def test_equals(self, a, b):
@@ -257,7 +257,7 @@ Running it gives similar results as before::
     test_parametrize2.py:19: AssertionError
     ______________________ TestClass.test_zerodivision[1] ______________________
-    self = <test_parametrize2.TestClass instance at 0x2953bd8>, a = 3, b = 2
+    self = <test_parametrize2.TestClass instance at 0x10bdb00>, a = 3, b = 2
     @params([dict(a=1, b=0), dict(a=3, b=2)])
     def test_zerodivision(self, a, b):
@@ -286,4 +286,4 @@ Running it (with Python-2.4 through to Python2.7 installed)::
     . $ py.test -q multipython.py
     collecting ... collected 75 items
     ....s....s....s....ssssss....s....s....s....ssssss....s....s....s....ssssss
-    48 passed, 27 skipped in 1.59 seconds
+    48 passed, 27 skipped in 1.92 seconds


@@ -13,7 +13,7 @@ get on the terminal - we are working on that):
     assertion $ py.test failure_demo.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
     collecting ... collected 39 items
     failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@@ -30,7 +30,7 @@ get on the terminal - we are working on that):
     failure_demo.py:15: AssertionError
     _________________________ TestFailing.test_simple __________________________
-    self = <failure_demo.TestFailing object at 0x1b42950>
+    self = <failure_demo.TestFailing object at 0x1b3b5d0>
     def test_simple(self):
         def f():
@@ -40,13 +40,13 @@ get on the terminal - we are working on that):
     >       assert f() == g()
     E       assert 42 == 43
-    E        +  where 42 = <function f at 0x1b33de8>()
-    E        +  and   43 = <function g at 0x1b47140>()
+    E        +  where 42 = <function f at 0x1b3f410>()
+    E        +  and   43 = <function g at 0x1b3f0c8>()
     failure_demo.py:28: AssertionError
     ____________________ TestFailing.test_simple_multiline _____________________
-    self = <failure_demo.TestFailing object at 0x1b42c50>
+    self = <failure_demo.TestFailing object at 0x1b2e6d0>
     def test_simple_multiline(self):
         otherfunc_multi(
@@ -66,19 +66,19 @@ get on the terminal - we are working on that):
     failure_demo.py:12: AssertionError
     ___________________________ TestFailing.test_not ___________________________
-    self = <failure_demo.TestFailing object at 0x1b42190>
+    self = <failure_demo.TestFailing object at 0x1b3b950>
     def test_not(self):
         def f():
             return 42
     >       assert not f()
     E       assert not 42
-    E        +  where 42 = <function f at 0x1b47320>()
+    E        +  where 42 = <function f at 0x1b3f668>()
     failure_demo.py:38: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_text _________________
-    self = <failure_demo.TestSpecialisedExplanations object at 0x1b42150>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1b3b490>
     def test_eq_text(self):
     >       assert 'spam' == 'eggs'
@@ -89,7 +89,7 @@ get on the terminal - we are working on that):
     failure_demo.py:42: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_similar_text _____________
-    self = <failure_demo.TestSpecialisedExplanations object at 0x1b48610>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1b2ed10>
     def test_eq_similar_text(self):
     >       assert 'foo 1 bar' == 'foo 2 bar'
@@ -102,7 +102,7 @@ get on the terminal - we are working on that):
     failure_demo.py:45: AssertionError
     ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
-    self = <failure_demo.TestSpecialisedExplanations object at 0x1b38f90>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1b3bf10>
     def test_eq_multiline_text(self):
     >       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -115,7 +115,7 @@ get on the terminal - we are working on that):
     failure_demo.py:48: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_long_text _______________
-    self = <failure_demo.TestSpecialisedExplanations object at 0x1b42cd0>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1b3b890>
     def test_eq_long_text(self):
         a = '1'*100 + 'a' + '2'*100
@@ -132,7 +132,7 @@ get on the terminal - we are working on that):
     failure_demo.py:53: AssertionError
     _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
-    self = <failure_demo.TestSpecialisedExplanations object at 0x1ba6a90>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1b538d0>
def test_eq_long_text_multiline(self): def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100 a = '1\n'*100 + 'a' + '2\n'*100
@ -156,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________ _________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba6bd0> self = <failure_demo.TestSpecialisedExplanations object at 0x1b53b50>
def test_eq_list(self): def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3] > assert [0, 1, 2] == [0, 1, 3]
@ -166,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________ ______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1b42910> self = <failure_demo.TestSpecialisedExplanations object at 0x1b3b390>
def test_eq_list_long(self): def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100 a = [0]*100 + [1] + [3]*100
@ -178,7 +178,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________ _________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba6f90> self = <failure_demo.TestSpecialisedExplanations object at 0x1b53b10>
def test_eq_dict(self): def test_eq_dict(self):
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2} > assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
@ -191,7 +191,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________ _________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1b485d0> self = <failure_demo.TestSpecialisedExplanations object at 0x1b439d0>
def test_eq_set(self): def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21]) > assert set([0, 10, 11, 12]) == set([0, 20, 21])
@ -207,7 +207,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________ _____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2850> self = <failure_demo.TestSpecialisedExplanations object at 0x1b58750>
def test_eq_longer_list(self): def test_eq_longer_list(self):
> assert [1,2] == [1,2,3] > assert [1,2] == [1,2,3]
@ -217,7 +217,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________ _________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2f10> self = <failure_demo.TestSpecialisedExplanations object at 0x1b58f10>
def test_in_list(self): def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5] > assert 1 in [0, 2, 3, 4, 5]
@ -226,7 +226,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________ __________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2990> self = <failure_demo.TestSpecialisedExplanations object at 0x1b3b9d0>
def test_not_in_text_multiline(self): def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail' text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@ -244,7 +244,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________ ___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1b42110> self = <failure_demo.TestSpecialisedExplanations object at 0x1b53bd0>
def test_not_in_text_single(self): def test_not_in_text_single(self):
text = 'single foo line' text = 'single foo line'
@ -257,7 +257,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________ _________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba65d0> self = <failure_demo.TestSpecialisedExplanations object at 0x1b583d0>
def test_not_in_text_single_long(self): def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20 text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@ -270,7 +270,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______ ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2c50> self = <failure_demo.TestSpecialisedExplanations object at 0x1b53890>
def test_not_in_text_single_long_term(self): def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20 text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@ -289,7 +289,7 @@ get on the terminal - we are working on that):
i = Foo() i = Foo()
> assert i.b == 2 > assert i.b == 2
E assert 1 == 2 E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1ba2ad0>.b E + where 1 = <failure_demo.Foo object at 0x1b58a90>.b
failure_demo.py:101: AssertionError failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________ _________________________ test_attribute_instance __________________________
@ -299,8 +299,8 @@ get on the terminal - we are working on that):
b = 1 b = 1
> assert Foo().b == 2 > assert Foo().b == 2
E assert 1 == 2 E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1ba2110>.b E + where 1 = <failure_demo.Foo object at 0x1b53dd0>.b
E + where <failure_demo.Foo object at 0x1ba2110> = <class 'failure_demo.Foo'>() E + where <failure_demo.Foo object at 0x1b53dd0> = <class 'failure_demo.Foo'>()
failure_demo.py:107: AssertionError failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________ __________________________ test_attribute_failure __________________________
@ -316,7 +316,7 @@ get on the terminal - we are working on that):
failure_demo.py:116: failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x1ba2a90> self = <failure_demo.Foo object at 0x1b582d0>
def _get_b(self): def _get_b(self):
> raise Exception('Failed to get attrib') > raise Exception('Failed to get attrib')
@ -332,15 +332,15 @@ get on the terminal - we are working on that):
b = 2 b = 2
> assert Foo().b == Bar().b > assert Foo().b == Bar().b
E assert 1 == 2 E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1ba2950>.b E + where 1 = <failure_demo.Foo object at 0x1b58450>.b
E + where <failure_demo.Foo object at 0x1ba2950> = <class 'failure_demo.Foo'>() E + where <failure_demo.Foo object at 0x1b58450> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1ba2390>.b E + and 2 = <failure_demo.Bar object at 0x1b58590>.b
E + where <failure_demo.Bar object at 0x1ba2390> = <class 'failure_demo.Bar'>() E + where <failure_demo.Bar object at 0x1b58590> = <class 'failure_demo.Bar'>()
failure_demo.py:124: AssertionError failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________ __________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x1bb3488> self = <failure_demo.TestRaises instance at 0x1bf0440>
def test_raises(self): def test_raises(self):
s = 'qwe' s = 'qwe'
@ -352,10 +352,10 @@ get on the terminal - we are working on that):
> int(s) > int(s)
E ValueError: invalid literal for int() with base 10: 'qwe' E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/_pytest/python.py:822>:1: ValueError <0-codegen /home/hpk/p/pytest/_pytest/python.py:825>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________ ______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x1bb3098> self = <failure_demo.TestRaises instance at 0x1bf0368>
def test_raises_doesnt(self): def test_raises_doesnt(self):
> raises(IOError, "int('3')") > raises(IOError, "int('3')")
@ -364,7 +364,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________ __________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x1ba7d40> self = <failure_demo.TestRaises instance at 0x1be22d8>
def test_raise(self): def test_raise(self):
> raise ValueError("demo error") > raise ValueError("demo error")
@ -373,7 +373,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________ ________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x1b5cc68> self = <failure_demo.TestRaises instance at 0x1b566c8>
def test_tupleerror(self): def test_tupleerror(self):
> a,b = [1] > a,b = [1]
@ -382,7 +382,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x1bb1488> self = <failure_demo.TestRaises instance at 0x1beb440>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self): def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3] l = [1,2,3]
@ -395,7 +395,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3] l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________ ________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x1bb9128> self = <failure_demo.TestRaises instance at 0x1bef0e0>
def test_some_error(self): def test_some_error(self):
> if namenotexi: > if namenotexi:
@ -423,7 +423,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError <2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________ ____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1bb8f80> self = <failure_demo.TestMoreErrors instance at 0x1bf3fc8>
def test_complex_error(self): def test_complex_error(self):
def f(): def f():
@ -452,7 +452,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________ ___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x1bab200> self = <failure_demo.TestMoreErrors instance at 0x1b56ab8>
def test_z1_unpack_error(self): def test_z1_unpack_error(self):
l = [] l = []
@ -462,7 +462,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________ ____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1bb36c8> self = <failure_demo.TestMoreErrors instance at 0x1bf0200>
def test_z2_type_error(self): def test_z2_type_error(self):
l = 3 l = 3
@ -472,20 +472,20 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________ ______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x1bbce60> self = <failure_demo.TestMoreErrors instance at 0x1bf7dd0>
def test_startswith(self): def test_startswith(self):
s = "123" s = "123"
g = "456" g = "456"
> assert s.startswith(g) > assert s.startswith(g)
E assert False E assert False
E + where False = <built-in method startswith of str object at 0x1ad6bd0>('456') E + where False = <built-in method startswith of str object at 0x1abeb10>('456')
E + where <built-in method startswith of str object at 0x1ad6bd0> = '123'.startswith E + where <built-in method startswith of str object at 0x1abeb10> = '123'.startswith
failure_demo.py:188: AssertionError failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________ __________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x1bbeb48> self = <failure_demo.TestMoreErrors instance at 0x1bf8a70>
def test_startswith_nested(self): def test_startswith_nested(self):
def f(): def f():
@ -494,15 +494,15 @@ get on the terminal - we are working on that):
return "456" return "456"
> assert f().startswith(g()) > assert f().startswith(g())
E assert False E assert False
E + where False = <built-in method startswith of str object at 0x1ad6bd0>('456') E + where False = <built-in method startswith of str object at 0x1abeb10>('456')
E + where <built-in method startswith of str object at 0x1ad6bd0> = '123'.startswith E + where <built-in method startswith of str object at 0x1abeb10> = '123'.startswith
E + where '123' = <function f at 0x1baade8>() E + where '123' = <function f at 0x1bf9050>()
E + and '456' = <function g at 0x1baad70>() E + and '456' = <function g at 0x1bf90c8>()
failure_demo.py:195: AssertionError failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________ _____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x1bbe098> self = <failure_demo.TestMoreErrors instance at 0x1bf0248>
def test_global_func(self): def test_global_func(self):
> assert isinstance(globf(42), float) > assert isinstance(globf(42), float)
@ -513,19 +513,19 @@ get on the terminal - we are working on that):
failure_demo.py:198: AssertionError failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________ _______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x1ba7bd8> self = <failure_demo.TestMoreErrors instance at 0x1be2998>
def test_instance(self): def test_instance(self):
self.x = 6*7 self.x = 6*7
> assert self.x != 42 > assert self.x != 42
E assert 42 != 42 E assert 42 != 42
E + where 42 = 42 E + where 42 = 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x1ba7bd8>.x E + where 42 = <failure_demo.TestMoreErrors instance at 0x1be2998>.x
failure_demo.py:202: AssertionError failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________ _______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x1bbca28> self = <failure_demo.TestMoreErrors instance at 0x1bf7f80>
def test_compare(self): def test_compare(self):
> assert globf(10) < 5 > assert globf(10) < 5
@ -535,7 +535,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________ _____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x1bc0908> self = <failure_demo.TestMoreErrors instance at 0x1bfb878>
def test_try_finally(self): def test_try_finally(self):
x = 1 x = 1
@ -544,4 +544,4 @@ get on the terminal - we are working on that):
E assert 1 == 0 E assert 1 == 0
failure_demo.py:210: AssertionError failure_demo.py:210: AssertionError
======================== 39 failed in 0.22 seconds ========================= ======================== 39 failed in 0.26 seconds =========================
@ -109,13 +109,13 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
gw0 I / gw1 I / gw2 I / gw3 I
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
scheduling tests via LoadScheduling
============================= in 0.35 seconds =============================
.. _`excontrolskip`:
@ -156,12 +156,12 @@ and when running it will see a skipped "slow" test::
$ py.test -rs  # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
collecting ... collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-40/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.02 seconds ====================
@ -169,7 +169,7 @@ Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
collecting ... collected 2 items
test_module.py ..
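The ``conftest.py`` that produces this skipping behavior is not part of this excerpt. As a sketch only (the real file may differ in detail), the usual pattern combines the ``pytest_addoption`` and ``pytest_runtest_setup`` hooks:

```python
# content of conftest.py -- illustrative sketch, not necessarily the exact file
import pytest

def pytest_addoption(parser):
    # register the --runslow command line option
    parser.addoption("--runslow", action="store_true",
                     default=False, help="also run tests marked as slow")

def pytest_runtest_setup(item):
    # skip tests carrying the 'slow' keyword unless --runslow was given
    if "slow" in item.keywords and not item.config.getvalue("runslow"):
        pytest.skip("need --runslow option to run")
```

The skip message here matches the one shown in the summary line above; the "slow" keyword is typically attached via ``pytest.mark.slow``.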
@ -213,7 +213,7 @@ Let's run our little function::
E Failed: not configured: 42
test_checkconfig.py:8: Failed
1 failed in 0.03 seconds
Detect if running from within a py.test run
--------------------------------------------------------------
@ -261,7 +261,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
project deps: mylib-1.1
collecting ... collected 0 items
@ -284,7 +284,7 @@ which will add info only when run with "-v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2 -- /home/hpk/venv/0/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
@ -295,7 +295,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
collecting ... collected 0 items
============================= in 0.00 seconds =============================
@ -1,32 +0,0 @@
changing Python test discovery patterns
--------------------------------------------------
You can influence python test file, function and class prefixes through
the :confval:`python_patterns` configuration value to determine which
files are checked and which test functions are found. Example for using
a scheme that builds on ``check`` rather than on ``test`` prefixes::
# content of setup.cfg
[pytest]
python_patterns =
files: check_*.py
functions: check_
classes: Check
See
:confval:`python_funcprefixes` and :confval:`python_classprefixes`
changing test file discovery
-----------------------------------------------------
You can specify patterns where python tests are found::
python_testfilepatterns =
testing/**/{purebasename}.py
testing/*.py
.. note::
conftest.py files are never considered for test discovery
@ -12,25 +12,26 @@ On naming, nosetests, licensing and magic

Why a ``py.test`` instead of a ``pytest`` command?
++++++++++++++++++++++++++++++++++++++++++++++++++

Some of the reasons are historic, others are practical. ``py.test``
used to be part of the ``py`` package which provided several developer
utilities, all starting with ``py.<TAB>``, thus providing nice
TAB-completion. If
you install ``pip install pycmd`` you get these tools from a separate
package. These days the command line tool could be called ``pytest``,
but since many people have gotten used to the old name and there
is another tool named "pytest", we just decided to stick with
``py.test``.

How does py.test relate to nose and unittest?
+++++++++++++++++++++++++++++++++++++++++++++++++

py.test and nose_ share basic philosophy when it comes
to running and writing Python tests. In fact, you can run many tests
written for nose with py.test. nose_ was originally created
as a clone of ``py.test`` when py.test was in the ``0.8`` release
cycle. Note that starting with pytest-2.0 support for running unittest
test suites is greatly improved and you should be able to run
many Django and Twisted test suites without modification.

.. _features: test/features.html

@ -39,22 +40,20 @@ What's this "magic" with py.test?
++++++++++++++++++++++++++++++++++++++++++

Around 2007 (version ``0.8``) some people claimed that py.test
was using too much "magic". Partly this has been fixed by removing
unused, deprecated or complicated code. It is today probably one
of the smallest, most universally runnable and most
customizable testing frameworks for Python. However,
``py.test`` still uses many metaprogramming techniques and
reading its source is thus likely not something for Python beginners.

A second "magic" issue is arguably the assert statement re-interpretation:
when an ``assert`` statement fails, py.test re-interprets the expression
to show the intermediate values that led to the failure. If your expression
has side effects (better to avoid them anyway!) the intermediate values
may not be the same, obfuscating the initial error (this is also
explained at the command line if it happens).
``py.test --no-assert`` turns off assert re-interpretation.
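The side-effect caveat can be demonstrated without py.test itself. The helper below is invented for illustration: because it changes state on each call, a failing assert that is re-evaluated for reporting would compute a different value than the one that originally failed:

```python
def next_value(_counter=[0]):
    # has a side effect: every call returns a new value
    _counter[0] += 1
    return _counter[0]

# the value used when the assert is first evaluated ...
first = next_value()
# ... differs from the value a re-interpretation step would compute
reevaluated = next_value()
assert first == 1 and reevaluated == 2
```

This is why side-effect-free assert expressions are preferable: re-evaluating them yields the same intermediate values that caused the failure.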
.. _`py namespaces`: index.html
.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py

@ -69,7 +68,7 @@ Is using funcarg- versus xUnit setup a style question?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For simple applications and for people experienced with nose_ or
unittest-style test setup, using `xUnit style setup`_ probably
feels natural. For larger test suites, parametrized testing
or setup of complex test resources using funcargs_ may feel more natural.
Moreover, funcargs are ideal for writing advanced test support

@ -86,13 +85,11 @@ in a managed class/module/function scope.

Why the ``pytest_funcarg__*`` name for funcarg factories?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

We like `Convention over Configuration`_ and didn't see much point
in allowing a more flexible or abstract mechanism. Moreover,
it is nice to be able to search for ``pytest_funcarg__MYARG`` in
a code base and safely find all factory functions for
the ``MYARG`` function argument.

.. _`Convention over Configuration`: http://en.wikipedia.org/wiki/Convention_over_Configuration
@ -1,5 +1,5 @@
==============================================================
Injecting objects into test functions (funcargs)
==============================================================

.. currentmodule:: _pytest.python

@ -11,16 +11,26 @@ creating and managing test function arguments

Dependency injection through function arguments
=================================================

py.test lets you inject objects into test functions and precisely
control their life cycle in relation to the test execution. It is
also possible to run a test function multiple times with different objects.

The basic mechanism for injecting objects is also called the
*funcarg mechanism* because objects are ultimately injected
by calling a test function with them as arguments. Unlike the
classical xUnit approach, *funcargs* relate more to `Dependency Injection`_
because they help to de-couple test code from the objects required for
it to execute.

.. _`Dependency injection`: http://en.wikipedia.org/wiki/Dependency_injection

To create a value with which to call a test function, a factory function
is called which gets full access to the test function context and can
register finalizers or invoke lifecycle-caching helpers. The factory
can be implemented in the same test class or test module, in a
per-directory ``conftest.py`` file or even in an external plugin. This
allows full de-coupling of test code from the objects needed for test
execution.
A test function may be invoked multiple times in which case we
speak of :ref:`parametrized testing <parametrizing-tests>`. This can be
@ -28,13 +38,13 @@ very useful if you want to test e.g. against different database backends
or with multiple numerical argument sets and want to reuse the same set
of test functions.

.. _funcarg:

Basic injection example
--------------------------------

Let's look at a simple self-contained test module::

    # content of ./test_simplefactory.py
    def pytest_funcarg__myfuncarg(request):
@ -43,11 +53,15 @@ into a test module::
    def test_function(myfuncarg):
        assert myfuncarg == 17

This test function needs an injected object named ``myfuncarg``.
py.test will discover and call the factory named
``pytest_funcarg__myfuncarg`` within the same module in this case.

Running the test looks like this::

    $ py.test test_simplefactory.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 1 items

    test_simplefactory.py F
@ -64,8 +78,8 @@ Running the test looks like this::
    test_simplefactory.py:5: AssertionError
    ========================= 1 failed in 0.02 seconds =========================

This means that indeed the test function was called with a ``myfuncarg``
argument value of ``42`` and the assert fails.  Here is how py.test
comes to call the test function this way:
1. py.test :ref:`finds <test discovery>` the ``test_function`` because
@ -76,14 +90,15 @@ comes to call the test function this way:
2. ``pytest_funcarg__myfuncarg(request)`` is called and
   returns the value for ``myfuncarg``.

3. the test function can now be called: ``test_function(42)``.
   This results in the above exception because of the assertion
   mismatch.
Note that if you misspell a function argument or want
to use one that isn't available, you'll see an error
with a list of available function arguments.

You can always issue::

    py.test --funcargs test_simplefactory.py
@ -152,7 +167,7 @@ Running this::
    $ py.test test_example.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 10 items

    test_example.py .........F
@ -167,7 +182,7 @@ Running this::
    E       assert 9 < 9

    test_example.py:7: AssertionError
    ==================== 1 failed, 9 passed in 0.05 seconds ====================

Note that the ``pytest_generate_tests(metafunc)`` hook is called during
the test collection phase which is separate from the actual test running.
@ -190,7 +205,7 @@ If you want to select only the run with the value ``7`` you could do::
    $ py.test -v -k 7 test_example.py  # or -k test_func[7]
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2 -- /home/hpk/venv/0/bin/python
    collecting ... collected 10 items

    test_example.py:6: test_func[7] PASSED
View File
@ -16,7 +16,7 @@ Installation options::
To check your installation has installed the correct version::

    $ py.test --version
    This is py.test version 2.0.2.dev2, imported from /home/hpk/p/pytest/pytest.py
    setuptools registered plugins:
      pytest-xdist-1.6.dev2 at /home/hpk/p/pytest-xdist/xdist/plugin.pyc
      pytest-pep8-0.7 at /home/hpk/p/pytest-pep8/pytest_pep8.pyc
@ -41,7 +41,7 @@ That's it. You can execute the test function now::
    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 1 items

    test_sample.py F
@ -57,7 +57,7 @@ That's it. You can execute the test function now::
    test_sample.py:5: AssertionError
    ========================= 1 failed in 0.02 seconds =========================

py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes.  We got a failure report because our little ``func(3)`` call did not return ``5``.
.. note::
@ -80,10 +80,10 @@ py.test found the ``test_answer`` function by following :ref:`standard test disc

.. _`assert statement`: http://docs.python.org/reference/simple_stmts.html#the-assert-statement

Asserting that a certain exception is raised
--------------------------------------------------------------

If you want to assert that some code raises an exception you can
use the ``raises`` helper::

    # content of test_sysexit.py
@ -107,9 +107,9 @@ Running it with, this time in "quiet" reporting mode::
Grouping multiple tests in a class
--------------------------------------------------------------

Once you start to have more than a few tests it often makes sense
to group tests logically, in classes and modules.  Let's write a class
containing two tests::

    # content of test_class.py
    class TestClass:
@ -131,7 +131,7 @@ run the module by passing its filename::
    ================================= FAILURES =================================
    ____________________________ TestClass.test_two ____________________________

    self = <test_class.TestClass instance at 0x187d758>

        def test_two(self):
            x = "hello"
@ -140,7 +140,7 @@ run the module by passing its filename::
    E        +  where False = hasattr('hello', 'check')

    test_class.py:8: AssertionError
    1 failed, 1 passed in 0.04 seconds

The first test passed, the second failed. Again we can easily see
the intermediate values used in the assertion, helping us to
@ -169,7 +169,7 @@ before performing the test function call. Let's just run it::
    ================================= FAILURES =================================
    _____________________________ test_needsfiles ______________________________

    tmpdir = local('/tmp/pytest-92/test_needsfiles0')

        def test_needsfiles(tmpdir):
            print tmpdir
@ -178,8 +178,8 @@ before performing the test function call. Let's just run it::
    test_tmpdir.py:3: AssertionError
    ----------------------------- Captured stdout ------------------------------
    /tmp/pytest-92/test_needsfiles0
    1 failed in 0.14 seconds

Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.
@ -194,7 +194,7 @@ where to go next
Here are a few suggestions where to go next:

* :ref:`cmdline` for command line invocation examples
* :ref:`good practises <goodpractises>` for virtualenv, test layout, genscript support
* :ref:`apiref` for documentation and examples on using py.test
* :ref:`plugins` managing and writing plugins
@ -228,7 +228,7 @@ py.test not found on Windows despite installation?

- **Jython2.5.1 on Windows XP**: `Jython does not create command line launchers`_
  so ``py.test`` will not work correctly.  You may install py.test on
  CPython and type ``py.test --genscript=mytest`` and then use
  ``jython mytest`` to run py.test for your tests to run with Jython.

:ref:`examples` for more complex examples
View File
@ -8,14 +8,11 @@ Good Integration Practises
Work with virtual environments
-----------------------------------------------------------

We recommend using virtualenv_ environments and easy_install_
(or pip_) for installing your application dependencies as well as
the ``pytest`` package itself.  This way you will get a much more
reproducible environment.  A good tool to help you automate test runs
against multiple dependency configurations or Python interpreters
is `tox`_.

.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv
.. _`buildout`: http://www.buildout.org/
@ -24,10 +21,11 @@ server Hudson_.
Use tox and Continuous Integration servers
-------------------------------------------------

If you frequently release code to the public you
may want to look into `tox`_, the virtualenv test automation
tool and its `pytest support <http://codespeak.net/tox/example/pytest.html>`_.
The basic idea is to generate a JUnitXML file through the ``--junitxml=PATH``
option and have a continuous integration server like Jenkins_ pick it up
and generate reports.
.. _standalone:
.. _`genscript method`:
@ -90,7 +88,7 @@ If you now type::
this will execute your tests using ``runtest.py``. As this is a
standalone version of ``py.test`` no prior installation whatsoever is
required for calling the test command. You can also pass additional
arguments to the subprocess-calls such as your test directory or other
options.

.. _`test discovery`:
@ -101,14 +99,14 @@ Conventions for Python test discovery
``py.test`` implements the following standard test discovery:

* collection starts from the initial command line arguments
  which may be directories, filenames or test ids.
* recurse into directories, unless they match :confval:`norecursedirs`
* ``test_*.py`` or ``*_test.py`` files, imported by their `package name`_.
* ``Test`` prefixed test classes (without an ``__init__`` method)
* ``test_`` prefixed test functions or methods are test items

For examples of how to customize your test discovery see :doc:`example/pythoncollection`.

py.test additionally discovers tests using the standard
:ref:`unittest.TestCase <unittest.TestCase>` subclassing technique.
@ -154,8 +152,8 @@ You can always run your tests by pointing to it::
Test modules are imported under their fully qualified name as follows:

* find ``basedir`` -- this is the first "upward" (towards the root)
  directory not containing an ``__init__.py``

* perform ``sys.path.insert(0, basedir)`` to make the fully
  qualified test module path importable.
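The two steps above can be sketched as follows (simplified; pytest's real import logic handles more cases, and the helper names here are made up):

```python
import os
import sys

def find_basedir(path):
    """Walk upward (towards the root) until a directory
    without an __init__.py file is found."""
    path = os.path.abspath(path)
    while os.path.isfile(os.path.join(path, "__init__.py")):
        path = os.path.dirname(path)
    return path

def make_importable(test_file):
    """Prepend the basedir to sys.path so that the test module
    can be imported under its fully qualified package name."""
    basedir = find_basedir(os.path.dirname(test_file))
    if basedir not in sys.path:
        sys.path.insert(0, basedir)
    return basedir
```

With a layout ``root/pkg/sub/test_x.py`` where ``pkg`` and ``sub`` contain ``__init__.py`` files, ``root`` becomes the basedir and the module imports as ``pkg.sub.test_x``.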
View File
@ -4,23 +4,25 @@ Welcome to ``py.test``!
=============================================

- **a mature full-featured testing tool**

  - runs on Posix/Windows, Python 2.4-3.2, PyPy and Jython
  - continuously `tested on many Python interpreters <http://hudson.testrun.org/view/pytest/job/pytest/>`_
  - used in :ref:`many projects and organisations <projects>`, in test
    suites ranging from 10 to 10s of thousands of tests
  - has :ref:`comprehensive documentation <toc>`
  - comes with :ref:`tested examples <examples>`
  - supports :ref:`good integration practises <goodpractises>`

- **provides no-boilerplate testing**

  - makes it :ref:`easy to get started <getstarted>`
  - refined :ref:`usage options <usage>`
  - :ref:`assert with the assert statement`
  - helpful :ref:`traceback and failing assertion reporting <tbreportdemo>`
  - allows :ref:`print debugging <printdebugging>` and :ref:`the
    capturing of standard output during test execution <captures>`
  - supports :pep:`8` compliant coding styles in tests

- **supports functional testing and complex test setups**
@ -39,7 +41,7 @@ Welcome to ``py.test``!
    tests, including running testcases made for Django and trial
  - supports extended :ref:`xUnit style setup <xunitsetup>`
  - supports domain-specific :ref:`non-python tests`
  - supports the generation of testing coverage reports
  - `Javascript unit- and functional testing`_

- **extensive plugin and customization system**
View File
@ -16,4 +16,5 @@
.. _`pip`: http://pypi.python.org/pypi/pip
.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv
.. _hudson: http://hudson-ci.org/
.. _jenkins: http://jenkins-ci.org/
.. _tox: http://codespeak.net/tox
View File
@ -44,7 +44,7 @@ Marking whole classes or modules
----------------------------------------------------

If you are programming with Python2.6 you may use ``pytest.mark`` decorators
with classes to apply markers to all of its test methods::

    # content of test_mark_classlevel.py
    import pytest
@ -88,7 +88,7 @@ You can use the ``-k`` command line option to select tests::
    $ py.test -k webtest  # running with the above defined examples yields
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 4 items

    test_mark.py ..
@ -100,7 +100,7 @@ And you can also run all tests except the ones that match the keyword::
    $ py.test -k-webtest
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 4 items

    ===================== 4 tests deselected by '-webtest' =====================
@ -110,7 +110,7 @@ Or to only select the class::
    $ py.test -kTestClass
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 4 items

    test_mark_classlevel.py ..
View File
@ -9,8 +9,8 @@ on global settings or which invokes code which cannot be easily
tested such as network access.  The ``monkeypatch`` function argument
helps you to safely set/delete an attribute, dictionary item or
environment variable or to modify ``sys.path`` for importing.
See the `monkeypatch blog post`_ for some introduction material
and a discussion of its motivation.

.. _`monkeypatch blog post`: http://tetamap.wordpress.com/2009/03/03/monkeypatching-in-unit-tests-done-right/
@ -18,7 +18,7 @@ and motivation.
Simple example: patching ``os.path.expanduser``
---------------------------------------------------

If, for instance, you want to pretend that ``os.expanduser`` returns a certain
directory, you can use the :py:meth:`monkeypatch.setattr` method to
patch this function before calling into a function which uses it::
@ -39,7 +39,7 @@ will be undone.
.. background check:
    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 0 items

    =============================  in 0.00 seconds =============================
View File
@ -16,5 +16,5 @@ these renaming rules::
    py.test.cmdline.main -> pytest.main

The old ``py.test.*`` ways to access functionality remain
valid but you are encouraged to do global renaming according
to the above rules in your test code.
View File
@ -1,8 +1,8 @@
.. _plugins:

Working with plugins and conftest files
=============================================

.. _`local plugin`:

py.test implements all aspects of configuration, collection, running and reporting by calling `well specified hooks`_. Virtually any Python module can be registered as a plugin. It can implement any number of hook functions (usually two or three) which all have a ``pytest_`` prefix, making hook functions easy to distinguish and find. There are three basic location types:

* `builtin plugins`_: loaded from py.test's own ``pytest/plugin`` directory.
@ -12,14 +12,17 @@ py.test implements all aspects of configuration, collection, running and reporti
.. _`pytest/plugin`: http://bitbucket.org/hpk42/pytest/src/tip/pytest/plugin/
.. _`conftest.py plugins`:
.. _`conftest.py`:
.. _`localplugin`:
.. _`conftest`:

conftest.py: local per-directory plugins
--------------------------------------------------------------

local ``conftest.py`` plugins contain directory-specific hook
implementations.  Session and test running activities will
invoke all hooks defined in ``conftest.py`` files closer to the
root of the filesystem.  Example: Assume the following layout
and content of files::

    a/conftest.py:
        def pytest_runtest_setup(item):
@ -39,11 +42,6 @@ Here is how you might run it::

    py.test test_flat.py   # will not show "setting up"
    py.test a/test_sub.py  # will show "setting up"

.. Note::
    If you have ``conftest.py`` files which do not reside in a
    python package directory (i.e. one containing an ``__init__.py``) then
@ -112,12 +110,12 @@ Making your plugin installable by others
-----------------------------------------------

If you want to make your plugin externally available, you
may define a so-called entry point for your distribution so
that ``py.test`` finds your plugin module.  Entry points are
a feature that is provided by `setuptools`_ or `Distribute`_.

py.test looks up the ``pytest11`` entrypoint to discover its
plugins and you can thus make your plugin available by defining
it in your setuptools/distribute-based setup-invocation:

.. sourcecode:: python
@ -137,8 +135,8 @@ setuptools/distribute-based setup-invocation:
    )

If a package is installed this way, py.test will load
``myproject.pluginmodule`` as a plugin which can define
`well specified hooks`_.
Plugin discovery order at tool startup
--------------------------------------------
@ -260,11 +258,11 @@ hook specification and validation
py.test calls hook functions to implement initialization, running,
test execution and reporting. When py.test loads a plugin it validates
that each hook function conforms to its respective hook specification.
Each hook function name and its argument names need to match a hook
specification. However, a hook function may accept *fewer* parameters
by simply not specifying them. If you mistype argument names or the
hook name itself you get an error showing the available arguments.
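For instance, the ``pytest_collection_modifyitems`` hook is specified
with ``(session, config, items)``, but an implementation may declare only
the arguments it needs. A minimal sketch (the reversal is just an
arbitrary demonstration, not a recommended ordering):

```python
# conftest.py sketch: the full specification is
# pytest_collection_modifyitems(session, config, items), but a hook
# implementation may accept fewer parameters by simply omitting them
def pytest_collection_modifyitems(items):
    # reverse the collected test order as a trivial demonstration
    items.reverse()
```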
initialisation, command line and configuration hooks
--------------------------------------------------------------------

@@ -292,8 +290,9 @@ All runtest related hooks receive a :py:class:`pytest.Item` object.
For deeper understanding you may look at the default implementation of
these hooks in :py:mod:`_pytest.runner` and maybe also
in :py:mod:`_pytest.pdb` which interacts with :py:mod:`_pytest.capture`
and its input/output capturing in order to immediately drop
into interactive debugging when a test failure occurs.
The :py:mod:`_pytest.terminal` reporter specifically uses
the reporting hook to print information about a test run.
@@ -46,7 +46,7 @@ Some organisations using py.test

* `Shootq <http://web.shootq.com/>`_
* `Stups department of Heinrich Heine University Düsseldorf <http://www.stups.uni-duesseldorf.de/projects.php>`_
* `cellzome <http://www.cellzome.com/>`_
* `Open End, Gothenburg <http://www.openend.se>`_
* `Laboratory of Bioinformatics, Warsaw <http://genesilico.pl/>`_
* `merlinux, Germany <http://merlinux.eu>`_
* many more ... (please be so kind to send a note via :ref:`contact`)
@@ -1,2 +1,2 @@
[pytest]
# just defined to prevent the root level tox.ini from kicking in
@@ -5,19 +5,18 @@ skip and xfail mechanisms
=====================================================================

You can skip or "xfail" test functions, either by marking functions
with a decorator or by calling the ``pytest.skip|xfail`` functions.
A *skip* means that you expect your test to pass unless a certain configuration or condition (e.g. wrong Python interpreter, missing dependency) prevents it from running. And *xfail* means that you expect your test to fail because there is an
implementation problem. py.test counts and lists *xfailing* tests separately
and it is possible to give additional information, such as a bug number or a URL.

Detailed information about skipped/xfailed tests is by default not shown
at the end of a test run to avoid cluttering the output. You can use
the ``-r`` option to see details corresponding to the "short" letters
shown in the test progress::

    py.test -rxs  # show extra info on skips and xfails

(See :ref:`how to change command line options defaults`)
@@ -26,7 +25,7 @@ shown in the test progress::

Skipping a single function
-------------------------------------------

Here is an example of marking a test function to be skipped
when run on a Python3 interpreter::

    import sys
@@ -60,7 +59,7 @@ on a test configuration value::

    def test_function(...):
        ...

You can create a shortcut for your conditional skip decorator
at module level like this::

    win32only = pytest.mark.skipif("sys.platform != 'win32'")
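The shortcut can then be applied like any other mark; a sketch
(``test_registry_access`` is a hypothetical test name used only for
illustration):

```python
import sys
import pytest

# module-level shortcut for a conditional skip, as described above
win32only = pytest.mark.skipif("sys.platform != 'win32'")

@win32only
def test_registry_access():
    # the body only runs when the condition string evaluates to False,
    # i.e. on win32 platforms; elsewhere the test is reported as skipped
    assert sys.platform == "win32"
```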
@@ -73,9 +72,9 @@ at module level like this::

skip all test functions of a class
--------------------------------------

As with all function :ref:`mark` you can skip test functions at the
`whole class- or module level`_. Here is an example
for skipping all methods of a test class based on the platform::

    class TestPosixCalls:
        pytestmark = pytest.mark.skipif("sys.platform == 'win32'")
@@ -93,9 +92,7 @@ the skipif decorator on classes::

        def test_function(self):
            "will not be setup or run under 'win32' platform"

Using multiple "skipif" decorators on a single function is generally
fine; it means that if any of the conditions apply, the function
execution will be skipped.

.. _`whole class- or module level`: mark.html#scoped-marking
@@ -122,16 +119,16 @@ By specifying on the commandline::

you can force the running and reporting of an ``xfail`` marked test
as if it weren't marked at all.

As with skipif_ you can also mark your expectation of a failure
on a particular platform::

    @pytest.mark.xfail("sys.version_info >= (3,0)")
    def test_function():
        ...

You can furthermore prevent the running of an "xfail" test or
specify a reason such as a bug ID or similar. Here is
a simple test file with several usages:

.. literalinclude:: example/xfail_demo.py
@@ -139,10 +136,10 @@ Running it with the report-on-xfail option gives this output::

    example $ py.test -rx xfail_demo.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 6 items

    xfail_demo.py xxxxxx
    ========================= short test summary info ==========================
    XFAIL xfail_demo.py::test_hello
    XFAIL xfail_demo.py::test_hello2
@@ -152,9 +149,11 @@ Running it with the report-on-xfail option gives this output::

    XFAIL xfail_demo.py::test_hello4
      bug 110
    XFAIL xfail_demo.py::test_hello5
      condition: pytest.__version__[0] != "17"
    XFAIL xfail_demo.py::test_hello6
      reason: reason
    ======================== 6 xfailed in 0.06 seconds =========================

imperative xfail from within a test or setup function
------------------------------------------------------
@@ -177,8 +176,8 @@ or within a test or test setup function::

    docutils = pytest.importorskip("docutils")

If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test. You can also skip based on the
version number of a library::

    docutils = pytest.importorskip("docutils", minversion="0.3")
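As a runnable illustration, ``importorskip`` returns the imported module
object when the import succeeds (``json`` is used here only as a
stand-in that is always importable; with a genuinely missing library the
call would instead produce a skip outcome at import time):

```python
import pytest

# importorskip returns the module object when the import succeeds
json = pytest.importorskip("json")

def test_roundtrip():
    # the returned module can be used like a normal import
    assert json.loads(json.dumps({"a": 1})) == {"a": 1}
```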
@@ -188,7 +187,7 @@ imperative skip from within a test or setup function
------------------------------------------------------

If for some reason you cannot declare skip-conditions
you can also imperatively produce a skip-outcome from
within test or setup code. Example::

    def test_function():
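A minimal self-contained sketch of such an imperative skip;
``valid_config()`` is a hypothetical helper standing in for whatever
runtime check your project actually needs:

```python
import pytest

def valid_config():
    # hypothetical placeholder check; always False in this sketch
    return False

def test_function():
    # produce a skip outcome at runtime instead of via a decorator
    if not valid_config():
        pytest.skip("unsupported configuration")
```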
@@ -28,7 +28,7 @@ Running this would result in a passed test except for the last

    $ py.test test_tmpdir.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 1 items

    test_tmpdir.py F
@@ -36,7 +36,7 @@ Running this would result in a passed test except for the last

    ================================= FAILURES =================================
    _____________________________ test_create_file _____________________________

    tmpdir = local('/tmp/pytest-93/test_create_file0')

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
@@ -47,14 +47,14 @@ Running this would result in a passed test except for the last

    E       assert 0

    test_tmpdir.py:7: AssertionError
    ========================= 1 failed in 0.04 seconds =========================

.. _`base temporary directory`:

the default base temporary directory
-----------------------------------------------

Temporary directories are by default created as sub-directories of
the system temporary directory. The base name will be ``pytest-NUM`` where
``NUM`` will be incremented with each test run. Moreover, entries older
than 3 temporary directories will be removed.
@@ -8,7 +8,7 @@ py.test has limited support for running Python `unittest.py style`_ tests.

It will automatically collect ``unittest.TestCase`` subclasses
and their ``test`` methods in test files. It will invoke
``setUp/tearDown`` methods but also perform py.test's standard ways
of treating tests such as IO capturing::

    # content of test_unittest.py
@@ -24,7 +24,7 @@ Running it yields::

    $ py.test test_unittest.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev2
    collecting ... collected 1 items

    test_unittest.py F
@@ -12,7 +12,7 @@ calling pytest through ``python -m pytest``

.. versionadded:: 2.0

If you use Python-2.5 or later you can invoke testing through the
Python interpreter from the command line::

    python -m pytest [...]

@@ -20,8 +20,8 @@ Python interpreter from the command line::

This is equivalent to invoking the command line script ``py.test [...]``
directly.

Getting help on version, option names, environment variables
--------------------------------------------------------------

::
@@ -96,7 +96,7 @@ can use a helper::

.. versionadded: 2.0.0

In previous versions you could only enter PDB tracing if
you disabled capturing on the command line via ``py.test -s``.

creating JUnitXML format files
----------------------------------------------------
@@ -7,13 +7,13 @@ xdist: pytest distributed testing plugin

The `pytest-xdist`_ plugin extends py.test with some unique
test execution modes:

* Looponfail: run your tests repeatedly in a subprocess. After each
  run, py.test waits until a file in your project changes and then
  re-runs the previously failing tests. This is repeated until all
  tests pass. At this point a full run is again performed.

* multiprocess Load-balancing: if you have multiple CPUs or hosts you can use
  them for a combined test run. This allows you to speed up
  development or to use special resources of remote machines.

* Multi-Platform coverage: you can specify different Python interpreters
@@ -25,8 +25,8 @@ are reported back and displayed to your local terminal.

You may specify different Python versions and interpreters.

Installation of xdist plugin
------------------------------

Install the plugin with::
@@ -55,13 +55,13 @@ To send tests to multiple CPUs, type::

    py.test -n NUM

Especially for longer running tests or tests requiring
a lot of I/O this can lead to considerable speed ups.

Running tests in a Python subprocess
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To instantiate a Python-2.4 subprocess and send tests to it, you may type::

    py.test -d --tx popen//python=python2.4
@@ -70,10 +70,10 @@ Python interpreter, found in your system binary lookup path.

If you prefix the --tx option value like this::

    py.test -d --tx 3*popen//python=python2.4

then three subprocesses would be created and the tests
will be distributed to them and run simultaneously.

.. _looponfailing:
@@ -82,11 +82,13 @@ Running tests in looponfailing mode
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For refactoring a project with a medium or large test suite
you can use the looponfailing mode. Simply add the ``-f`` option::

    py.test -f

and py.test will run your tests. Assuming you have failures it will then
wait for file changes and re-run the failing test set. File changes are
detected by looking at ``looponfailroots`` root directories and all of
their contents (recursively). If the default for this value does not
work for you, you can change it in your project by setting a
configuration option::

    # content of a pytest.ini, setup.cfg or tox.ini file
    [pytest]
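A complete sketch of such an ini section (the directory names ``mypkg``
and ``testing`` are placeholders; check the exact option name against
your installed pytest-xdist version):

```ini
# content of pytest.ini (or the [pytest] section of setup.cfg / tox.ini)
[pytest]
# watch these directories (recursively) for file changes in -f mode;
# "mypkg" and "testing" are placeholder project directories
looponfailroots = mypkg testing
```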
@@ -98,19 +100,21 @@ Sending tests to remote SSH accounts
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Suppose you have a package ``mypkg`` which contains some
tests that you can successfully run locally. And you also
have an ssh-reachable machine ``myhost``. Then
you can ad-hoc distribute your tests by typing::

    py.test -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg

This will synchronize your ``mypkg`` package directory
with a remote ssh account and then collect and run your
tests at the remote side.

You can specify multiple ``--rsyncdir`` directories
to be sent to the remote side.
.. XXX CHECK

**NOTE:** For py.test to collect and send tests correctly
you not only need to make sure all code and tests
directories are rsynced, but that any test (sub) directory
@@ -158,8 +162,7 @@ Specifying test exec environments in an ini file
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

pytest (since version 2.0) supports ini-style configuration.
For example, you could make running with three subprocesses your default::

    [pytest]
    addopts = -n3
@@ -65,7 +65,7 @@ Similarly, the following methods are called around each method invocation::

        with a setup_method call.
        """

If you would rather define test functions directly at module level
you can also use the following functions to implement fixtures::

    def setup_function(function):
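A self-contained sketch of such module-level fixtures; the shared
``state`` dict is only an illustrative stand-in for a real resource,
not a pytest API:

```python
# module-level fixture sketch: py.test calls setup_function before and
# teardown_function after every test function defined in this module
state = {}

def setup_function(function):
    # "function" is the test function about to be executed
    state["resource"] = "opened for %s" % function.__name__

def teardown_function(function):
    # called after the test function has finished
    state.pop("resource", None)

def test_uses_resource():
    assert state["resource"] == "opened for test_uses_resource"
```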