many doc improvements and fixes

This commit is contained in:
holger krekel 2012-10-18 12:24:50 +02:00
parent cf17f1d628
commit dbaedbacde
25 changed files with 558 additions and 253 deletions


@ -24,3 +24,4 @@ env/
3rdparty/
.tox
.cache
.coverage


@ -0,0 +1,45 @@
pytest-2.3: generalized fixtures/funcarg mechanism
=============================================================================
pytest is a popular tool for writing automated tests in Python. Version
2.3 comes with several innovations for writing test fixtures --
needed to provide a fixed base state or preinitialized objects
for your tests. For info and tutorial-style examples, see
http://pytest.org/dev/fixture.html
All changes are backward compatible and you should be able to continue
to run your existing test suites. In particular, dependency-injected
"funcargs" still form the base of the improved fixture system but it is
now easy to create parametrized funcargs/fixtures or cache them across
modules or test sessions. Moreover, there is now support for using
pytest fixtures with unittest-style suites, see here for example:
http://pytest.org/dev/unittest.html
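The parametrized funcargs/fixtures mentioned above can be sketched as follows. This is a minimal illustration, not from the release itself: the ``make_connection`` helper and the backend names are invented.

```python
import pytest

def make_connection(backend):
    # plain factory, so the logic is testable outside of pytest
    return {"backend": backend, "connected": True}

# pytest-2.3 style: a fixture parametrized over two values; every test
# that requests ``connection`` runs once per parameter
@pytest.fixture(scope="module", params=["sqlite", "postgres"])
def connection(request):
    # request.param carries the currently active parameter
    return make_connection(request.param)

def test_connection_is_open(connection):
    assert connection["connected"]
```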
If you are interested in the precise reasoning behind the pytest-2.3 fixture
evolution, please consult http://pytest.org/dev/funcarg_compare.html
For general info on installation and getting started:
http://pytest.org/dev/getting-started.html
Docs and PDF access as usual at:
http://pytest.org
and more details for those already familiar with pytest can be found
in the CHANGELOG below.
Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko,
Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping
to get the new features right and well integrated. Ronny and Floris
also helped to fix a number of bugs and yet more people helped by
providing bug reports.
have fun,
holger krekel


@ -26,7 +26,7 @@ you will see the return value of the function call::
$ py.test test_assert1.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_assert1.py F
@ -110,7 +110,7 @@ if you run this module::
$ py.test test_assert2.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_assert2.py F

186
doc/en/attic_fixtures.txt Normal file

@ -0,0 +1,186 @@
**Test classes, modules or whole projects can make use of
one or more fixtures**. All required fixture functions will execute
before a test from the specifying context executes. You can use this
to make tests operate from a pre-initialized directory or with
certain environment variables or with pre-configured global application
settings.
For example, the Django_ project requires database
initialization to be able to import from and use its model objects.
For that, the `pytest-django`_ plugin provides fixtures which your
project can then easily depend on or extend, simply by referencing the
name of the particular fixture.
Fixture functions have limited visibility which depends on where they
are defined. If they are defined on a test class, only its test methods
may use them. A fixture defined in a module can only be used
from that test module. A fixture defined in a conftest.py file
can only be used by the tests below the directory of that file.
Lastly, plugins can define fixtures which are available across all
projects.
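The visibility rules above can be sketched with a root-level ``conftest.py``. The ``app_config`` name and its contents are invented for illustration:

```python
import pytest

def make_app_config():
    # plain factory, so the value is testable outside of pytest
    return {"debug": False}

# content of conftest.py at the project root (sketch): this fixture is
# visible to every test module below this directory
@pytest.fixture
def app_config():
    return make_app_config()

# any test module below this directory can request it by name:
def test_mode(app_config):
    assert app_config["debug"] is False
```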
Python, Java and many other languages support a so-called xUnit_ style
for providing a fixed state, `test fixtures`_, for running tests. It
typically involves calling a setup function before and a teardown
function after a test executes. In 2005 pytest introduced a scope-specific
model of automatically detecting and calling setup and teardown
functions on a per-module, class or function basis. The Python unittest
package and nose have subsequently incorporated them. This model
remains supported by pytest as :ref:`classic xunit`.
One property of xunit fixture functions is that they work implicitly
by preparing global state or setting attributes on TestCase objects.
By contrast, pytest provides :ref:`funcargs` which allow you to
dependency-inject application test state into test functions or
methods as function arguments. If your application is sufficiently modular
or if you are creating a new project, we recommend you head over to
:ref:`funcargs` because many pytest users agree that using this
paradigm leads to better application and test organisation.
However, not all programs and frameworks can be written and tested in
a fully modular way. They rather require preparation of global state
like database setup, on which further fixtures, such as preparing
application-specific tables or wrapping tests in transactions, can
build. For those
needs, pytest-2.3 now supports new **fixture functions** which come with
a ton of improvements over classic xunit fixture writing. Fixture functions:
- allow separating different setup concerns into multiple modular functions,
- can receive and fully interoperate with :ref:`funcargs <resources>`,
- are called multiple times if their funcargs are parametrized,
- don't need to be defined directly in your test classes or modules;
  they can also be defined in a plugin or :ref:`conftest.py <conftest.py>` file,
- are called on a per-session, per-module, per-class or per-function basis
  by means of a simple "scope" declaration,
- can access the :ref:`request <request>` object which allows you to
  introspect and interact with the (scoped) test context,
- can add cleanup functions which will be invoked when the last test
  of the fixture's test context has finished executing.
All of these features are now demonstrated by little examples.
test modules accessing a global resource
-------------------------------------------------------
.. note::
Relying on `global state is considered bad programming practice <http://en.wikipedia.org/wiki/Global_variable>`_ but when you work with an application
that relies on it you often have no choice.
If you want test modules to access a global resource,
you can attach the resource to the module globals in
a per-module fixture function. We use a :ref:`resource factory
<@pytest.fixture>` to create our global resource::
    # content of conftest.py
    import pytest

    class GlobalResource:
        def __init__(self):
            pass

    @pytest.fixture(scope="session")
    def globresource():
        return GlobalResource()

    @pytest.fixture(scope="module")
    def setresource(request, globresource):
        request.module.globresource = globresource
Now any test module can access ``globresource`` as a module global::
    # content of test_glob.py

    def test_1():
        print ("test_1 %s" % globresource)

    def test_2():
        print ("test_2 %s" % globresource)
Let's run this module without output-capturing::
$ py.test -qs test_glob.py
FF
================================= FAILURES =================================
__________________________________ test_1 __________________________________
def test_1():
> print ("test_1 %s" % globresource)
E NameError: global name 'globresource' is not defined
test_glob.py:3: NameError
__________________________________ test_2 __________________________________
def test_2():
> print ("test_2 %s" % globresource)
E NameError: global name 'globresource' is not defined
test_glob.py:5: NameError
Note that, as the output shows, both tests currently fail with a
``NameError``: nothing in ``test_glob.py`` requests the ``setresource``
fixture, so the module global is never actually set.
Parametrizing the global resource
+++++++++++++++++++++++++++++++++++++++++++++++++
We extend the previous example and add parametrization to the globresource
factory and also add a finalizer::
    # content of conftest.py
    import pytest

    class GlobalResource:
        def __init__(self, param):
            self.param = param

    @pytest.fixture(scope="session", params=[1,2])
    def globresource(request):
        g = GlobalResource(request.param)
        def fin():
            print ("finalizing %s" % g)
        request.addfinalizer(fin)
        return g

    @pytest.fixture(scope="module")
    def setresource(request, globresource):
        request.module.globresource = globresource
And then re-run our test module::
$ py.test -qs test_glob.py
FF
================================= FAILURES =================================
__________________________________ test_1 __________________________________
def test_1():
> print ("test_1 %s" % globresource)
E NameError: global name 'globresource' is not defined
test_glob.py:3: NameError
__________________________________ test_2 __________________________________
def test_2():
> print ("test_2 %s" % globresource)
E NameError: global name 'globresource' is not defined
test_glob.py:5: NameError
We are now running the two tests twice with two different global resource
instances. Note that the tests are ordered such that only
one instance is active at any given time: the finalizer of
the first globresource instance is called before the second
instance is created and passed to the fixture functions.


@ -83,13 +83,6 @@ You can ask for available builtin or project-custom
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple.
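A minimal sketch of ``capsys`` in use; the ``format_greeting`` helper is invented, and the test function assumes it is collected and run by pytest:

```python
def format_greeting(name):
    # plain helper, so the expected text is testable directly
    return "hello %s" % name

def test_greeting_output(capsys):
    print(format_greeting("world"))
    # readouterr() returns everything captured on stdout/stderr so far
    out, err = capsys.readouterr()
    assert out == "hello world\n"
    assert err == ""
```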
tmpdir
return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
monkeypatch
The returned ``monkeypatch`` funcarg provides these
helper methods to modify objects, dictionaries or os.environ::
@ -108,6 +101,8 @@ You can ask for available builtin or project-custom
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
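The ``os.environ`` helpers can be sketched like this; ``read_config`` and the ``APP_MODE`` variable are invented for illustration:

```python
import os

def read_config():
    # helper under test: reads an environment variable with a default
    return os.environ.get("APP_MODE", "production")

def test_debug_mode(monkeypatch):
    # the modification is reverted automatically when the test finishes
    monkeypatch.setenv("APP_MODE", "debug")
    assert read_config() == "debug"
```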
pytestconfig
the pytest config object with access to command line opts.
recwarn
Return a WarningsRecorder instance that provides these methods:
@ -117,4 +112,11 @@ You can ask for available builtin or project-custom
See http://docs.python.org/library/warnings.html for information
on warning categories.
tmpdir
return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.


@ -64,7 +64,7 @@ of the failing function and hide the other one::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
test_module.py .F
@ -78,7 +78,7 @@ of the failing function and hide the other one::
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x2875488>
setting up <function test_func2 at 0x25dd230>
==================== 1 failed, 1 passed in 0.01 seconds ====================
Accessing captured output from a test function


@ -17,7 +17,7 @@
#
# The full version, including alpha/beta/rc tags.
# The short X.Y version.
version = release = "2.3.0.dev22"
version = release = "2.3.0.dev26"
import sys, os


@ -44,7 +44,7 @@ then you can just invoke ``py.test`` without command line options::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
mymodule.py .


@ -26,19 +26,19 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ py.test -v -m webtest
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_server.py:3: test_send_http PASSED
=================== 1 tests deselected by "-m 'webtest'" ===================
================== 1 passed, 1 deselected in 0.00 seconds ==================
================== 1 passed, 1 deselected in 0.01 seconds ==================
Or the inverse, running all tests except the webtest ones::
$ py.test -v -m "not webtest"
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_server.py:6: test_something_quick PASSED
@ -143,7 +143,7 @@ the given argument::
$ py.test -k send_http # running with the above defined examples
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_server.py .
@ -155,7 +155,7 @@ And you can also run all tests except the ones that match the keyword::
$ py.test -k-send_http
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_mark_classlevel.py ..
@ -168,7 +168,7 @@ Or to only select the class::
$ py.test -kTestClass
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_mark_classlevel.py ..
@ -221,18 +221,18 @@ the test needs::
$ py.test -E stage2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_someenv.py s
======================== 1 skipped in 0.01 seconds =========================
======================== 1 skipped in 0.00 seconds =========================
and here is one that specifies exactly the environment needed::
$ py.test -E stage1
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_someenv.py .
@ -347,12 +347,12 @@ then you will see two test skipped and two executed tests as expected::
$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /tmp/doc-exec-189/conftest.py:12: cannot run on platform linux2
SKIP [2] /tmp/doc-exec-54/conftest.py:12: cannot run on platform linux2
=================== 2 passed, 2 skipped in 0.01 seconds ====================
@ -360,7 +360,7 @@ Note that if you specify a platform via the marker-command line option like this
$ py.test -m linux2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_plat.py .


@ -27,15 +27,17 @@ now execute the test specification::
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
collected 0 items / 1 errors
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
================================== ERRORS ==================================
_____________________ ERROR collecting test_simple.yml _____________________
conftest.py:11: in collect
> import yaml # we need a yaml parser, e.g. PyYAML
E ImportError: No module named yaml
========================= 1 error in 0.00 seconds ==========================
test_simple.yml .F
================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.03 seconds ====================
You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@ -54,27 +56,28 @@ consulted when reporting in ``verbose`` mode::
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 0 items / 1 errors
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
================================== ERRORS ==================================
_____________________ ERROR collecting test_simple.yml _____________________
conftest.py:11: in collect
> import yaml # we need a yaml parser, e.g. PyYAML
E ImportError: No module named yaml
========================= 1 error in 0.01 seconds ==========================
test_simple.yml:1: usecase: ok PASSED
test_simple.yml:1: usecase: hello FAILED
================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.03 seconds ====================
While developing your custom test collection and execution it's also
interesting to just look at the collection tree::
nonpython $ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
collected 0 items / 1 errors
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
<YamlFile 'test_simple.yml'>
<YamlItem 'ok'>
<YamlItem 'hello'>
================================== ERRORS ==================================
_____________________ ERROR collecting test_simple.yml _____________________
conftest.py:11: in collect
> import yaml # we need a yaml parser, e.g. PyYAML
E ImportError: No module named yaml
========================= 1 error in 0.01 seconds ==========================
============================= in 0.02 seconds =============================


@ -7,58 +7,11 @@ Parametrizing tests
.. currentmodule:: _pytest.python
py.test makes it easy to parametrize test functions.
For basic docs, see :ref:`parametrize-basics`.
In the following we provide some examples using
the builtin mechanisms.
.. _parametrizemark:
Simple "decorator" parametrization of a test function
----------------------------------------------------------------------------
.. versionadded:: 2.2
The builtin ``pytest.mark.parametrize`` decorator directly enables
parametrization of arguments for a test function. Here is an example
of a test function that wants to compare that processing some input
results in expected output::
    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize(("input", "expected"), [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected
We parametrize two arguments of the test function so that the test
function is called three times. Let's run it::
$ py.test -q
..F
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
input = '6*9', expected = 42
@pytest.mark.parametrize(("input", "expected"), [
("3+5", 8),
("2+4", 6),
("6*9", 42),
])
def test_eval(input, expected):
> assert eval(input) == expected
E assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
As expected only one pair of input/output values fails the simple test function.
Note that there are various ways to mark groups of functions,
see :ref:`mark`.
Generating parameters combinations, depending on command line
----------------------------------------------------------------------------
@ -151,7 +104,7 @@ this is a fully self-contained example which you can run with::
$ py.test test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_scenarios.py ....
@ -163,7 +116,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
$ py.test --collectonly test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
@ -203,6 +156,7 @@ the ``test_db_initialized`` function and also implements a factory that
creates a database object for the actual test invocations::
    # content of conftest.py
    import pytest

    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.fixturenames:
@ -213,7 +167,8 @@ creates a database object for the actual test invocations::
    class DB2:
        "alternative database object"

    def pytest_funcarg__db(request):
    @pytest.fixture
    def db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
@ -225,7 +180,7 @@ Let's first see how it looks like at collection time::
$ py.test test_backends.py --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
@ -240,7 +195,7 @@ And then when we run the test::
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
db = <conftest.DB2 instance at 0x25f79e0>
db = <conftest.DB2 instance at 0x1540440>
def test_db_initialized(db):
# a dummy test
@ -250,7 +205,7 @@ And then when we run the test::
test_backends.py:6: Failed
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``pytest_funcarg__db`` factory has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function instantiated each of the DB values during the setup phase, while ``pytest_generate_tests`` generated the two corresponding calls to ``test_db_initialized`` during the collection phase.
.. regendoc:wipe
@ -295,7 +250,7 @@ argument sets to use for each test function. Let's run it::
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
self = <test_parametrize.TestClass instance at 0x192c170>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x253bd88>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
@ -303,13 +258,13 @@ argument sets to use for each test function. Let's run it::
test_parametrize.py:18: AssertionError
Indirect parametrization with multiple resources
Indirect parametrization with multiple fixtures
--------------------------------------------------------------
Here is a stripped down real-life example of using parametrized
testing for testing serialization, invoking different python interpreters.
We define a ``test_basic_objects`` function which is to be run
with different sets of arguments for its three arguments:
testing for testing serialization of objects between different python
interpreters. We define a ``test_basic_objects`` function which
is to be run with different sets of arguments for its three arguments:
* ``python1``: first python interpreter, run to pickle-dump an object to a file
* ``python2``: second interpreter, run to pickle-load an object from a file


@ -43,7 +43,7 @@ then the test collection looks like this::
$ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
<Module 'check_myapp.py'>
<Class 'CheckMyApp'>
@ -82,7 +82,7 @@ You can always peek at the collection tree without running tests like this::
. $ py.test --collectonly pythoncollection.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 3 items
<Module 'pythoncollection.py'>
<Function 'test_function'>
@ -91,7 +91,7 @@ You can always peek at the collection tree without running tests like this::
<Function 'test_method'>
<Function 'test_anothermethod'>
============================= in 0.00 seconds =============================
============================= in 0.01 seconds =============================
customizing test collection to find all .py files
---------------------------------------------------------
@ -135,7 +135,7 @@ interpreters and will leave out the setup.py file::
$ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
<Module 'pkg/module_py2.py'>
<Function 'test_only_on_python2'>


@ -63,7 +63,7 @@ That's it, we can now run the test::
$ py.test test_remoteinterpreter.py
Traceback (most recent call last):
File "/home/hpk/p/pytest/.tox/regen/bin/py.test", line 9, in <module>
load_entry_point('pytest==2.3.0.dev20', 'console_scripts', 'py.test')()
load_entry_point('pytest==2.3.0.dev27', 'console_scripts', 'py.test')()
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 473, in main
config = _prepareconfig(args, plugins)
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 463, in _prepareconfig
@ -98,7 +98,7 @@ That's it, we can now run the test::
self._conftestpath2mod[conftestpath] = mod = conftestpath.pyimport()
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/py/_path/local.py", line 532, in pyimport
__import__(modname)
File "/tmp/doc-exec-193/conftest.py", line 2, in <module>
File "/tmp/doc-exec-58/conftest.py", line 2, in <module>
from remoteinterpreter import RemoteInterpreter
ImportError: No module named remoteinterpreter
@ -150,7 +150,7 @@ Running it yields::
$ py.test -q test_ssh.py -rs
Traceback (most recent call last):
File "/home/hpk/p/pytest/.tox/regen/bin/py.test", line 9, in <module>
load_entry_point('pytest==2.3.0.dev20', 'console_scripts', 'py.test')()
load_entry_point('pytest==2.3.0.dev27', 'console_scripts', 'py.test')()
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 473, in main
config = _prepareconfig(args, plugins)
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 463, in _prepareconfig
@ -185,7 +185,7 @@ Running it yields::
self._conftestpath2mod[conftestpath] = mod = conftestpath.pyimport()
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/py/_path/local.py", line 532, in pyimport
__import__(modname)
File "/tmp/doc-exec-193/conftest.py", line 2, in <module>
File "/tmp/doc-exec-58/conftest.py", line 2, in <module>
from myapp import MyApp
ImportError: No module named myapp


@ -13,7 +13,7 @@ get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev20
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@ -30,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x1c4db10>
self = <failure_demo.TestFailing object at 0x13a0990>
def test_simple(self):
def f():
@ -40,13 +40,13 @@ get on the terminal - we are working on that):
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x1be6230>()
E + and 43 = <function g at 0x1be62a8>()
E + where 42 = <function f at 0x1326f50>()
E + and 43 = <function g at 0x132f050>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x1c4d7d0>
self = <failure_demo.TestFailing object at 0x13a0e90>
def test_simple_multiline(self):
otherfunc_multi(
@ -66,19 +66,19 @@ get on the terminal - we are working on that):
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x1c4d5d0>
self = <failure_demo.TestFailing object at 0x13a0bd0>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x1be6410>()
E + where 42 = <function f at 0x132f1b8>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c4df50>
self = <failure_demo.TestSpecialisedExplanations object at 0x13a0150>
def test_eq_text(self):
> assert 'spam' == 'eggs'
@ -89,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c47590>
self = <failure_demo.TestSpecialisedExplanations object at 0x139ee10>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@ -102,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c45bd0>
self = <failure_demo.TestSpecialisedExplanations object at 0x139be10>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -115,7 +115,7 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c45ed0>
self = <failure_demo.TestSpecialisedExplanations object at 0x139bfd0>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
@ -132,7 +132,7 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c45a90>
self = <failure_demo.TestSpecialisedExplanations object at 0x139bc10>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
@ -156,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c451d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x139b310>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@ -166,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1c45e50>
self = <failure_demo.TestSpecialisedExplanations object at 0x139b8d0>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@ -178,7 +178,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x139b590>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
@ -191,7 +191,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x139b150>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@ -207,7 +207,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x139ba50>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@ -217,7 +217,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1342910>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@ -226,7 +226,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x1342110>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@ -244,7 +244,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1342d90>
def test_not_in_text_single(self):
text = 'single foo line'
@ -257,7 +257,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x1342410>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@ -270,7 +270,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x1342f10>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@ -289,7 +289,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1342b50>.b
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
@ -299,8 +299,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1342f90>.b
E + where <failure_demo.Foo object at 0x1342f90> = <class 'failure_demo.Foo'>()
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
@ -316,7 +316,7 @@ get on the terminal - we are working on that):
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x1342610>
def _get_b(self):
> raise Exception('Failed to get attrib')
@ -332,15 +332,15 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1342510>.b
E + where <failure_demo.Foo object at 0x1342510> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1339790>.b
E + where <failure_demo.Bar object at 0x1339790> = <class 'failure_demo.Bar'>()
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x13af560>
def test_raises(self):
s = 'qwe'
@ -352,10 +352,10 @@ get on the terminal - we are working on that):
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:834>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x13c0d88>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@ -364,7 +364,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x13a6248>
def test_raise(self):
> raise ValueError("demo error")
@ -373,7 +373,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x13a6f80>
def test_tupleerror(self):
> a,b = [1]
@ -382,7 +382,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x13a1cb0>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@ -395,7 +395,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x13a5ab8>
def test_some_error(self):
> if namenotexi:
@ -423,7 +423,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x13b6908>
def test_complex_error(self):
def f():
@ -452,7 +452,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x139cb00>
def test_z1_unpack_error(self):
l = []
@ -462,7 +462,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x13a7908>
def test_z2_type_error(self):
l = 3
@ -472,19 +472,19 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x139d710>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x13a3af8>('456')
E + where <built-in method startswith of str object at 0x13a3af8> = '123'.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x13a7098>
def test_startswith_nested(self):
def f():
@ -492,15 +492,15 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x13a3af8>('456')
E + where <built-in method startswith of str object at 0x13a3af8> = '123'.startswith
E + where '123' = <function f at 0x13c4230>()
E + and '456' = <function g at 0x13c45f0>()
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x13c2710>
def test_global_func(self):
> assert isinstance(globf(42), float)
@ -510,18 +510,18 @@ get on the terminal - we are working on that):
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x13a1f38>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x13a1f38>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x1398320>
def test_compare(self):
> assert globf(10) < 5
@ -531,7 +531,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x1399128>
def test_try_finally(self):
x = 1
@ -106,7 +106,7 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 0 items
============================= in 0.00 seconds =============================
@ -150,20 +150,20 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-60/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.03 seconds ====================
Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
test_module.py ..
@ -253,7 +253,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
project deps: mylib-1.1
collected 0 items
@ -276,7 +276,7 @@ which will add info only when run with "--v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27 -- /home/hpk/p/pytest/.tox/regen/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
@ -287,7 +287,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 0 items
============================= in 0.00 seconds =============================
@ -319,7 +319,7 @@ Now we can profile which test functions execute the slowest::
$ py.test --durations=3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 3 items
test_some_are_slow.py ...
@ -327,5 +327,79 @@ Now we can profile which test functions execute the slowest::
========================= slowest 3 test durations =========================
0.20s call test_some_are_slow.py::test_funcslow2
0.10s call test_some_are_slow.py::test_funcslow1
0.00s call test_some_are_slow.py::test_funcfast
0.00s setup test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.31 seconds =========================
incremental testing - test steps
---------------------------------------------------
.. regendoc:wipe
Sometimes you may have a testing situation which consists of a series
of test steps. If one step fails it makes no sense to execute further
steps as they are all expected to fail anyway and their tracebacks
add no insight. Here is a simple ``conftest.py`` file which introduces
an ``incremental`` marker to be used on classes::
    # content of conftest.py

    import pytest

    def pytest_runtest_makereport(item, call):
        if hasattr(item.markers, "incremental"):
            if call.excinfo is not None:
                parent = item.parent
                parent._previousfailed = item

    def pytest_runtest_setup(item):
        if hasattr(item.markers, "incremental"):
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)
These two hook implementations work together to abort incremental-marked
tests in a class. Here is a test module example::
    # content of test_step.py

    import pytest

    @pytest.mark.incremental
    class TestUserHandling:
        def test_login(self):
            pass
        def test_modification(self):
            assert 0
        def test_deletion(self):
            pass

    def test_normal():
        pass
If we run this::
$ py.test -rx
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 4 items
test_step.py .Fx.
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x28e1128>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
@ -29,8 +29,7 @@ functions:
functional testing, allowing you to parametrize fixtures or tests according
to configuration and component options.
In addition, pytest continues to support :ref:`xunitsetup` which it
originally introduced in 2005. You can mix both styles, moving
incrementally from classic to new style, if you prefer. You can also
start out from existing :ref:`unittest.TestCase style <unittest.TestCase>`
@ -72,8 +71,7 @@ fixture function. Running the test looks like this::
$ py.test test_smtpsimple.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_smtpsimple.py F
@ -81,7 +79,7 @@ fixture function. Running the test looks like this::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x1ceeb00>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
@ -91,7 +89,7 @@ fixture function. Running the test looks like this::
E assert 0
test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.25 seconds =========================
In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
@ -189,8 +187,7 @@ inspect what is going on and can now run the tests::
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
test_module.py FF
@ -198,18 +195,19 @@ inspect what is going on and can now run the tests::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x1ef15f0>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
> assert "python" in response[1]
E assert 'python' in 'hq.merlinux.eu\nPIPELINING\nSIZE 25000000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
smtp = <smtplib.SMTP instance at 0x1ef15f0>
def test_noop(smtp):
response = smtp.noop()
@ -218,7 +216,7 @@ inspect what is going on and can now run the tests::
E assert 0
test_module.py:11: AssertionError
========================= 2 failed in 0.24 seconds =========================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (session-scoped) ``smtp`` object was passed into the two
@ -260,7 +258,7 @@ using it has executed::
$ py.test -s -q --tb=no
FF
finalizing <smtplib.SMTP instance at 0x284e488>
We see that the ``smtp`` instance is finalized after the two
tests using it have executed. If we had specified ``scope='function'``
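What a scoped fixture with a finalizer boils down to can be sketched in plain Python: create the value once on first request, cache it, and run teardown callbacks at the end. This is an illustrative model only, not pytest's actual implementation; the ``FixtureCache`` name is made up:

```python
# Illustrative sketch (not pytest code): a scoped fixture amounts
# to a cached value plus a finalizer that runs once at teardown.
finalized = []

class FixtureCache:
    def __init__(self):
        self._values = {}
        self._finalizers = []

    def get(self, name, factory, finalizer):
        # create the value only on first request; cache it afterwards
        if name not in self._values:
            value = factory()
            self._values[name] = value
            self._finalizers.append(lambda: finalizer(value))
        return self._values[name]

    def teardown(self):
        # run finalizers in reverse registration order
        while self._finalizers:
            self._finalizers.pop()()

cache = FixtureCache()
smtp_a = cache.get("smtp", lambda: "smtp-connection", finalized.append)
smtp_b = cache.get("smtp", lambda: "smtp-connection", finalized.append)
cache.teardown()
```

Both requests receive the same cached object and the finalizer fires exactly once, mirroring the single "finalizing" line in the output above.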
@ -330,18 +328,19 @@ So let's just do another run::
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x2f69440>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
> assert "python" in response[1]
E assert 'python' in 'hq.merlinux.eu\nPIPELINING\nSIZE 25000000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x2f69440>
def test_noop(smtp):
response = smtp.noop()
@ -352,19 +351,18 @@ So let's just do another run::
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x2fecea8>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "python" in response[1]
> assert 0 # for demo purposes
E assert 0
> assert "merlinux" in response[1]
E assert 'merlinux' in 'mail.python.org\nSIZE 10240000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
test_module.py:5: AssertionError
________________________ test_noop[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x2fecea8>
def test_noop(smtp):
response = smtp.noop()
@ -412,15 +410,13 @@ Here we declare an ``app`` fixture which receives the previously defined
$ py.test -v test_appsetup.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED
========================= 2 passed in 0.09 seconds =========================
Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@ -476,9 +472,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ py.test -v -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 8 items
test_module.py:16: test_0[1] PASSED
@ -490,7 +484,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
test_module.py:20: test_2[1-mod2] PASSED
test_module.py:20: test_2[2-mod2] PASSED
========================= 8 passed in 0.01 seconds =========================
test0 1
test0 2
create mod1
doc/en/genapi.py (new file)
@ -0,0 +1,41 @@
import textwrap
import inspect

class Writer:
    def __init__(self, clsname):
        self.clsname = clsname

    def __enter__(self):
        self.file = open("%s.api" % self.clsname, "w")
        return self

    def __exit__(self, *args):
        self.file.close()
        print "wrote", self.file.name

    def line(self, line):
        self.file.write(line + "\n")

    def docmethod(self, method):
        doc = " ".join(method.__doc__.split())
        indent = " "
        w = textwrap.TextWrapper(initial_indent=indent,
                                 subsequent_indent=indent)
        spec = inspect.getargspec(method)
        del spec.args[0]
        self.line(".. py:method:: " + method.__name__ +
                  inspect.formatargspec(*spec))
        self.line("")
        self.line(w.fill(doc))
        self.line("")

def pytest_funcarg__a(request):
    with Writer("request") as writer:
        writer.docmethod(request.getfuncargvalue)
        writer.docmethod(request.cached_setup)
        writer.docmethod(request.addfinalizer)
        writer.docmethod(request.applymarker)

def test_hello(a):
    pass
@ -23,7 +23,7 @@ Installation options::
To check your installation has installed the correct version::
$ py.test --version
This is py.test version 2.3.0.dev27, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
If you get an error checkout :ref:`installation issues`.
@ -45,7 +45,7 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_sample.py F
@ -122,7 +122,7 @@ run the module by passing its filename::
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x1ac71b8>
def test_two(self):
x = "hello"
@ -157,7 +157,7 @@ before performing the test function call. Let's just run it::
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-914/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
@ -166,7 +166,7 @@ before performing the test function call. Let's just run it::
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-914/test_needsfiles0
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.
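The unique-per-invocation directory behaviour can be approximated with the standard library alone; ``make_test_dir`` below is a hypothetical helper for illustration, not the pytest API:

```python
# Rough stand-in for pytest's per-test temporary directory: each
# call creates a fresh, empty, uniquely named directory.
import os
import tempfile

def make_test_dir(test_name):
    # mkdtemp guarantees a new directory with a unique suffix
    return tempfile.mkdtemp(prefix=test_name + "-")

d1 = make_test_dir("test_needsfiles")
d2 = make_test_dir("test_needsfiles")
```

Two invocations for the same test name yield two distinct directories, just as each test invocation above got its own ``test_needsfiles0``-style path.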
@ -15,22 +15,21 @@ pytest: makes you write better programs
**provides easy no-boilerplate testing**
- makes it :ref:`easy to get started <getstarted>`,
- :ref:`assert with the assert statement`
- helpful :ref:`traceback and failing assertion reporting <tbreportdemo>`
- allows :ref:`print debugging <printdebugging>` and :ref:`the
capturing of standard output during test execution <captures>`
- refined :ref:`usage options <usage>`
**scales from simple unit to complex functional testing**
- (new in 2.3) :ref:`modular parametrizeable fixtures <fixture>`
- :ref:`parametrized test functions <parametrized test functions>`
- :ref:`mark`
- :ref:`skipping`
- can :ref:`distribute tests to multiple CPUs <xdistcpu>` through :ref:`xdist plugin <xdist>`
- can :ref:`continuously re-run failing tests <looponfailing>`
- many :ref:`builtin helpers <pytest helpers>` and :ref:`plugins <plugins>`
- flexible :ref:`Python test discovery`
**integrates many common testing methods**:
@ -42,6 +41,7 @@ pytest: makes you write better programs
- supports domain-specific :ref:`non-python tests`
- supports the generation of testing coverage reports
- `Javascript unit- and functional testing`_
- supports :pep:`8` compliant coding styles in tests
**extensive plugin and customization system**:
@ -4,6 +4,8 @@
.. _`parametrized test functions`:
.. _`parametrize`:
.. _`parametrize-basics`:
Parametrizing fixtures and test functions
==========================================================================
@ -19,6 +21,7 @@ pytest supports test parametrization in several well-integrated ways:
* `pytest_generate_tests`_ enables implementing your own custom
dynamic parametrization scheme or extensions.
.. _parametrizemark:
.. _`@pytest.mark.parametrize`:
@ -50,7 +53,7 @@ which will thus run three times::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 3 items
test_expectation.py ..F
@ -132,8 +135,8 @@ Let's also run with a stringinput that will lead to a failing test::
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E assert <built-in method isalpha of str object at 0x2b58af1ef030>()
E + where <built-in method isalpha of str object at 0x2b58af1ef030> = '!'.isalpha
test_strings.py:3: AssertionError
@ -146,7 +149,7 @@ listlist::
$ py.test -q -rs test_strings.py
s
========================= short test summary info ==========================
SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:943: got empty parameter set, function test_valid_string at /tmp/doc-exec-26/test_strings.py:1
For further examples, you might want to look at :ref:`more
parametrization examples <paramexamples>`.
@ -68,7 +68,7 @@ there is no need to activate it. Here is an initial list of known plugins:
.. _`django`: https://www.djangoproject.com/
* `pytest-django <http://pypi.python.org/pypi/pytest-django>`_: write tests
for `django`_ apps, using pytest integration.
* `pytest-capturelog <http://pypi.python.org/pypi/pytest-capturelog>`_:
@ -132,7 +132,7 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 6 items
xfail_demo.py xxxxxx
@ -29,7 +29,7 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 1 items
test_tmpdir.py F
@ -37,7 +37,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('/tmp/pytest-915/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
@ -48,7 +48,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.02 seconds =========================
.. _`base temporary directory`:
@ -81,14 +81,13 @@ fixture definition::
assert 0, self.db # fail for demo purposes
The ``@pytest.mark.usefixtures("db_class")`` class-decorator makes sure that
the pytest fixture function ``db_class`` is called for each test method.
Due to the deliberately failing assert statements, we can take a look at
the ``self.db`` values in the traceback::
$ py.test test_unittest_db.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev22
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev27
collected 2 items
test_unittest_db.py FF
@ -101,7 +100,7 @@ in the traceback::
def test_method1(self):
assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x135dea8>
E AssertionError: <conftest.DummyDB instance at 0x18ff9e0>
test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
@ -110,14 +109,14 @@ in the traceback::
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x135dea8>
E AssertionError: <conftest.DummyDB instance at 0x18ff9e0>
test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.04 seconds =========================
========================= 2 failed in 0.02 seconds =========================
This default pytest traceback shows that, indeed, the two test methods
see the same ``self.db`` attribute instance which was our intention
when writing the class-scoped fixture function.
share the same ``self.db`` instance which was our intention
when writing the class-scoped fixture function above.
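For reference, a class-scoped ``db_class`` fixture matching the traceback above could look like the following sketch. The ``DummyDB`` name appears in the output; the fixture body is an assumption, not the document's exact ``conftest.py``:

```python
# Hypothetical conftest.py sketch -- DummyDB matches the traceback
# above, the fixture body is an assumption.
import pytest

class DummyDB:
    """Stand-in for a real database connection object."""

@pytest.fixture(scope="class")
def db_class(request):
    # scope="class": invoked once per test class; attaching the instance
    # to request.cls makes it available as ``self.db`` in every method
    request.cls.db = DummyDB()
```

Because the scope is ``"class"``, both failing test methods report the same ``DummyDB`` instance address in their tracebacks.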
autouse fixtures and accessing other fixtures
@ -128,9 +127,10 @@ for a given test, you may sometimes want to have fixtures that are
automatically used in a given context. For this, you can flag
fixture functions with ``@pytest.fixture(autouse=True)`` and define
the fixture function in the context where you want it used. Let's look
at an example which makes all test methods of a ``TestCase`` class
execute in a clean temporary directory, using a ``initdir`` fixture
which itself uses the pytest builtin ``tmpdir`` fixture::
at an ``initdir`` fixture which makes all test methods of a ``TestCase`` class
execute in a temporary directory with a pre-initialized ``samplefile.ini``.
Our ``initdir`` fixture itself uses the pytest builtin :ref:`tmpdir <tmpdir>`
fixture to help with creating a temporary directory::
# content of test_unittest_cleandir.py
import pytest
@ -146,12 +146,10 @@ which itself uses the pytest builtin ``tmpdir`` fixture::
s = open("samplefile.ini").read()
assert "testdata" in s
The ``initdir`` fixture function will be used for all methods of the
class where it is defined. This is basically just a shortcut for
using a ``@pytest.mark.usefixtures("initdir")`` on the class like in
the previous example. Note, that the ``initdir`` fixture function
accepts a :ref:`tmpdir <tmpdir>` argument, referencing a pytest
builtin fixture.
Due to the ``autouse`` flag the ``initdir`` fixture function will be
used for all methods of the class where it is defined. This is a
shortcut for using a ``@pytest.mark.usefixtures("initdir")`` on the
class like in the previous example.
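The setup work ``initdir`` performs can be factored into a plain helper, which also makes it checkable outside a pytest run. The decorator wiring below follows the excerpt above; the ``write_sample_ini`` helper name is an assumption:

```python
# Sketch of the autouse fixture with its setup logic factored out;
# the helper name ``write_sample_ini`` is an assumption.
import os
import pytest

def write_sample_ini(dirpath):
    # create the pre-initialized file that the test methods read
    path = os.path.join(dirpath, "samplefile.ini")
    with open(path, "w") as f:
        f.write("# testdata")
    return path

@pytest.fixture(autouse=True)
def initdir(tmpdir):
    tmpdir.chdir()  # change into the fresh per-test temporary directory
    write_sample_ini(str(tmpdir))
```

Note that ``tmpdir.chdir()`` changes the process working directory and does not restore it afterwards; that matches the minimal example here, but a restoring variant may be preferable in larger suites.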
Running this test module ...::
@ -163,7 +161,7 @@ was executed ahead of the ``test_method``.
.. note::
``unittest.TestCase`` methods cannot directly receive fixture or
``unittest.TestCase`` methods cannot directly receive fixture
function arguments, as implementing that would likely interfere with
the ability to run general ``unittest.TestCase`` test suites.
Given enough demand, attempts might be made, though. If

View File

@ -45,6 +45,7 @@ deps=
basepython=python
changedir=doc/en
deps=:pypi:sphinx
:pypi:PyYAML
pytest
commands=
@ -55,6 +56,7 @@ commands=
basepython=python
changedir=doc/en
deps=:pypi:sphinx
:pypi:PyYAML
pytest
commands=
make regen
@ -90,3 +92,4 @@ python_files=test_*.py *_test.py
python_classes=Test Acceptance
python_functions=test
pep8ignore = E401 E225 E261 E128 E124 E302
norecursedirs = .tox doc/ja