replace py.test module references with pytest

The only remaining 'py.test' references are:
 * those referring to the 'py.test' executable
 * those in code explicitly testing py.test/pytest module compatibility
 * those in old CHANGES documentation
 * those in documentation generated based on external data
 * those in seemingly unfinished & unmaintained Japanese documentation

Minor stylistic changes and typo corrections were also made to the
documentation alongside several of the applied py.test --> pytest content changes.
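
For illustration, the typical module-reference change looks like this
(sketch only, not a file from this commit):

    # before: module access through the 'py' namespace
    import py
    py.test.skip("reason")

    # after: direct import of the dedicated 'pytest' module
    import pytest
    pytest.skip("reason")

Both spellings refer to the same implementation; only the import path changes.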
Jurko Gospodnetić 2014-01-18 12:31:33 +01:00
parent 83620ced2e
commit 9fb2079458
66 changed files with 716 additions and 711 deletions


@ -122,8 +122,8 @@ customize test function collection
-------------------------------------------------------
tags: feature
- introduce py.test.mark.nocollect for not considering a function for
test collection at all. maybe also introduce a py.test.mark.test to
- introduce pytest.mark.nocollect for not considering a function for
test collection at all. maybe also introduce a pytest.mark.test to
explicitly mark a function to become a tested one. Look up JUnit ways
of tagging tests.
@ -135,18 +135,18 @@ in addition to the imperative pytest.importorskip also introduce
a pytest.mark.importorskip so that the test count is more correct.
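For reference, the imperative form already exists; the marker form is only
the wish stated above. Sketch (the marker spelling is hypothetical):

    import pytest

    # existing imperative form: skips, but the test is still collected
    docutils = pytest.importorskip("docutils")

    # wished-for declarative form (does not exist yet):
    # @pytest.mark.importorskip("docutils")
    # def test_render():
    #     ...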
introduce py.test.mark.platform
introduce pytest.mark.platform
-------------------------------------------------------
tags: feature
Introduce nice-to-spell platform-skipping, examples:
@py.test.mark.platform("python3")
@py.test.mark.platform("not python3")
@py.test.mark.platform("win32 and not python3")
@py.test.mark.platform("darwin")
@py.test.mark.platform("not (jython and win32)")
@py.test.mark.platform("not (jython and win32)", xfail=True)
@pytest.mark.platform("python3")
@pytest.mark.platform("not python3")
@pytest.mark.platform("win32 and not python3")
@pytest.mark.platform("darwin")
@pytest.mark.platform("not (jython and win32)")
@pytest.mark.platform("not (jython and win32)", xfail=True)
etc. Idea is to allow Python expressions which can operate
on common spellings for operating systems and python
@ -181,8 +181,8 @@ tags: feature
allow to name conftest.py files (in sub directories) that should
be imported early, as to include command line options.
improve central py.test ini file
----------------------------------
improve central pytest ini file
-------------------------------
tags: feature
introduce more declarative configuration options:
@ -196,7 +196,7 @@ new documentation
----------------------------------
tags: feature
- logo py.test
- logo pytest
- examples for unittest or functional testing
- resource management for functional testing
- patterns: page object
@ -205,17 +205,17 @@ have imported module mismatch honour relative paths
--------------------------------------------------------
tags: bug
With 1.1.1 py.test fails at least on windows if an import
With 1.1.1 pytest fails at least on windows if an import
is relative and compared against an absolute conftest.py
path. Normalize.
consider globals: py.test.ensuretemp and config
consider globals: pytest.ensuretemp and config
--------------------------------------------------------------
tags: experimental-wish
consider deprecating py.test.ensuretemp and py.test.config
to further reduce py.test globality. Also consider
having py.test.config and ensuretemp coming from
consider deprecating pytest.ensuretemp and pytest.config
to further reduce pytest globality. Also consider
having pytest.config and ensuretemp coming from
a plugin rather than being there from the start.
@ -223,7 +223,7 @@ consider pytest_addsyspath hook
-----------------------------------------
tags: wish
py.test could call a new pytest_addsyspath() in order to systematically
pytest could call a new pytest_addsyspath() in order to systematically
allow manipulation of sys.path and to inhibit it via --no-addsyspath
in order to more easily run against installed packages.
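A conftest sketch for the proposed hook (entirely hypothetical -- neither
the hook nor the option exists yet):

    # conftest.py
    import os, sys

    def pytest_addsyspath():
        # prepend the in-tree sources so tests import them first
        sys.path.insert(0, os.path.join(os.path.dirname(__file__), "src"))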
@ -232,11 +232,11 @@ and pytest_configure.
deprecate global py.test.config usage
deprecate global pytest.config usage
----------------------------------------------------------------
tags: feature
py.test.ensuretemp and py.test.config are probably the last
pytest.ensuretemp and pytest.config are probably the last
objects containing global state. Often using them is not
necessary. This is about trying to get rid of them, i.e.
deprecating them and checking with PyPy's usages as well


@ -5,7 +5,7 @@ Changelog: http://pytest.org/latest/changelog.html
Issues: https://bitbucket.org/hpk42/pytest/issues?status=open
The ``py.test`` testing tool makes it easy to write small tests, yet
The ``pytest`` testing tool makes it easy to write small tests, yet
scales to support complex functional testing. It provides
- `auto-discovery
@ -14,7 +14,7 @@ scales to support complex functional testing. It provides
- detailed info on failing `assert statements <http://pytest.org/latest/assert.html>`_ (no need to remember ``self.assert*`` names)
- `modular fixtures <http://pytest.org/latest/fixture.html>`_ for
managing small or parametrized long-lived test resources.
- multi-paradigm support: you can use ``py.test`` to run test suites based
- multi-paradigm support: you can use ``pytest`` to run test suites based
on `unittest <http://pytest.org/latest/unittest.html>`_ (or trial),
`nose <http://pytest.org/latest/nose.html>`_
- single-source compatibility to Python2.4 all the way up to Python3.3,


@ -15,7 +15,7 @@ import py
from _pytest.assertion import util
# py.test caches rewritten pycs in __pycache__.
# pytest caches rewritten pycs in __pycache__.
if hasattr(imp, "get_tag"):
PYTEST_TAG = imp.get_tag() + "-PYTEST"
else:
@ -102,7 +102,7 @@ class AssertionRewritingHook(object):
# the most magical part of the process: load the source, rewrite the
# asserts, and load the rewritten source. We also cache the rewritten
# module code in a special pyc. We must be aware of the possibility of
# concurrent py.test processes rewriting and loading pycs. To avoid
# concurrent pytest processes rewriting and loading pycs. To avoid
# tricky race conditions, we maintain the following invariant: The
# cached pyc is always a complete, valid pyc. Operations on it must be
# atomic. POSIX's atomic rename comes in handy.
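# An illustrative sketch of that invariant (the helper below is made up
# for this note, it is not part of the module):
#
#     import os
#
#     def _write_pyc_atomically(pyc_path, data, pid):
#         tmp = "%s.%s" % (pyc_path, pid)  # process-unique temp file
#         with open(tmp, "wb") as f:       # write the complete pyc first
#             f.write(data)
#         os.rename(tmp, pyc_path)         # atomic rename: readers never
#                                          # observe a partial pyc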
@ -290,7 +290,7 @@ def _make_rewritten_pyc(state, fn, pyc, co):
os.rename(proc_pyc, pyc)
def _read_pyc(source, pyc):
"""Possibly read a py.test pyc containing rewritten code.
"""Possibly read a pytest pyc containing rewritten code.
Return rewritten code if successful or None if not.
"""


@ -1,6 +1,7 @@
""" command line options, ini-file and conftest.py processing. """
import py
import pytest
import sys, os
from _pytest import hookspec # the extension point definitions
from _pytest.core import PluginManager
@ -22,7 +23,7 @@ class cmdline: # compatibility namespace
main = staticmethod(main)
class UsageError(Exception):
""" error in py.test usage or invocation"""
""" error in pytest usage or invocation"""
_preinit = []
@ -225,7 +226,7 @@ class Argument:
help = attrs['help']
if '%default' in help:
py.std.warnings.warn(
'py.test now uses argparse. "%default" should be'
'pytest now uses argparse. "%default" should be'
' changed to "%(default)s" ',
FutureWarning,
stacklevel=3)
@ -448,7 +449,7 @@ class DropShorterLongHelpFormatter(py.std.argparse.HelpFormatter):
class Conftest(object):
""" the single place for accessing values and interacting
towards conftest modules from py.test objects.
towards conftest modules from pytest objects.
"""
def __init__(self, onimport=None, confcutdir=None):
self._path2confmods = {}
@ -808,7 +809,7 @@ class Config(object):
def getvalueorskip(self, name, path=None):
""" (deprecated) return getvalue(name) or call
py.test.skip if no value exists. """
pytest.skip if no value exists. """
__tracebackhide__ = True
try:
val = self.getvalue(name, path)
@ -816,7 +817,7 @@ class Config(object):
raise KeyError(name)
return val
except KeyError:
py.test.skip("no %r value found" %(name,))
pytest.skip("no %r value found" %(name,))
def exists(path, ignore=EnvironmentError):
try:


@ -4,6 +4,7 @@ pytest PluginManager, basic initialization and tracing.
import sys
import inspect
import py
import pytest
assert py.__version__.split(".")[:2] >= ['1', '4'], ("installation problem: "
"%s is too old, remove or upgrade 'py'" % (py.__version__))
@ -136,7 +137,7 @@ class PluginManager(object):
def skipifmissing(self, name):
if not self.hasplugin(name):
py.test.skip("plugin %r is missing" % name)
pytest.skip("plugin %r is missing" % name)
def hasplugin(self, name):
return bool(self.getplugin(name))
@ -220,9 +221,9 @@ class PluginManager(object):
raise
except:
e = py.std.sys.exc_info()[1]
if not hasattr(py.test, 'skip'):
if not hasattr(pytest, 'skip'):
raise
elif not isinstance(e, py.test.skip.Exception):
elif not isinstance(e, pytest.skip.Exception):
raise
self._hints.append("skipped plugin %r: %s" %((modname, e.msg)))
else:


@ -1,4 +1,4 @@
""" generate a single-file self-contained version of py.test """
""" generate a single-file self-contained version of pytest """
import py
import sys
@ -55,7 +55,7 @@ def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption("--genscript", action="store", default=None,
dest="genscript", metavar="path",
help="create standalone py.test script at given target path.")
help="create standalone pytest script at given target path.")
def pytest_cmdline_main(config):
genscript = config.getvalue("genscript")
@ -70,7 +70,7 @@ def pytest_cmdline_main(config):
"or below due to 'argparse' dependency. Use python2.6 "
"to generate a python2.5/6 compatible script", red=True)
script = generate_script(
'import py; raise SystemExit(py.test.cmdline.main())',
'import pytest; raise SystemExit(pytest.cmdline.main())',
deps,
)
genscript = py.path.local(genscript)


@ -46,7 +46,7 @@ def pytest_unconfigure(config):
def pytest_cmdline_main(config):
if config.option.version:
p = py.path.local(pytest.__file__)
sys.stderr.write("This is py.test version %s, imported from %s\n" %
sys.stderr.write("This is pytest version %s, imported from %s\n" %
(pytest.__version__, p))
plugininfo = getpluginversioninfo(config)
if plugininfo:


@ -11,8 +11,8 @@ def pytest_addhooks(pluginmanager):
def pytest_namespace():
"""return dict of name->object to be made globally available in
the py.test/pytest namespace. This hook is called before command
line options are parsed.
the pytest namespace. This hook is called before command line options
are parsed.
"""
def pytest_cmdline_parse(pluginmanager, args):


@ -63,7 +63,7 @@ def pytest_namespace():
return dict(collect=collect)
def pytest_configure(config):
py.test.config = config # compatibility
pytest.config = config # compatibility
if config.option.exitfirst:
config.option.maxfail = 1


@ -157,10 +157,10 @@ def pytest_configure(config):
class MarkGenerator:
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``py.test.mark`` singleton instance. Example::
a ``pytest.mark`` singleton instance. Example::
import py
@py.test.mark.slowtest
@pytest.mark.slowtest
def test_function():
pass
@ -198,8 +198,8 @@ class MarkDecorator:
:ref:`retrieved by hooks as item keywords <excontrolskip>`.
MarkDecorator instances are often created like this::
mark1 = py.test.mark.NAME # simple MarkDecorator
mark2 = py.test.mark.NAME(name1=value) # parametrized MarkDecorator
mark1 = pytest.mark.NAME # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions::
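@mark2  # illustrative application (the docstring's own example is elided by this hunk)
def test_function():
    pass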


@ -8,7 +8,7 @@ def pytest_runtest_makereport(__multicall__, item, call):
SkipTest = getattr(sys.modules.get('nose', None), 'SkipTest', None)
if SkipTest:
if call.excinfo and call.excinfo.errisinstance(SkipTest):
# let's substitute the excinfo with a py.test.skip one
# let's substitute the excinfo with a pytest.skip one
call2 = call.__class__(lambda:
pytest.skip(str(call.excinfo.value)), call.when)
call.excinfo = call2.excinfo


@ -1,4 +1,4 @@
""" (disabled by default) support for testing py.test and py.test plugins. """
""" (disabled by default) support for testing pytest and pytest plugins. """
import py, pytest
import sys, os
@ -137,7 +137,7 @@ class HookRecorder:
break
print_("NONAMEMATCH", name, "with", call)
else:
py.test.fail("could not find %r check %r" % (name, check))
pytest.fail("could not find %r check %r" % (name, check))
def popcall(self, name):
__tracebackhide__ = True
@ -147,7 +147,7 @@ class HookRecorder:
return call
lines = ["could not find call %r, in:" % (name,)]
lines.extend([" %s" % str(x) for x in self.calls])
py.test.fail("\n".join(lines))
pytest.fail("\n".join(lines))
def getcall(self, name):
l = self.getcalls(name)
@ -472,7 +472,7 @@ class TmpTestdir:
# because on windows the script is e.g. a py.test.exe
return (py.std.sys.executable, _pytest_fullpath,) # noqa
else:
py.test.skip("cannot run %r with --no-tools-on-path" % scriptname)
pytest.skip("cannot run %r with --no-tools-on-path" % scriptname)
def runpython(self, script, prepend=True):
if prepend:
@ -509,14 +509,14 @@ class TmpTestdir:
def spawn_pytest(self, string, expect_timeout=10.0):
if self.request.config.getvalue("notoolsonpath"):
py.test.skip("--no-tools-on-path prevents running pexpect-spawn tests")
pytest.skip("--no-tools-on-path prevents running pexpect-spawn tests")
basetemp = self.tmpdir.mkdir("pexpect")
invoke = " ".join(map(str, self._getpybinargs("py.test")))
cmd = "%s --basetemp=%s %s" % (invoke, basetemp, string)
return self.spawn(cmd, expect_timeout=expect_timeout)
def spawn(self, cmd, expect_timeout=10.0):
pexpect = py.test.importorskip("pexpect", "3.0")
pexpect = pytest.importorskip("pexpect", "3.0")
if hasattr(sys, 'pypy_version_info') and '64' in py.std.platform.machine():
pytest.skip("pypy-64 bit not supported")
if sys.platform == "darwin":
@ -688,4 +688,4 @@ class LineMatcher:
show(" and:", repr(nextline))
extralines.append(nextline)
else:
py.test.fail("remains unmatched: %r, see stderr" % (line,))
pytest.fail("remains unmatched: %r, see stderr" % (line,))


@ -485,7 +485,7 @@ class Module(pytest.File, PyCollector):
fin = getattr(self.obj, 'teardown_module', None)
if fin is not None:
#XXX: nose compat hack, move to nose plugin
# if it takes a positional arg, it's probably a py.test style one
# if it takes a positional arg, it's probably a pytest style one
# so we pass the current module object
if inspect.getargspec(fin)[0]:
finalizer = lambda: fin(self.obj)
@ -1011,7 +1011,7 @@ class RaisesContext(object):
return issubclass(self.excinfo.type, self.ExpectedException)
#
# the basic py.test Function item
# the basic pytest Function item
#
class Function(FunctionMixin, pytest.Item, FuncargnamesCompatAttr):
@ -1225,7 +1225,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
on all function invocations.
:arg marker: a :py:class:`_pytest.mark.MarkDecorator` object
created by a call to ``py.test.mark.NAME(...)``.
created by a call to ``pytest.mark.NAME(...)``.
"""
try:
self.node.keywords[marker.markname] = marker


@ -1,6 +1,8 @@
""" basic collect and runtest protocol implementations """
import py, sys
import py
import pytest
import sys
from time import time
from py._code.code import TerminalRepr
@ -196,7 +198,7 @@ def pytest_runtest_makereport(item, call):
if not isinstance(excinfo, py.code.ExceptionInfo):
outcome = "failed"
longrepr = excinfo
elif excinfo.errisinstance(py.test.skip.Exception):
elif excinfo.errisinstance(pytest.skip.Exception):
outcome = "skipped"
r = excinfo._getreprcrash()
longrepr = (str(r.path), r.lineno, r.message)
@ -418,7 +420,7 @@ class Skipped(OutcomeException):
__module__ = 'builtins'
class Failed(OutcomeException):
""" raised from an explicit call to py.test.fail() """
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
@ -438,7 +440,7 @@ exit.Exception = Exit
def skip(msg=""):
""" skip an executing test with the given message. Note: it's usually
better to use the py.test.mark.skipif marker to declare a test to be
better to use the pytest.mark.skipif marker to declare a test to be
skipped under certain conditions like mismatching platforms or
dependencies. See the pytest_skipping plugin for details.
"""


@ -37,7 +37,7 @@ def pytest_namespace():
return dict(xfail=xfail)
class XFailed(pytest.fail.Exception):
""" raised from an explicit call to py.test.xfail() """
""" raised from an explicit call to pytest.xfail() """
def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
@ -129,7 +129,7 @@ def pytest_runtest_setup(item):
return
evalskip = MarkEvaluator(item, 'skipif')
if evalskip.istrue():
py.test.skip(evalskip.getexplanation())
pytest.skip(evalskip.getexplanation())
item._evalxfail = MarkEvaluator(item, 'xfail')
check_xfail_no_run(item)
@ -141,7 +141,7 @@ def check_xfail_no_run(item):
evalxfail = item._evalxfail
if evalxfail.istrue():
if not evalxfail.get('run', True):
py.test.xfail("[NOTRUN] " + evalxfail.getexplanation())
pytest.xfail("[NOTRUN] " + evalxfail.getexplanation())
def pytest_runtest_makereport(__multicall__, item, call):
if not isinstance(item, pytest.Function):
@ -150,16 +150,16 @@ def pytest_runtest_makereport(__multicall__, item, call):
if hasattr(item, '_unexpectedsuccess'):
rep = __multicall__.execute()
if rep.when == "call":
# we need to translate into how py.test encodes xpass
# we need to translate into how pytest encodes xpass
rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
rep.outcome = "failed"
return rep
if not (call.excinfo and
call.excinfo.errisinstance(py.test.xfail.Exception)):
call.excinfo.errisinstance(pytest.xfail.Exception)):
evalxfail = getattr(item, '_evalxfail', None)
if not evalxfail:
return
if call.excinfo and call.excinfo.errisinstance(py.test.xfail.Exception):
if call.excinfo and call.excinfo.errisinstance(pytest.xfail.Exception):
if not item.config.getvalue("runxfail"):
rep = __multicall__.execute()
rep.wasxfail = "reason: " + call.excinfo.value.msg


@ -259,7 +259,7 @@ class TerminalReporter:
if hasattr(sys, 'pypy_version_info'):
verinfo = ".".join(map(str, sys.pypy_version_info[:3]))
msg += "[pypy-%s-%s]" % (verinfo, sys.pypy_version_info[3])
msg += " -- pytest-%s" % (py.test.__version__)
msg += " -- pytest-%s" % (pytest.__version__)
if self.verbosity > 0 or self.config.option.debug or \
getattr(self.config.option, 'pastebin', None):
msg += " -- " + str(sys.executable)


@ -2,10 +2,10 @@ import sys
if __name__ == '__main__':
import cProfile
import py
import pytest
import pstats
script = sys.argv[1] if len(sys.argv) > 1 else "empty.py"
stats = cProfile.run('py.test.cmdline.main([%r])' % script, 'prof')
stats = cProfile.run('pytest.cmdline.main([%r])' % script, 'prof')
p = pstats.Stats("prof")
p.strip_dirs()
p.sort_stats('cumulative')


@ -1,7 +1,7 @@
.. _apiref:
py.test reference documentation
pytest reference documentation
================================================
.. toctree::


@ -10,7 +10,7 @@ The writing and reporting of assertions in tests
Asserting with the ``assert`` statement
---------------------------------------------------------
``py.test`` allows you to use the standard python ``assert`` for verifying
``pytest`` allows you to use the standard python ``assert`` for verifying
expectations and values in Python tests. For example, you can write the
following::
@ -28,21 +28,21 @@ you will see the return value of the function call::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
test_assert1.py F
================================= FAILURES =================================
______________________________ test_function _______________________________
def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()
test_assert1.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
py.test has support for showing the values of the most common subexpressions
``pytest`` has support for showing the values of the most common subexpressions
including calls, attributes, comparisons, and binary and unary
operators. (See :ref:`tbreportdemo`). This allows you to use the
idiomatic python constructs without boilerplate code while not losing
@ -102,7 +102,7 @@ Making use of context-sensitive comparisons
.. versionadded:: 2.0
py.test has rich support for providing context-sensitive information
``pytest`` has rich support for providing context-sensitive information
when it encounters comparisons. For example::
# content of test_assert2.py
@ -118,12 +118,12 @@ if you run this module::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
test_assert2.py F
================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@ -133,7 +133,7 @@ if you run this module::
E '1'
E Extra items in the right set:
E '5'
test_assert2.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
@ -175,21 +175,21 @@ now, given this test module::
f2 = Foo(2)
assert f1 == f2
you can run the test module and get the custom output defined in
the conftest file::
$ py.test -q test_foocompare.py
F
================================= FAILURES =================================
_______________________________ test_compare _______________________________
def test_compare():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E assert Comparing Foo instances:
E vals: 1 != 2
test_foocompare.py:8: AssertionError
1 failed in 0.01 seconds
@ -205,33 +205,33 @@ Advanced assertion introspection
Reporting details about a failing assertion is achieved either by rewriting
assert statements before they are run or re-evaluating the assert expression and
recording the intermediate values. Which technique is used depends on the
location of the assert, py.test's configuration, and Python version being used
to run py.test. Note that for assert statements with a manually provided
location of the assert, ``pytest`` configuration, and Python version being used
to run ``pytest``. Note that for assert statements with a manually provided
message, i.e. ``assert expr, message``, no assertion introspection takes place
and the manually provided message will be rendered in tracebacks.
By default, if the Python version is greater than or equal to 2.6, py.test
By default, if the Python version is greater than or equal to 2.6, ``pytest``
rewrites assert statements in test modules. Rewritten assert statements put
introspection information into the assertion failure message. py.test only
introspection information into the assertion failure message. ``pytest`` only
rewrites test modules directly discovered by its test collection process, so
asserts in supporting modules which are not themselves test modules will not be
rewritten.
.. note::
py.test rewrites test modules on import. It does this by using an import hook
to write new pyc files. Most of the time this works transparently. However,
if you are messing with import yourself, the import hook may interfere. If
this is the case, simply use ``--assert=reinterp`` or
``pytest`` rewrites test modules on import. It does this by using an import
hook to write new pyc files. Most of the time this works transparently.
However, if you are messing with import yourself, the import hook may
interfere. If this is the case, simply use ``--assert=reinterp`` or
``--assert=plain``. Additionally, rewriting will fail silently if it cannot
write new pycs, i.e. in a read-only filesystem or a zipfile.
If an assert statement has not been rewritten or the Python version is less than
2.6, py.test falls back on assert reinterpretation. In assert reinterpretation,
py.test walks the frame of the function containing the assert statement to
discover sub-expression results of the failing assert statement. You can force
py.test to always use assertion reinterpretation by passing the
``--assert=reinterp`` option.
2.6, ``pytest`` falls back on assert reinterpretation. In assert
reinterpretation, ``pytest`` walks the frame of the function containing the
assert statement to discover sub-expression results of the failing assert
statement. You can force ``pytest`` to always use assertion reinterpretation by
passing the ``--assert=reinterp`` option.
Assert reinterpretation has a caveat not present with assert rewriting: If
evaluating the assert expression has side effects you may get a warning that the
@ -250,7 +250,7 @@ easy to rewrite the assertion and avoid any trouble::
All assert introspection can be turned off by passing ``--assert=plain``.
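For quick reference, the two opt-out flags mentioned above::

    py.test --assert=reinterp test_module.py   # force reinterpretation
    py.test --assert=plain    test_module.py   # disable introspection entirely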
For further information, Benjamin Peterson wrote up `Behind the scenes of py.test's new assertion rewriting <http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_.
For further information, Benjamin Peterson wrote up `Behind the scenes of pytest's new assertion rewriting <http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_.
.. versionadded:: 2.1
Add assert rewriting as an alternate introspection technique.


@ -4,7 +4,7 @@
Setting up bash completion
==========================
When using bash as your shell, ``py.test`` can use argcomplete
When using bash as your shell, ``pytest`` can use argcomplete
(https://argcomplete.readthedocs.org/) for auto-completion.
For this ``argcomplete`` needs to be installed **and** enabled.
@ -16,11 +16,11 @@ For global activation of all argcomplete enabled python applications run::
sudo activate-global-python-argcomplete
For permanent (but not global) ``py.test`` activation, use::
For permanent (but not global) ``pytest`` activation, use::
register-python-argcomplete py.test >> ~/.bashrc
For one-time activation of argcomplete for ``py.test`` only, use::
For one-time activation of argcomplete for ``pytest`` only, use::
eval "$(register-python-argcomplete py.test)"


@ -23,7 +23,7 @@ a test.
Setting capturing methods or disabling capturing
-------------------------------------------------
There are two ways in which ``py.test`` can perform capturing:
There are two ways in which ``pytest`` can perform capturing:
* file descriptor (FD) level capturing (default): All writes going to the
operating system file descriptors 1 and 2 will be captured.
@ -49,7 +49,7 @@ One primary benefit of the default capturing of stdout/stderr output
is that you can use print statements for debugging::
# content of test_module.py
def setup_function(function):
print ("setting up %s" % function)
@ -66,16 +66,16 @@ of the failing function and hide the other one::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 2 items
test_module.py .F
================================= FAILURES =================================
________________________________ test_func2 ________________________________
def test_func2():
> assert False
E assert False
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x1eb37d0>
@ -105,7 +105,7 @@ and capturing will be continued. After the test
function finishes the original streams will
be restored. Using ``capsys`` this way frees your
test from having to care about setting/resetting
output streams and also interacts well with py.test's
output streams and also interacts well with pytest's
own per-test capturing.
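A minimal sketch of a test using ``capsys`` (names are illustrative)::

    def test_print_capture(capsys):
        print ("hello")
        out, err = capsys.readouterr()   # snapshot of captured out/err
        assert out == "hello\n"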
If you want to capture on ``fd`` level you can use


@ -17,7 +17,7 @@ which were registered by installed plugins.
How test configuration is read from configuration INI-files
-------------------------------------------------------------
py.test searches for the first matching ini-style configuration file
``pytest`` searches for the first matching ini-style configuration file
in the directories of the command line arguments and the directories above.
It looks for file basenames in this order::
@ -26,8 +26,8 @@ It looks for file basenames in this order::
setup.cfg
Searching stops when the first ``[pytest]`` section is found in any of
these files. There is no merging of configuration values from multiple
files. Example::
py.test path/to/testdir
@ -41,7 +41,7 @@ will look in the following dirs for a config file::
path/to/setup.cfg
... # up until root of filesystem
If an argument is provided to a py.test run, the current working directory
If an argument is provided to a ``pytest`` run, the current working directory
is used to start the search.
.. _`how to change command line options defaults`:
@ -51,7 +51,7 @@ How to change command line options defaults
------------------------------------------------
It can be tedious to type the same series of command line options
every time you use py.test. For example, if you always want to see
every time you use ``pytest``. For example, if you always want to see
detailed info on skipped and xfailed tests, as well as have terser "dot"
progress output, you can write it into a configuration file::
@ -60,7 +60,7 @@ progress output, you can write it into a configuration file::
[pytest]
addopts = -rsxX -q
From now on, running ``py.test`` will add the specified options.
From now on, running ``pytest`` will add the specified options.
Builtin configuration file options
----------------------------------------------
@ -105,7 +105,7 @@ Builtin configuration file options
[pytest]
norecursedirs = .svn _build tmp*
This would tell py.test to not look into typical subversion or
This would tell ``pytest`` to not look into typical subversion or
sphinx-build directories or into any ``tmp`` prefixed directory.
.. confval:: python_files
@ -122,7 +122,7 @@ Builtin configuration file options
One or more name prefixes determining which test functions
and methods are considered tests. Note that this
has no effect on methods that live on a ``unittest.TestCase``
derived class.
See :ref:`change naming conventions` for examples.


@ -1,5 +1,5 @@
=================================================
Feedback and contribute to py.test
Feedback and contribute to pytest
=================================================
.. toctree::


@ -1,4 +1,4 @@
from py.test import raises
from pytest import raises
import py
def otherfunc(a,b):


@ -30,22 +30,22 @@ You can then restrict a test run to only run tests marked with ``webtest``::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
=================== 2 tests deselected by "-m 'webtest'" ===================
================== 1 passed, 2 deselected in 0.01 seconds ==================
Or the inverse, running all tests except the webtest ones::
$ py.test -v -m "not webtest"
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:6: test_something_quick PASSED
test_server.py:8: test_another PASSED
================= 1 tests deselected by "-m 'not webtest'" =================
================== 2 passed, 1 deselected in 0.01 seconds ==================
@ -63,9 +63,9 @@ select tests based on their names::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
====================== 2 tests deselected by '-khttp' ======================
================== 1 passed, 2 deselected in 0.01 seconds ==================
@ -75,10 +75,10 @@ And you can also run all tests except the ones that match the keyword::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:6: test_something_quick PASSED
test_server.py:8: test_another PASSED
================= 1 tests deselected by '-knot send_http' ==================
================== 2 passed, 1 deselected in 0.01 seconds ==================
@ -88,10 +88,10 @@ Or to select "http" and "quick" tests::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
test_server.py:6: test_something_quick PASSED
================= 1 tests deselected by '-khttp or quick' ==================
================== 2 passed, 1 deselected in 0.01 seconds ==================
@ -124,19 +124,19 @@ You can ask which markers exist for your test suite - the list includes our just
$ py.test --markers
@pytest.mark.webtest: mark a test as a webtest.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
For an example on how to add and work with markers from a plugin, see
:ref:`adding a custom marker from a plugin`.
@ -150,8 +150,8 @@ For an example on how to add and work with markers from a plugin, see
* asking for existing markers via ``py.test --markers`` gives good output
* typos in function markers are treated as an error if you use
the ``--strict`` option. Later versions of py.test are probably
going to treat non-registered markers as an error.
the ``--strict`` option. Future versions of ``pytest`` are probably
going to start treating non-registered markers as errors at some point.
.. _`scoped-marking`:
@ -268,40 +268,40 @@ the test needs::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
test_someenv.py s
======================== 1 skipped in 0.01 seconds =========================
and here is one that specifies exactly the environment needed::
$ py.test -E stage1
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
test_someenv.py .
========================= 1 passed in 0.01 seconds =========================
The ``--markers`` option always gives you a list of available markers::
$ py.test --markers
@pytest.mark.env(name): mark test to run only on named environment
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
Reading markers which were set from multiple places
----------------------------------------------------
@ -337,7 +337,7 @@ test function. From a conftest file we can read it like this::
Let's run this without capturing output and see what we get::
$ py.test -q -s
glob args=('function',) kwargs={'x': 3}
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
@ -352,7 +352,7 @@ marking platform specific tests with pytest
Consider you have a test suite which marks tests for particular platforms,
namely ``pytest.mark.osx``, ``pytest.mark.win32`` etc. and you
also have tests that run on all platforms and have no specific
marker. If you now want to have a way to only run the tests
for your particular platform, you could use the following plugin::
# content of conftest.py
@ -397,11 +397,11 @@ then you will see two test skipped and two executed tests as expected::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /tmp/doc-exec-63/conftest.py:12: cannot run on platform linux2
=================== 2 passed, 2 skipped in 0.01 seconds ====================
Note that if you specify a platform via the marker-command line option like this::
@ -410,13 +410,13 @@ Note that if you specify a platform via the marker-command line option like this
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 4 items
test_plat.py .
=================== 3 tests deselected by "-m 'linux2'" ====================
================== 1 passed, 3 deselected in 0.01 seconds ==================
then the unmarked tests will not be run. It is thus a way to restrict the run to the specific tests.
Automatically adding markers based on test names
--------------------------------------------------------
@ -435,7 +435,7 @@ at this test module::
def test_interface_complex():
assert 0
def test_event_simple():
assert 0
@ -446,7 +446,7 @@ We want to dynamically define two markers and can do it in a
``conftest.py`` plugin::
# content of conftest.py
import pytest
def pytest_collection_modifyitems(items):
for item in items:
@ -461,9 +461,9 @@ We can now use the ``-m`` option to select one set::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 4 items
test_module.py FF
================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
@ -482,9 +482,9 @@ or to select both "event" and "interface" tests::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 4 items
test_module.py FFF
================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple


@ -2,7 +2,8 @@
module containing parametrized tests testing cross-python
serialization via the pickle module.
"""
import py, pytest
import py
import pytest
pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7', 'python2.8']
@pytest.fixture(params=pythonlist)
@ -18,7 +19,7 @@ class Python:
def __init__(self, version, picklefile):
self.pythonpath = py.path.local.sysfind(version)
if not self.pythonpath:
py.test.skip("%r not found" %(version,))
pytest.skip("%r not found" %(version,))
self.picklefile = picklefile
def dumps(self, obj):
dumpfile = self.picklefile.dirpath("dump.py")


@ -6,7 +6,7 @@ Parametrizing tests
.. currentmodule:: _pytest.python
py.test allows you to easily parametrize test functions.
``pytest`` allows you to easily parametrize test functions.
For basic docs, see :ref:`parametrize-basics`.
In the following we provide some examples using
@ -55,13 +55,13 @@ let's run the full monty::
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
param1 = 4
def test_compute(param1):
> assert param1 < 4
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.01 seconds
@ -108,9 +108,9 @@ this is a fully self-contained example which you can run with::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 4 items
test_scenarios.py ....
========================= 4 passed in 0.01 seconds =========================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
@ -127,7 +127,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function 'test_demo2[basic]'>
<Function 'test_demo1[advanced]'>
<Function 'test_demo2[advanced]'>
============================= in 0.01 seconds =============================
Note that we told ``metafunc.parametrize()`` that your scenario values
@ -187,7 +187,7 @@ Let's first see how it looks like at collection time::
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
<Function 'test_db_initialized[d2]'>
============================= in 0.00 seconds =============================
And then when we run the test::
@ -196,15 +196,15 @@ And then when we run the test::
.F
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
db = <conftest.DB2 instance at 0x12d4128>
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
1 failed, 1 passed in 0.01 seconds
@ -252,13 +252,13 @@ argument sets to use for each test function. Let's run it::
F..
================================= FAILURES =================================
________________________ TestClass.test_equals[2-1] ________________________
self = <test_parametrize.TestClass instance at 0x14493f8>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
E assert 1 == 2
test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.01 seconds
@ -290,7 +290,7 @@ Indirect parametrization of optional implementations/imports
If you want to compare the outcomes of several implementations of a given
API, you can write test functions that receive the already imported implementations
and get skipped in case the implementation is not importable/available. Let's
say we have a "base" implementation and the other (possibly optimized) ones
need to provide similar results::
# content of conftest.py
@ -331,24 +331,24 @@ If you run this with reporting for skips enabled::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-65/conftest.py:10: could not import 'opt2'
=================== 1 passed, 1 skipped in 0.01 seconds ====================
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
- the fixture functions in the ``conftest.py`` file are "session-scoped" because we
don't need to import more than once
- if you have multiple test functions and a skipped import, you will see
the ``[1]`` count increasing in the report
- you can put :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>` style
parametrization on the test functions to parametrize input/output
values as well.
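
A ``conftest.py`` implementing this pattern might look roughly like this
(the actual file is elided from this hunk; fixture and module names follow
the example above)::

    import pytest

    @pytest.fixture(scope="session")
    def basemod(request):
        return pytest.importorskip("base")

    @pytest.fixture(scope="session", params=["opt1", "opt2"])
    def optmod(request):
        return pytest.importorskip(request.param)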


@ -10,7 +10,7 @@ You can set the :confval:`norecursedirs` option in an ini-file, for example your
[pytest]
norecursedirs = .svn _build tmp*
This would tell py.test to not recurse into typical subversion or sphinx-build directories or into any ``tmp`` prefixed directory.
This would tell ``pytest`` to not recurse into typical subversion or sphinx-build directories or into any ``tmp`` prefixed directory.
.. _`change naming conventions`:
@ -28,7 +28,7 @@ the :confval:`python_files`, :confval:`python_classes` and
python_classes=Check
python_functions=check
This would make py.test look for ``check_`` prefixes in
This would make ``pytest`` look for ``check_`` prefixes in
Python filenames, ``Check`` prefixes in classes and ``check`` prefixes
in functions and classes. For example, if we have::
@ -50,11 +50,11 @@ then the test collection looks like this::
<Instance '()'>
<Function 'check_simple'>
<Function 'check_complex'>
============================= in 0.01 seconds =============================
.. note::
the ``python_functions`` and ``python_classes`` options have no effect
for ``unittest.TestCase`` test discovery because pytest delegates
detection of test case methods to unittest code.
@ -62,7 +62,7 @@ then the test collection looks like this::
Interpreting cmdline arguments as Python packages
-----------------------------------------------------
You can use the ``--pyargs`` option to make py.test try
You can use the ``--pyargs`` option to make ``pytest`` try
interpreting arguments as python package names, deriving
their file system path and then running the test. For
example if you have unittest2 installed you can type::
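$ py.test --pyargs unittest2.test.test_skipping -q  # illustrative invocation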
@ -96,7 +96,7 @@ You can always peek at the collection tree without running tests like this::
<Instance '()'>
<Function 'test_method'>
<Function 'test_anothermethod'>
============================= in 0.01 seconds =============================
customizing test collection to find all .py files
@ -104,7 +104,7 @@ customizing test collection to find all .py files
.. regendoc:wipe
You can easily instruct py.test to discover tests from every python file::
You can easily instruct ``pytest`` to discover tests from every python file::
# content of pytest.ini
@ -112,8 +112,8 @@ You can easily instruct py.test to discover tests from every python file::
python_files = *.py
However, many projects will have a ``setup.py`` which they don't want to be imported. Moreover, there may be files only importable by a specific python version.
For such cases you can dynamically define files to be ignored by listing
them in a ``conftest.py`` file::
# content of conftest.py
import sys
@ -136,16 +136,16 @@ and a setup.py dummy file like this::
# content of setup.py
0/0 # will raise exception if imported
then a pytest run on python2 will find the one test when run with a python2
interpreter and will leave out the setup.py file::
$ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
<Module 'pkg/module_py2.py'>
<Function 'test_only_on_python2'>
============================= in 0.01 seconds =============================
If you run with a Python3 interpreter the module added through the conftest.py file will not be considered for test collection.


@ -1,11 +1,11 @@
.. _`tbreportdemo`:
Demo of Python failure reports with py.test
Demo of Python failure reports with pytest
==================================================
Here is a nice run of several tens of failures
and how py.test presents things (unfortunately
and how ``pytest`` presents things (unfortunately
not showing the nice colors here in the HTML that you
get on the terminal - we are working on that):
@ -15,82 +15,82 @@ get on the terminal - we are working on that):
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
================================= FAILURES =================================
____________________________ test_generative[0] ____________________________
param1 = 3, param2 = 6
def test_generative(param1, param2):
> assert param1 * 2 < param2
E assert (3 * 2) < 6
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x12d9250>
def test_simple(self):
def f():
return 42
def g():
return 43
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x1278b90>()
E + and 43 = <function g at 0x1278c08>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x1287210>
def test_simple_multiline(self):
otherfunc_multi(
42,
> 6*9)
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
def otherfunc_multi(a,b):
> assert (a ==
b)
E assert 42 == 54
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x12c6e10>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x12861b8>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1290c50>
def test_eq_text(self):
> assert 'spam' == 'eggs'
E assert 'spam' == 'eggs'
E - spam
E + eggs
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x12877d0>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
E assert 'foo 1 bar' == 'foo 2 bar'
@ -98,12 +98,12 @@ get on the terminal - we are working on that):
E ? ^
E + foo 2 bar
E ? ^
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x12de1d0>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -111,12 +111,12 @@ get on the terminal - we are working on that):
E - spam
E + eggs
E bar
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x143b5d0>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
@ -128,12 +128,12 @@ get on the terminal - we are working on that):
E ? ^
E + 1111111111b222222222
E ? ^
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x1287810>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
@ -152,34 +152,34 @@ get on the terminal - we are working on that):
E 2
E 2
E 2
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x12900d0>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
E assert [0, 1, 2] == [0, 1, 3]
E At index 2 diff: 2 != 3
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x12c62d0>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
b = [0]*100 + [2] + [3]*100
> assert a == b
E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E At index 100 diff: 1 != 2
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x12deb50>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@ -190,12 +190,12 @@ get on the terminal - we are working on that):
E {'c': 0}
E Right contains more items:
E {'d': 0}
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x128b4d0>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
E assert set([0, 10, 11, 12]) == set([0, 20, 21])
@ -206,31 +206,31 @@ get on the terminal - we are working on that):
E Extra items in the right set:
E 20
E 21
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x12c6b10>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
E assert [1, 2] == [1, 2, 3]
E Right contains more items, first extra item: 3
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x143b650>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x128be10>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
> assert 'foo' not in text
@ -243,12 +243,12 @@ get on the terminal - we are working on that):
E ? +++
E and a
E tail
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x12d9fd0>
def test_not_in_text_single(self):
text = 'single foo line'
> assert 'foo' not in text
@ -256,36 +256,36 @@ get on the terminal - we are working on that):
E 'foo' is contained here:
E single foo line
E ? +++
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x143bdd0>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
> assert 'foo' not in text
E assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E 'foo' is contained here:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x12c6390>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
> assert 'f'*70 not in text
E assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:94: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
class Foo(object):
b = 1
@ -293,10 +293,10 @@ get on the terminal - we are working on that):
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1287790>.b
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
class Foo(object):
b = 1
@ -304,10 +304,10 @@ get on the terminal - we are working on that):
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x12c6bd0>.b
E + where <failure_demo.Foo object at 0x12c6bd0> = <class 'failure_demo.Foo'>()
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
class Foo(object):
def _get_b(self):
@ -315,19 +315,19 @@ get on the terminal - we are working on that):
b = property(_get_b)
i = Foo()
> assert i.b == 2
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x12daed0>
def _get_b(self):
> raise Exception('Failed to get attrib')
E Exception: Failed to get attrib
failure_demo.py:113: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
class Foo(object):
b = 1
@ -339,74 +339,74 @@ get on the terminal - we are working on that):
E + where <failure_demo.Foo object at 0x128bcd0> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x128b050>.b
E + where <failure_demo.Bar object at 0x128b050> = <class 'failure_demo.Bar'>()
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x145c7e8>
def test_raises(self):
s = 'qwe'
> raises(TypeError, "int(s)")
failure_demo.py:133:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:983>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x1455f38>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
E Failed: DID NOT RAISE
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x1453998>
def test_raise(self):
> raise ValueError("demo error")
E ValueError: demo error
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x1465560>
def test_tupleerror(self):
> a,b = [1]
E ValueError: need more than 1 value to unpack
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x1465758>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
print ("l is %r" % l)
> a,b = l.pop()
E TypeError: 'int' object is not iterable
failure_demo.py:147: TypeError
----------------------------- Captured stdout ------------------------------
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x1468ab8>
def test_some_error(self):
> if namenotexi:
E NameError: global name 'namenotexi' is not defined
failure_demo.py:150: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123'
@ -415,80 +415,80 @@ get on the terminal - we are working on that):
py.builtin.exec_(code, module.__dict__)
py.std.sys.modules[name] = module
> module.foo()
failure_demo.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E assert 1 == 0
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1442908>
def test_complex_error(self):
def f():
return 44
def g():
return 43
> somefunc(f(), g())
failure_demo.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
x = 44, y = 43
def somefunc(x,y):
> otherfunc(x,y)
failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
def otherfunc(a,b):
> assert a==b
E assert 44 == 43
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x145bab8>
def test_z1_unpack_error(self):
l = []
> a,b = l
E ValueError: need more than 0 values to unpack
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x1444368>
def test_z2_type_error(self):
l = 3
> a,b = l
E TypeError: 'int' object is not iterable
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x146e4d0>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x12dfa58>('456')
E + where <built-in method startswith of str object at 0x12dfa58> = '123'.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x143ed40>
def test_startswith_nested(self):
def f():
return "123"
@ -499,48 +499,48 @@ get on the terminal - we are working on that):
E + where <built-in method startswith of str object at 0x12dfa58> = '123'.startswith
E + where '123' = <function f at 0x1286500>()
E + and '456' = <function g at 0x126db18>()
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x1453b90>
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x146b128>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x146b128>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x1469368>
def test_compare(self):
> assert globf(10) < 5
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x12c4098>
def test_try_finally(self):
x = 1
try:
> assert x == 0
E assert 1 == 0
failure_demo.py:210: AssertionError
======================== 39 failed in 0.20 seconds =========================

View File

@ -41,9 +41,9 @@ Let's run this without supplying our new option::
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
cmdopt = 'type1'
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
@ -51,7 +51,7 @@ Let's run this without supplying our new option::
print ("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
first
@ -63,9 +63,9 @@ And now with supplying a command line option::
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
cmdopt = 'type2'
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
@ -73,7 +73,7 @@ And now with supplying a command line option::
print ("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
second
@ -110,7 +110,7 @@ directory with the above conftest.py::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 0 items
============================= in 0.00 seconds =============================
.. _`excontrolskip`:
@ -154,11 +154,11 @@ and when running it will see a skipped "slow" test::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-68/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ====================
Or run it including the ``slow`` marked test::
@ -167,9 +167,9 @@ Or run it including the ``slow`` marked test::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 2 items
test_module.py ..
========================= 2 passed in 0.01 seconds =========================
Writing well integrated assertion helpers
@ -193,7 +193,7 @@ Example::
def test_something():
checkconfig(42)
The ``__tracebackhide__`` setting influences py.test showing
The ``__tracebackhide__`` setting influences ``pytest`` showing
of tracebacks: the ``checkconfig`` function will not be shown
unless the ``--fulltrace`` command line option is specified.
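For reference, the helper module these snippets come from can be reconstructed
from the surrounding text and the failure output below; a minimal sketch::

    # content of test_checkconfig.py
    import pytest

    def checkconfig(x):
        __tracebackhide__ = True  # hide this helper frame in failure tracebacks
        if not hasattr(x, "config"):  # (condition is an assumption; only the message below is shown)
            pytest.fail("not configured: %s" % x)

    def test_something():
        checkconfig(42)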
Let's run our little function::
@ -202,15 +202,15 @@ Let's run our little function::
F
================================= FAILURES =================================
______________________________ test_something ______________________________
def test_something():
> checkconfig(42)
E Failed: not configured: 42
test_checkconfig.py:8: Failed
1 failed in 0.01 seconds
Detect if running from within a py.test run
Detect if running from within a pytest run
--------------------------------------------------------------
.. regendoc:wipe
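The body of this section is unchanged and thus not part of the diff; a common
implementation (a sketch, assuming the flag-based approach) sets a marker from
``conftest.py`` hooks which application code can then check::

    # content of conftest.py
    import sys

    def pytest_configure(config):
        sys._called_from_test = True

    def pytest_unconfigure(config):
        del sys._called_from_test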
@ -245,10 +245,10 @@ Adding info to test report header
.. regendoc:wipe
It's easy to present extra information in a py.test run::
It's easy to present extra information in a ``pytest`` run::
# content of conftest.py
def pytest_report_header(config):
return "project deps: mylib-1.1"
@ -259,7 +259,7 @@ which will add the string to the test header accordingly::
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
project deps: mylib-1.1
collected 0 items
============================= in 0.00 seconds =============================
.. regendoc:wipe
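The hook producing the verbose-only header lines shown below is omitted by the
diff; it looks roughly like this sketch::

    # content of conftest.py
    def pytest_report_header(config):
        # only contribute extra lines in verbose mode
        if config.option.verbose > 0:
            return ["info1: did you know that ...", "did you?"]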
@ -283,7 +283,7 @@ which will add info only when run with "--v"::
info1: did you know that ...
did you?
collecting ... collected 0 items
============================= in 0.00 seconds =============================
and nothing when run plainly::
@ -292,7 +292,7 @@ and nothing when run plainly::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 0 items
============================= in 0.00 seconds =============================
profiling test duration
@ -324,9 +324,9 @@ Now we can profile which test functions execute the slowest::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 3 items
test_some_are_slow.py ...
========================= slowest 3 test durations =========================
0.20s call test_some_are_slow.py::test_funcslow2
0.10s call test_some_are_slow.py::test_funcslow1
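The test module behind this output is not part of the diff; a sketch consistent
with the durations shown::

    # content of test_some_are_slow.py
    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)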
@ -385,18 +385,18 @@ If we run this::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 4 items
test_step.py .Fx.
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x2758c20>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
@ -448,19 +448,19 @@ the ``db`` fixture::
# content of b/test_error.py
def test_root(db): # no db here, will error out
pass
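For context, the ``db`` fixture requested above is defined in a ``conftest.py``
inside the ``a`` directory only, which is why ``b/test_error.py`` errors out;
a sketch of the unchanged file (the ``session`` scope is an assumption)::

    # content of a/conftest.py
    import pytest

    class DB:
        pass

    @pytest.fixture(scope="session")
    def db():
        return DB()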
We can run this::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 7 items
test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /tmp/doc-exec-68/b/test_error.py, line 1
@ -468,35 +468,35 @@ We can run this::
fixture 'db' not found
available fixtures: recwarn, capfd, pytestconfig, capsys, tmpdir, monkeypatch
use 'py.test --fixtures [testpath]' for help on them.
/tmp/doc-exec-68/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x131fc20>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
db = <conftest.DB instance at 0x1328878>
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x1328878>
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
db = <conftest.DB instance at 0x1328878>
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x1328878>
a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
@ -512,7 +512,7 @@ post-process test reports / failures
---------------------------------------
If you want to postprocess test reports and need access to the executing
environment you can implement a hook that gets called when the test
"report" object is about to be created. Here we write out all failing
test calls and also access a fixture (if it was used by the test) in
case you want to query/look at it during your post processing. In our
@ -529,7 +529,7 @@ case we just write some informations out to a ``failures`` file::
rep = __multicall__.execute()
# we only look at actual failing test calls, not setup/teardown
if rep.when == "call" and rep.failed:
mode = "a" if os.path.exists("failures") else "w"
with open("failures", mode) as f:
# let's also access a fixture for the fun of it
@ -537,7 +537,7 @@ case we just write some informations out to a ``failures`` file::
extra = " (%s)" % item.funcargs["tmpdir"]
else:
extra = ""
f.write(rep.nodeid + extra + "\n")
return rep
@ -545,35 +545,35 @@ if you then have failing tests::
# content of test_module.py
def test_fail1(tmpdir):
assert 0
def test_fail2():
assert 0
and run them::
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 2 items
test_module.py FF
================================= FAILURES =================================
________________________________ test_fail1 ________________________________
tmpdir = local('/tmp/pytest-42/test_fail10')
def test_fail1(tmpdir):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:4: AssertionError
========================= 2 failed in 0.01 seconds =========================
@ -623,58 +623,58 @@ here is a little example implemented via a local plugin::
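The plugin code itself is unchanged and therefore not shown in this hunk;
reconstructed from the output below, it resembles this sketch::

    # content of conftest.py
    import pytest

    @pytest.mark.tryfirst
    def pytest_runtest_makereport(item, call, __multicall__):
        # execute all other hooks to obtain the report object
        rep = __multicall__.execute()
        # store the report for each phase ("setup", "call", "teardown")
        setattr(item, "rep_" + rep.when, rep)
        return rep

    @pytest.fixture
    def something(request):
        def fin():
            # request.node is an "item" because of the default "function" scope
            if request.node.rep_setup.failed:
                print ("setting up a test failed!", request.node.nodeid)
            elif request.node.rep_setup.passed:
                if request.node.rep_call.failed:
                    print ("executing test failed", request.node.nodeid)
        request.addfinalizer(fin)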
if you then have failing tests::
# content of test_module.py
import pytest
@pytest.fixture
def other():
assert 0
def test_setup_fails(something, other):
pass
def test_call_fails(something):
assert 0
def test_fail2():
assert 0
and run it::
$ py.test -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.01 seconds =====================

View File

@ -4,37 +4,37 @@ Some Issues and Questions
.. note::
This FAQ exists mostly for historic reasons. Check out
`pytest Q&A at Stackoverflow <http://stackoverflow.com/search?q=pytest>`_
for many questions and answers related to pytest and/or use
:ref:`contact channels` to get help.
On naming, nosetests, licensing and magic
------------------------------------------------
How does py.test relate to nose and unittest?
How does pytest relate to nose and unittest?
+++++++++++++++++++++++++++++++++++++++++++++++++
py.test and nose_ share basic philosophy when it comes
``pytest`` and nose_ share basic philosophy when it comes
to running and writing Python tests. In fact, you can run many tests
written for nose with py.test. nose_ was originally created
as a clone of ``py.test`` when py.test was in the ``0.8`` release
written for nose with ``pytest``. nose_ was originally created
as a clone of ``pytest`` when ``pytest`` was in the ``0.8`` release
cycle. Note that starting with pytest-2.0 support for running unittest
test suites has been greatly improved.
how does py.test relate to twisted's trial?
how does pytest relate to twisted's trial?
++++++++++++++++++++++++++++++++++++++++++++++
Since some time py.test has builtin support for supporting tests
For some time ``pytest`` has had builtin support for running tests
written using trial. It does not itself start a reactor, however,
and does not handle Deferreds returned from a test in pytest style.
If you are using trial's unittest.TestCase chances are that you can
just run your tests even if you return Deferreds. In addition,
there also is a dedicated `pytest-twisted
<http://pypi.python.org/pypi/pytest-twisted>`_ plugin which allows you to
return deferreds from pytest-style tests, allowing you to use
:ref:`fixtures` and other features.
how does py.test work with Django?
how does pytest work with Django?
++++++++++++++++++++++++++++++++++++++++++++++
In 2012, some work is going into the `pytest-django plugin <http://pypi.python.org/pypi/pytest-django>`_. It substitutes the usage of Django's
@ -44,36 +44,36 @@ are not available from Django directly.
.. _features: features.html
What's this "magic" with py.test? (historic notes)
What's this "magic" with pytest? (historic notes)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Around 2007 (version ``0.8``) some people thought that py.test
Around 2007 (version ``0.8``) some people thought that ``pytest``
was using too much "magic". It had been part of the `pylib`_ which
contains a lot of unrelated python library code. Around 2010 there
was a major cleanup refactoring, which removed unused or deprecated code
and resulted in the new ``pytest`` PyPI package which strictly contains
only test-related code. This relese also brought a complete pluginification
only test-related code. This release also brought a complete pluginification
such that the core is around 300 lines of code and everything else is
implemented in plugins. Thus ``pytest`` today is a small, universally runnable
and customizable testing framework for Python. Note, however, that
``pytest`` uses metaprogramming techniques and reading its source is
thus likely not something for Python beginners.
A second "magic" issue was the assert statement debugging feature.
Nowadays, py.test explicitely rewrites assert statements in test modules
A second "magic" issue was the assert statement debugging feature.
Nowadays, ``pytest`` explicitely rewrites assert statements in test modules
in order to provide more useful :ref:`assert feedback <assertfeedback>`.
This completely avoids previous issues of confusing assertion-reporting.
It also means that you can use Python's ``-O`` optimization without losing
assertions in test modules.
py.test contains a second mostly obsolete assert debugging technique,
``pytest`` contains a second, mostly obsolete, assert debugging technique,
invoked via ``--assert=reinterpret``, activated by default on
Python-2.5: When an ``assert`` statement fails, py.test re-interprets
Python-2.5: When an ``assert`` statement fails, ``pytest`` re-interprets
the expression part to show intermediate values. This technique suffers
from a caveat that the rewriting does not: If your expression has side
effects (better to avoid them anyway!) the intermediate values may not
be the same, confusing the reinterpreter and obfuscating the initial
error (this is also explained at the command line if it happens).
You can also turn off all assertion interaction using the
``--assertmode=off`` option.
@ -85,7 +85,7 @@ You can also turn off all assertion interaction using the
Why a ``py.test`` instead of a ``pytest`` command?
++++++++++++++++++++++++++++++++++++++++++++++++++
Some of the reasons are historic, others are practical. ``py.test``
Some of the reasons are historic, others are practical. ``pytest``
used to be part of the ``py`` package which provided several developer
utilities, all starting with ``py.<TAB>``, thus providing nice
TAB-completion. If
@ -140,16 +140,16 @@ However, with pytest-2.3 you can use the :ref:`@pytest.fixture` decorator
and specify ``params`` so that all tests depending on the factory-created
resource will run multiple times with different parameters.
You can also use the `pytest_generate_tests`_ hook to
implement the `parametrization scheme of your choice`_.
.. _`pytest_generate_tests`: test/funcargs.html#parametrizing-tests
.. _`parametrization scheme of your choice`: http://tetamap.wordpress.com/2009/05/13/parametrizing-python-tests-generalized/
py.test interaction with other packages
pytest interaction with other packages
---------------------------------------------------
Issues with py.test, multiprocess and setuptools?
Issues with pytest, multiprocess and setuptools?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
On windows the multiprocess package will instantiate sub processes

View File

@ -15,7 +15,7 @@ pytest fixtures: explicit, modular, scalable
The `purpose of test fixtures`_ is to provide a fixed baseline
upon which tests can reliably and repeatedly execute. pytest fixtures
offer dramatic improvements over the classic xUnit style of setup/teardown
functions:
* fixtures have explicit names and are activated by declaring their use
@ -50,7 +50,7 @@ Fixtures as Function arguments
-----------------------------------------
Test functions can receive fixture objects by naming them as an input
argument. For each argument name, a fixture function with that name provides
the fixture object. Fixture functions are registered by marking them with
:py:func:`@pytest.fixture <_pytest.python.fixture>`. Let's look at a simple
self-contained test module containing a fixture and a test function
@ -70,36 +70,36 @@ using it::
assert "merlinux" in msg
assert 0 # for demo purposes
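Only the tail of the module appears in the hunk above; the complete
``test_smtpsimple.py``, reconstructed from the traceback below, is roughly::

    # content of test_smtpsimple.py
    import pytest
    import smtplib

    @pytest.fixture
    def smtp():
        # the server name matches the assertions in the test below
        return smtplib.SMTP("merlinux.eu")

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert "merlinux" in msg
        assert 0  # for demo purposes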
Here, the ``test_ehlo`` needs the ``smtp`` fixture value. pytest
will discover and call the :py:func:`@pytest.fixture <_pytest.python.fixture>`
marked ``smtp`` fixture function. Running the test looks like this::
$ py.test test_smtpsimple.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
test_smtpsimple.py F
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x2ae3469203f8>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert "merlinux" in msg
> assert 0 # for demo purposes
E assert 0
test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.21 seconds =========================
In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
function. The test function fails on our deliberate ``assert 0``. Here is
an exact protocol of how py.test comes to call the test function this way:
the exact protocol used by ``pytest`` to call the test function this way:
1. pytest :ref:`finds <test discovery>` the ``test_ehlo`` because
of the ``test_`` prefix. The test function needs a function argument
@ -125,7 +125,7 @@ with a list of available function arguments.
In versions prior to 2.3 there was no ``@pytest.fixture`` marker
and you had to use a magic ``pytest_funcarg__NAME`` prefix
for the fixture factory. This remains and will remain supported
but is no longer advertised as the primary means of declaring fixture
functions.
@ -153,15 +153,15 @@ Sharing a fixture across tests in a module (or class/session)
.. regendoc:wipe
Fixtures requiring network access depend on connectivity and are
usually time-expensive to create. Extending the previous example, we
can add a ``scope='module'`` parameter to the
:py:func:`@pytest.fixture <_pytest.python.fixture>` invocation
to cause the decorated ``smtp`` fixture function to only be invoked once
per test module. Multiple test functions in a test module will thus
each receive the same ``smtp`` fixture instance. The next example puts
the fixture function into a separate ``conftest.py`` file so
that tests from multiple test modules in the directory can
access the fixture function::
# content of conftest.py
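# (sketch: the unchanged fixture definition, omitted by this diff hunk)
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp():
    return smtplib.SMTP("merlinux.eu")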
@ -180,7 +180,7 @@ function (in or below the directory where ``conftest.py`` is located)::
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
assert 0 # for demo purposes
@ -196,38 +196,38 @@ inspect what is going on and can now run the tests::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 2 items
test_module.py FF
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x1af5440>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
smtp = <smtplib.SMTP instance at 0x1af5440>
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
========================= 2 failed in 0.17 seconds =========================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp`` object was passed into the two
test functions because pytest shows the incoming argument values in the
traceback. As a result, the two test functions using ``smtp`` run as
quickly as a single one because they reuse the same instance.
@ -236,15 +236,15 @@ instance, you can simply declare it::
@pytest.fixture(scope="session")
def smtp(...):
# the returned fixture value will be shared for
# all tests needing it
.. _`finalization`:
fixture finalization / executing teardown code
-------------------------------------------------------------
pytest supports execution of fixture specific finalization code
when the fixture goes out of scope. By accepting a ``request`` object
into your fixture function you can call its ``request.addfinalizer`` one
or multiple times::
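The fixture body itself is unchanged and omitted here; a sketch consistent
with the ``teardown smtp`` output below::

    # content of conftest.py
    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp(request):
        smtp = smtplib.SMTP("merlinux.eu")
        def fin():
            print ("teardown smtp")
            smtp.close()
        request.addfinalizer(fin)  # called when the fixture goes out of scope
        return smtp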
@ -270,13 +270,13 @@ Let's execute it::
$ py.test -s -q --tb=no
FFteardown smtp
2 failed in 0.17 seconds
We see that the ``smtp`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
function with ``scope='function'`` then fixture setup and cleanup would
occur around each single test. In either case the test
module itself does not need to change or know about these details
of fixture setup.
@ -288,7 +288,7 @@ Fixtures can introspect the requesting test context
Fixture function can accept the :py:class:`request <FixtureRequest>` object
to introspect the "requesting" test function, class or module context.
Further extending the previous ``smtp`` fixture example, let's
read an optional server URL from the test module which uses our fixture::
# content of conftest.py
@ -299,12 +299,12 @@ read an optional server URL from the test module which uses our fixture::
def smtp(request):
server = getattr(request.module, "smtpserver", "merlinux.eu")
smtp = smtplib.SMTP(server)
def fin():
print ("finalizing %s (%s)" % (smtp, server))
smtp.close()
return smtp
We use the ``request.module`` attribute to optionally obtain an
``smtpserver`` attribute from the test module. If we just execute
@ -316,9 +316,9 @@ again, nothing much has changed::
Let's quickly create another test module that actually sets the
server URL in its module namespace::
# content of test_anothersmtp.py
smtpserver = "mail.python.org" # will be read by smtp fixture
def test_showhelo(smtp):
@ -346,7 +346,7 @@ Fixture functions can be parametrized in which case they will be called
multiple times, each time executing the set of dependent tests, i.e. the
tests that depend on this fixture. Test functions usually do not need
to be aware of their re-running. Fixture parametrization helps to
write exhaustive functional tests for components which themselves can be
configured in multiple ways.
Extending the previous example, we can flag the fixture to create two
@ -358,7 +358,7 @@ through the special :py:class:`request <FixtureRequest>` object::
import pytest
import smtplib
@pytest.fixture(scope="module",
@pytest.fixture(scope="module",
params=["merlinux.eu", "mail.python.org"])
def smtp(request):
smtp = smtplib.SMTP(request.param)
@ -368,67 +368,67 @@ through the special :py:class:`request <FixtureRequest>` object::
request.addfinalizer(fin)
return smtp
The main change is the declaration of ``params`` with
:py:func:`@pytest.fixture <_pytest.python.fixture>`, a list of values
for each of which the fixture function will execute and can access
a value via ``request.param``. No test function code needs to change.
So let's just do another run::
$ py.test -q test_module.py
FFFF
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x100ac20>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
assert "merlinux" in response[1]
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x100ac20>
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x105b638>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
> assert "merlinux" in response[1]
E assert 'merlinux' in 'mail.python.org\nSIZE 25600000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
test_module.py:5: AssertionError
----------------------------- Captured stdout ------------------------------
finalizing <smtplib.SMTP instance at 0x100ac20>
________________________ test_noop[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x105b638>
def test_noop(smtp):
response = smtp.noop()
assert response[0] == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
4 failed in 6.58 seconds
We see that our two test functions each ran twice, against the different
``smtp`` instances. Note also that with the ``mail.python.org``
connection the second test fails in ``test_ehlo`` because a
different server string is expected than what arrived.
@ -445,7 +445,7 @@ and instantiate an object ``app`` where we stick the already defined
``smtp`` resource into it::
# content of test_appsetup.py
import pytest
class App:
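    # (sketch: the unchanged remainder of the example, omitted by this diff hunk)
    def __init__(self, smtp):
        self.smtp = smtp

@pytest.fixture(scope="module")
def app(smtp):
    return App(smtp)

def test_smtp_exists(app):
    assert app.smtp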
@ -466,20 +466,20 @@ Here we declare an ``app`` fixture which receives the previously defined
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED
========================= 2 passed in 5.95 seconds =========================
Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
need for the ``app`` fixture to be aware of the ``smtp`` parametrization
as pytest will fully analyse the fixture dependency graph.
Note that the ``app`` fixture has a scope of ``module`` and uses a
module-scoped ``smtp`` fixture. The example would still work if ``smtp``
was cached on a ``session`` scope: it is fine for fixtures to use
"broader" scoped fixtures but not the other way round:
A session-scoped fixture could not use a module-scoped one in a
meaningful way.
@ -494,11 +494,11 @@ Automatic grouping of tests by fixture instances
pytest minimizes the number of active fixtures during test runs.
If you have a parametrized fixture, then all the tests using it will
first execute with one instance and then finalizers are called
before the next fixture instance is created. Among other things,
this eases testing of applications which create and use global state.
The following example uses two parametrized funcargs, one of which is
scoped on a per-module basis, and all the functions perform ``print`` calls
to show the setup/teardown flow::
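The module itself is unchanged and not shown in the diff; reconstructed from
the verbose output below, it looks roughly like::

    # content of test_module.py
    import pytest

    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        print ("create %s" % param)
        def fin():
            print ("fin %s" % param)
        request.addfinalizer(fin)
        return param

    @pytest.fixture(scope="function", params=[1, 2])
    def otherarg(request):
        return request.param

    def test_0(otherarg):
        print ("  test0 %s" % otherarg)

    def test_1(modarg):
        print ("  test1 %s" % modarg)

    def test_2(otherarg, modarg):
        print ("  test2 %s %s" % (otherarg, modarg))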
@ -530,7 +530,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 8 items
test_module.py:15: test_0[1] test0 1
PASSED
test_module.py:15: test_0[2] test0 2
@ -549,11 +549,11 @@ Let's run the tests in verbose mode and with looking at the print-output::
PASSED
test_module.py:19: test_2[2-mod2] test2 2 mod2
PASSED
========================= 8 passed in 0.01 seconds =========================
You can see that the parametrized module-scoped ``modarg`` resource caused
an ordering of test execution that led to the fewest possible "active" resources. The finalizer for the ``mod1`` parametrized resource was executed
before the ``mod2`` resource was setup.
@ -573,7 +573,7 @@ achieve it. We separate the creation of the fixture into a conftest.py
file::
# content of conftest.py
import pytest
import tempfile
import os
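# (sketch: the unchanged fixture definition, omitted by this diff hunk)
@pytest.fixture
def cleandir():
    newpath = tempfile.mkdtemp()
    os.chdir(newpath)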
@ -612,12 +612,12 @@ You can specify multiple fixtures like this::
@pytest.mark.usefixtures("cleandir", "anotherfixture")
and you may specify fixture usage at the test module level, using
a generic feature of the mark mechanism::
pytestmark = pytest.mark.usefixtures("cleandir")
Lastly you can put fixtures required by all tests in your project
into an ini-file::
# content of pytest.ini
@ -635,14 +635,14 @@ autouse fixtures (xUnit setup on steroids)
.. regendoc:wipe
Occasionally, you may want to have fixtures get invoked automatically
without a `usefixtures`_ or `funcargs`_ reference. As a practical
example, suppose we have a database fixture which has a
begin/rollback/commit architecture and we want to automatically surround
each test method by a transaction and a rollback. Here is a dummy
self-contained implementation of this idea::
# content of test_db_transact.py
import pytest
class DB:
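    # (sketch: the unchanged remainder of the example, omitted by this diff hunk)
    def __init__(self):
        self.intransaction = []
    def begin(self, name):
        self.intransaction.append(name)
    def rollback(self):
        self.intransaction.pop()

@pytest.fixture(scope="module")
def db():
    return DB()

class TestClass:
    @pytest.fixture(autouse=True)
    def transact(self, request, db):
        db.begin(request.function.__name__)
        request.addfinalizer(db.rollback)

    def test_method1(self, db):
        assert db.intransaction == ["test_method1"]

    def test_method2(self, db):
        assert db.intransaction == ["test_method2"]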
@ -682,11 +682,11 @@ If we run it, we get two passing tests::
Here is how autouse fixtures work in other scopes:
- if an autouse fixture is defined in a test module, all its test
functions automatically use it.
- if an autouse fixture is defined in a conftest.py file then all tests in
all test modules below its directory will invoke the fixture.
- lastly, and **please use that with care**: if you define an autouse
fixture in a plugin, it will be invoked for all tests in all projects
@ -697,7 +697,7 @@ Here is how autouse fixtures work in other scopes:
Note that the above ``transact`` fixture may very well be a fixture that
you want to make available in your project without having it generally
active. The canonical way to do that is to put the transact definition
into a conftest.py file **without** using ``autouse``::
# content of conftest.py
@ -714,7 +714,7 @@ and then e.g. have a TestClass using it by declaring the need::
...
All test methods in this TestClass will use the transaction fixture while
other test classes or functions in the module will not use it unless
they also add a ``transact`` reference.
Shifting (visibility of) fixture functions

View File

@ -23,7 +23,7 @@ Installation options::
To check your installation has installed the correct version::
$ py.test --version
This is py.test version 2.5.1, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
This is pytest version 2.5.1, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
If you get an error, check out :ref:`installation issues`.
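For context, the module being run below is the familiar getting-started
example (unchanged by this commit, hence not in the diff)::

    # content of test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5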
@ -47,21 +47,21 @@ That's it. You can execute the test function now::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 1 items
test_sample.py F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.01 seconds =========================
py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``.
``pytest`` found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``.
.. note::
@ -122,14 +122,14 @@ run the module by passing its filename::
.F
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x2b57dd0>
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.01 seconds
@ -142,7 +142,7 @@ Going functional: requesting a unique temporary directory
For functional tests one often needs to create some files
and pass them to application objects. pytest provides
:ref:`builtinfixtures` which allow you to request arbitrary
resources, for example a unique temporary directory::
# content of test_tmpdir.py
@ -151,21 +151,21 @@ resources, for example a unique temporary directory::
assert 0
We list the name ``tmpdir`` in the test function signature and
py.test will lookup and call a fixture factory to create the resource
``pytest`` will look up and call a fixture factory to create the resource
before performing the test function call. Let's just run it::
$ py.test -q test_tmpdir.py
F
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-38/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-38/test_needsfiles0
@ -186,7 +186,7 @@ Here are a few suggestions where to go next:
* :ref:`cmdline` for command line invocation examples
* :ref:`good practises <goodpractises>` for virtualenv, test layout, genscript support
* :ref:`fixtures` for providing a functional baseline to your tests
* :ref:`apiref` for documentation and examples on using py.test
* :ref:`apiref` for documentation and examples on using ``pytest``
* :ref:`plugins` managing and writing plugins
.. _`installation issues`:
@ -221,7 +221,7 @@ py.test not found on Windows despite installation?
- **Jython2.5.1 on Windows XP**: `Jython does not create command line launchers`_
so ``py.test`` will not work correctly. You may install py.test on
CPython and type ``py.test --genscript=mytest`` and then use
``jython mytest`` to run py.test for your tests to run with Jython.
``jython mytest`` to run your tests with Jython using ``pytest``.
:ref:`examples` for more complex examples

View File

@ -8,8 +8,8 @@ Good Integration Practises
Work with virtual environments
-----------------------------------------------------------
We recommend using virtualenv_ environments and pip_
(or easy_install_) for installing your application and any dependencies
as well as the ``pytest`` package itself. This way you will get an isolated
and reproducible environment. Given you have installed virtualenv_
and execute it from the command line, here is an example session for unix
@ -27,19 +27,19 @@ We can now install pytest::
Due to the ``activate`` step above the ``pip`` will come from
the virtualenv directory and install any package into the isolated
virtual environment.
Choosing a test layout / import rules
------------------------------------------
py.test supports two common test layouts:
``pytest`` supports two common test layouts:
* putting tests into an extra directory outside your actual application
code, useful if you have many functional tests or for other reasons
want to keep tests separate from actual application code (often a good
idea)::
setup.py # your distutils/setuptools Python package metadata
mypkg/
__init__.py
appmodule.py
@ -48,11 +48,11 @@ py.test supports two common test layouts:
...
* inlining test directories into your application package, useful if you
have direct relation between (unit-)test and application modules and
want to distribute your tests along with your application::
setup.py # your distutils/setuptools Python package metadata
mypkg/
__init__.py
appmodule.py
@ -68,11 +68,11 @@ Important notes relating to both schemes:
pip install -e . # install package using setup.py in editable mode
- **avoid "__init__.py" files in your test directories**.
This way your tests can run easily against an installed version
of ``mypkg``, regardless of whether the installed package contains
the tests or not.
- With inlined tests you might put ``__init__.py`` into test
directories and make them installable as part of your application.
Using the ``py.test --pyargs mypkg`` invocation pytest will
discover where mypkg is installed and collect tests from there.
@ -87,19 +87,19 @@ Typically you can run tests by pointing to test directories or modules::
py.test # run all tests below current dir
...
Because of the above ``editable install`` mode you can change your
source code (both tests and the app) and rerun tests at will.
Once you are done with your work, you can `use tox`_ to make sure
that the package is really correct and tests pass in all
required configurations.
.. note::
You can use Python3 namespace packages (PEP420) for your application
but pytest will still perform `test package name`_ discovery based on the
presence of ``__init__.py`` files. If you use one of the
two recommended file system layouts above but leave away the ``__init__.py``
files from your directories it should just work on Python3.3 and above. From
"inlined tests", however, you will need to use absolute imports for
getting at your application code.
@ -107,7 +107,7 @@ required configurations.
.. note::
If py.test finds a "a/b/test_module.py" test file while
If ``pytest`` finds a "a/b/test_module.py" test file while
recursing into the filesystem it determines the import name
as follows:
@ -155,17 +155,17 @@ to create a JUnitXML file that Jenkins_ can pick up and generate reports.
.. _standalone:
.. _`genscript method`:
Create a py.test standalone script
Create a pytest standalone script
-------------------------------------------
If you are a maintainer or application developer and want people
who don't deal with python much to easily run tests you may generate
a standalone "py.test" script::
who don't deal with python much to easily run tests you may generate
a standalone ``pytest`` script::
py.test --genscript=runtests.py
This generates a ``runtests.py`` script which is a fully functional basic
``py.test`` script, running unchanged under Python2 and Python3.
``pytest`` script, running unchanged under Python2 and Python3.
You can tell people to download the script and then e.g. run it like this::
python runtests.py
@ -176,7 +176,7 @@ Integrating with distutils / ``python setup.py test``
You can integrate test runs into your distutils or
setuptools based project. Use the `genscript method`_
to generate a standalone py.test script::
to generate a standalone ``pytest`` script::
py.test --genscript=runtests.py
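plus a small ``setup.py`` command that invokes the generated script; the glue
code is unchanged by this commit, but a sketch looks like::

    from distutils.core import setup, Command
    # you can also import Command from setuptools

    class PyTest(Command):
        user_options = []
        def initialize_options(self):
            pass
        def finalize_options(self):
            pass
        def run(self):
            import subprocess
            import sys
            # run the standalone script in a subprocess and propagate its exit code
            errno = subprocess.call([sys.executable, 'runtests.py'])
            raise SystemExit(errno)

    setup(
        name="mypkg",  # hypothetical package name
        cmdclass={'test': PyTest},
    )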
@ -207,7 +207,7 @@ If you now type::
python setup.py test
this will execute your tests using ``runtests.py``. As this is a
standalone version of ``py.test`` no prior installation whatsoever is
standalone version of ``pytest`` no prior installation whatsoever is
required for calling the test command. You can also pass additional
arguments to the subprocess-calls such as your test directory or other
options.
@ -244,7 +244,7 @@ Now if you run::
python setup.py test
this will download py.test if needed and then run py.test
this will download ``pytest`` if needed and then run your tests
as you would expect it to.
.. _`test discovery`:
@ -253,7 +253,7 @@ as you would expect it to.
Conventions for Python test discovery
-------------------------------------------------
``py.test`` implements the following standard test discovery:
``pytest`` implements the following standard test discovery:
* collection starts from the initial command line arguments
which may be directories, filenames or test ids.
@ -264,7 +264,7 @@ Conventions for Python test discovery
For examples of how to customize your test discovery :doc:`example/pythoncollection`.
Within Python modules, py.test also discovers tests using the standard
Within Python modules, ``pytest`` also discovers tests using the standard
:ref:`unittest.TestCase <unittest.TestCase>` subclassing technique.
.. include:: links.inc

View File

@ -3,7 +3,7 @@ Running tests written for nose
.. include:: links.inc
py.test has basic support for running tests written for nose_.
``pytest`` has basic support for running tests written for nose_.
.. _nosestyle:
@ -16,7 +16,7 @@ After :ref:`installation` type::
py.test # instead of 'nosetests'
and you should be able to run your nose style tests and
make use of py.test's capabilities.
make use of pytest's capabilities.
Supported nose Idioms
----------------------
@ -30,10 +30,10 @@ Supported nose Idioms
Unsupported idioms / known issues
----------------------------------
- unittest-style ``setUp, tearDown, setUpClass, tearDownClass``
are recognized only on ``unittest.TestCase`` classes but not
on plain classes. ``nose`` supports these methods also on plain
classes but pytest deliberately does not. As nose and pytest already
both support ``setup_class, teardown_class, setup_method, teardown_method``
it doesn't seem useful to duplicate the unittest-API like nose does.
If you however rather think pytest should support the unittest-spelling on
@ -44,9 +44,9 @@ Unsupported idioms / known issues
``tests.test_mod``) but different file system paths
(e.g. ``tests/test_mode.py`` and ``other/tests/test_mode.py``)
by extending sys.path/import semantics. pytest does not do that
but there is discussion in `issue268 <https://bitbucket.org/hpk42/pytest/issue/268>`_ for adding some support. Note that
`nose2 chose to avoid this sys.path/import hackery <https://nose2.readthedocs.org/en/latest/differences.html#test-discovery-and-loading>`_.
- nose-style doctests are not collected and executed correctly,
and doctest fixtures don't work.

View File

@ -15,10 +15,10 @@ pytest supports test parametrization in several well-integrated ways:
at the level of fixture functions <fixture-parametrize>`.
* `@pytest.mark.parametrize`_ allows defining parametrization at the
function or class level, providing multiple argument/fixture sets
for a particular test function or class.
* `pytest_generate_tests`_ enables implementing your own custom
dynamic parametrization scheme or extensions.
.. _parametrizemark:
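The module under discussion is only partially visible in the diff;
reconstructed from the failure output further down, it is roughly::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize("input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected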
@ -51,18 +51,18 @@ Here, the ``@parametrize`` decorator defines three different ``(input,expected)`
tuples so that the ``test_eval`` function will run three times using
them in turn::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 3 items
test_expectation.py ..F
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
input = '6*9', expected = 42
@pytest.mark.parametrize("input,expected", [
("3+5", 8),
("2+4", 6),
@ -72,7 +72,7 @@ them in turn::
> assert eval(input) == expected
E assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.01 seconds ====================
@ -102,9 +102,9 @@ Let's run this::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 3 items
test_expectation.py ..x
=================== 2 passed, 1 xfailed in 0.01 seconds ====================
The one parameter set which caused a failure previously now
@ -114,7 +114,7 @@ shows up as an "xfailed (expected to fail)" test.
In versions prior to 2.4 one needed to specify the argument
names as a tuple. This remains valid but the simpler ``"name1,name2,..."``
comma-separated-string syntax is now advertised first because
comma-separated-string syntax is now advertised first because
it's easier to write and produces less line noise.
.. _`pytest_generate_tests`:
@ -124,14 +124,14 @@ Basic ``pytest_generate_tests`` example
Sometimes you may want to implement your own parametrization scheme
or implement some dynamism for determining the parameters or scope
of a fixture. For this, you can use the ``pytest_generate_tests`` hook
of a fixture. For this, you can use the ``pytest_generate_tests`` hook
which is called when collecting a test function. Through the passed-in
`metafunc` object you can inspect the requesting test context and, most
importantly, you can call ``metafunc.parametrize()`` to cause
parametrization.
parametrization.
For example, let's say we want to run a test taking string inputs which
we want to set via a new py.test command line option. Let's first write
we want to set via a new ``pytest`` command line option. Let's first write
a simple test accepting a ``stringinput`` fixture function argument::
# content of test_strings.py
@ -139,7 +139,7 @@ a simple test accepting a ``stringinput`` fixture function argument::
def test_valid_string(stringinput):
assert stringinput.isalpha()
Now we add a ``conftest.py`` file containing the addition of a
Now we add a ``conftest.py`` file containing the addition of a
command line option and the parametrization of our test function::
# content of conftest.py
@ -150,7 +150,7 @@ command line option and the parametrization of our test function::
def pytest_generate_tests(metafunc):
if 'stringinput' in metafunc.fixturenames:
metafunc.parametrize("stringinput",
metafunc.parametrize("stringinput",
metafunc.config.option.stringinput)
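Pieced together, the complete ``conftest.py`` would look roughly like
this sketch (the ``--stringinput`` option registration is reconstructed
from context)::

    # content of conftest.py (sketch)
    def pytest_addoption(parser):
        # register a repeatable --stringinput command line option
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if 'stringinput' in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.option.stringinput)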
If we now pass two stringinput values, our test will run twice::
@ -165,24 +165,24 @@ Let's also run with a stringinput that will lead to a failing test::
F
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________
stringinput = '!'
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E assert <built-in method isalpha of str object at 0x2b72934ca198>()
E + where <built-in method isalpha of str object at 0x2b72934ca198> = '!'.isalpha
test_strings.py:3: AssertionError
1 failed in 0.01 seconds
As expected our test function fails.
As expected, our test function fails.
If you don't specify a stringinput it will be skipped because
``metafunc.parametrize()`` will be called with an empty parameter
list::
$ py.test -q -rs test_strings.py
$ py.test -q -rs test_strings.py
s
========================= short test summary info ==========================
SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1094: got empty parameter set, function test_valid_string at /tmp/doc-exec-24/test_strings.py:1
View File
@ -3,9 +3,9 @@
Working with plugins and conftest files
=============================================
py.test implements all aspects of configuration, collection, running and reporting by calling `well specified hooks`_. Virtually any Python module can be registered as a plugin. It can implement any number of hook functions (usually two or three) which all have a ``pytest_`` prefix, making hook functions easy to distinguish and find. There are three basic location types:
``pytest`` implements all aspects of configuration, collection, running and reporting by calling `well specified hooks`_. Virtually any Python module can be registered as a plugin. It can implement any number of hook functions (usually two or three) which all have a ``pytest_`` prefix, making hook functions easy to distinguish and find. There are three basic location types:
* `builtin plugins`_: loaded from py.test's internal ``_pytest`` directory.
* `builtin plugins`_: loaded from pytest's internal ``_pytest`` directory.
* `external plugins`_: modules discovered through `setuptools entry points`_
* `conftest.py plugins`_: modules auto-discovered in test directories
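As a minimal illustration (a sketch, not taken from an existing
plugin), a ``conftest.py`` plugin implementing a single hook could look
like this::

    # content of conftest.py (illustrative sketch)
    def pytest_report_header(config):
        # contribute an extra line to the terminal report header
        return "project: myproject (example)"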
@ -63,7 +63,7 @@ tool, for example::
pip install pytest-NAME
pip uninstall pytest-NAME
If a plugin is installed, py.test automatically finds and integrates it,
If a plugin is installed, ``pytest`` automatically finds and integrates it;
there is no need to activate it. Here is an initial list of known plugins:
.. _`django`: https://www.djangoproject.com/
@ -84,14 +84,14 @@ there is no need to activate it. Here is a initial list of known plugins:
* `pytest-xdist <http://pypi.python.org/pypi/pytest-xdist>`_:
to distribute tests to CPUs and remote hosts, to run in boxed
mode which allows to survive segmentation faults, to run in
looponfailing mode, automatically re-running failing tests
looponfailing mode, automatically re-running failing tests
on file changes, see also :ref:`xdist`
* `pytest-instafail <http://pypi.python.org/pypi/pytest-instafail>`_:
to report failures while the test run is happening.
* `pytest-bdd <http://pypi.python.org/pypi/pytest-bdd>`_ and
`pytest-konira <http://pypi.python.org/pypi/pytest-konira>`_
`pytest-konira <http://pypi.python.org/pypi/pytest-konira>`_
to write tests using behaviour-driven testing.
* `pytest-timeout <http://pypi.python.org/pypi/pytest-timeout>`_:
@ -122,7 +122,7 @@ If you want to write a plugin, there are many real-life examples
you can copy from:
* a custom collection example plugin: :ref:`yaml plugin`
* around 20 `builtin plugins`_ which provide py.test's own functionality
* around 20 `builtin plugins`_ which provide pytest's own functionality
* many `external plugins`_ providing additional features
All of these plugins implement the documented `well specified hooks`_
@ -135,9 +135,9 @@ Making your plugin installable by others
If you want to make your plugin externally available, you
may define a so-called entry point for your distribution so
that ``py.test`` finds your plugin module. Entry points are
that ``pytest`` finds your plugin module. Entry points are
a feature that is provided by `setuptools`_ or `Distribute`_.
py.test looks up the ``pytest11`` entrypoint to discover its
pytest looks up the ``pytest11`` entrypoint to discover its
plugins and you can thus make your plugin available by defining
it in your setuptools/distribute-based setup-invocation:
@ -150,7 +150,7 @@ it in your setuptools/distribute-based setup-invocation:
name="myproject",
packages = ['myproject']
# the following makes a plugin available to py.test
# the following makes a plugin available to pytest
entry_points = {
'pytest11': [
'name_of_plugin = myproject.pluginmodule',
@ -158,7 +158,7 @@ it in your setuptools/distribute-based setup-invocation:
},
)
If a package is installed this way, py.test will load
If a package is installed this way, ``pytest`` will load
``myproject.pluginmodule`` as a plugin which can define
`well specified hooks`_.
@ -167,7 +167,7 @@ If a package is installed this way, py.test will load
Plugin discovery order at tool startup
--------------------------------------------
py.test loads plugin modules at tool startup in the following way:
``pytest`` loads plugin modules at tool startup in the following way:
* by loading all builtin plugins
@ -197,7 +197,7 @@ will be loaded as well. You can also use dotted path like this::
pytest_plugins = "myapp.testsupport.myplugin"
which will import the specified module as a py.test plugin.
which will import the specified module as a ``pytest`` plugin.
Accessing another plugin by name
@ -243,7 +243,7 @@ how to obtain the name of a plugin.
.. _`builtin plugins`:
py.test default plugin reference
pytest default plugin reference
====================================
@ -277,14 +277,14 @@ in the `pytest repository <http://bitbucket.org/hpk42/pytest/>`_.
.. _`well specified hooks`:
py.test hook reference
pytest hook reference
====================================
Hook specification and validation
-----------------------------------------
py.test calls hook functions to implement initialization, running,
test execution and reporting. When py.test loads a plugin it validates
``pytest`` calls hook functions to implement initialization, running,
test execution and reporting. When ``pytest`` loads a plugin it validates
that each hook function conforms to its respective hook specification.
Each hook function name and its argument names need to match a hook
specification. However, a hook function may accept *fewer* parameters
@ -327,7 +327,7 @@ the reporting hook to print information about a test run.
Collection hooks
------------------------------
py.test calls the following hooks for collecting files and directories:
``pytest`` calls the following hooks for collecting files and directories:
.. autofunction:: pytest_ignore_collect
.. autofunction:: pytest_collect_directory
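For instance, a ``conftest.py`` could implement the first of these
hooks like so (a sketch; the directory name is an assumption)::

    # content of conftest.py (illustrative sketch)
    def pytest_ignore_collect(path, config):
        # returning True keeps pytest from collecting anything
        # below directories named "_build"
        return path.basename == "_build"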
View File
@ -20,9 +20,9 @@
Project examples
==========================
Here are some examples of projects using py.test (please send notes via :ref:`contact`):
Here are some examples of projects using ``pytest`` (please send notes via :ref:`contact`):
* `PyPy <http://pypy.org>`_, Python with a JIT compiler, running over
* `PyPy <http://pypy.org>`_, Python with a JIT compiler, running over
`21000 tests <http://buildbot.pypy.org/summary?branch=%3Ctrunk%3E>`_
* the `MoinMoin <http://moinmo.in>`_ Wiki Engine
* `sentry <https://getsentry.com/welcome/>`_, realtime app-maintenance and exception tracking
@ -60,7 +60,7 @@ Here are some examples of projects using py.test (please send notes via :ref:`co
* `pytest-localserver <https://bitbucket.org/basti/pytest-localserver/>`_ a plugin for pytest that provides a httpserver and smtpserver
* `pytest-monkeyplus <http://pypi.python.org/pypi/pytest-monkeyplus/>`_ a plugin that extends monkeypatch
These projects help integrate py.test into other Python frameworks:
These projects help integrate ``pytest`` into other Python frameworks:
* `pytest-django <http://pypi.python.org/pypi/pytest-django/>`_ for Django
* `zope.pytest <http://packages.python.org/zope.pytest/>`_ for Zope and Grok
@ -68,7 +68,7 @@ These projects help integrate py.test into other Python frameworks:
* There is `some work <https://github.com/Kotti/Kotti/blob/master/kotti/testing.py>`_ underway for Kotti, a CMS built in Pyramid/Pylons
Some organisations using py.test
Some organisations using pytest
-----------------------------------
* `Square Kilometre Array, Cape Town <http://ska.ac.za/>`_
View File
@ -14,7 +14,7 @@ A *skip* means that you expect your test to pass unless the environment
And *xfail* means that your test can run but you expect it to fail
because there is an implementation problem.
py.test counts and lists *skip* and *xfail* tests separately. Detailed
``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
information about skipped/xfailed tests is not shown by default to avoid
cluttering the output. You can use the ``-r`` option to see details
corresponding to the "short" letters shown in the test progress::
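For example (an illustrative invocation; the ``-r`` characters select
which outcome details to show)::

    py.test -rxs      # show extra info on xfailed and skipped tests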
@ -29,7 +29,7 @@ corresponding to the "short" letters shown in the test progress::
Marking a test function to be skipped
-------------------------------------------
.. versionadded:: 2.0, 2.4
.. versionadded:: 2.0, 2.4
Here is an example of marking a test function to be skipped
when run on a Python3.3 interpreter::
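A sketch reconstructing such an example (the ``reason`` text is an
assumption)::

    import sys
    import pytest

    @pytest.mark.skipif(sys.version_info >= (3, 3),
                        reason="not yet ported to python3.3")
    def test_function():
        pass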
@ -104,7 +104,7 @@ you can set the ``pytestmark`` attribute of a class::
"will not be setup or run under 'win32' platform"
As with the class-decorator, the ``pytestmark`` special name tells
py.test to apply it to each test function in the class.
``pytest`` to apply it to each test function in the class.
If you want to skip all test functions of a module, you must use
the ``pytestmark`` name on the global level::
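A reconstruction sketch of such a module-level mark (condition and
reason chosen for illustration)::

    # content of test_module.py
    import sys
    import pytest

    # applied to every test function in this module
    pytestmark = pytest.mark.skipif(sys.platform == 'win32',
                                    reason="tests for linux only")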
@ -161,12 +161,12 @@ Running it with the report-on-xfail option gives this output::
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.5.1
collected 6 items
xfail_demo.py xxxxxx
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
@ -175,7 +175,7 @@ Running it with the report-on-xfail option gives this output::
condition: pytest.__version__[0] != "17"
XFAIL xfail_demo.py::test_hello6
reason: reason
======================== 6 xfailed in 0.04 seconds =========================
.. _`skip/xfail with parametrize`:
@ -255,7 +255,7 @@ because markers can then be freely imported between test modules.
With strings you need to import not only the marker but also all
variables used by the marker, which violates encapsulation.
The reason for specifying the condition as a string was that py.test can
The reason for specifying the condition as a string was that ``pytest`` can
report a summary of skip conditions based purely on the condition string.
With conditions as booleans you are required to specify a ``reason`` string.
View File
@ -20,7 +20,7 @@ Basic usage and fixtures:
- `pycon australia 2012 pytest talk from Brianna Laugher
<http://2012.pycon-au.org/schedule/52/view_talk?day=sunday>`_ (`video <http://www.youtube.com/watch?v=DTNejE9EraI>`_, `slides <http://www.slideshare.net/pfctdayelise/funcargs-other-fun-with-pytest>`_, `code <https://gist.github.com/3386951>`_)
- `pycon 2012 US talk video from Holger Krekel <http://www.youtube.com/watch?v=9LVqBQcFmyw>`_
- `pycon 2012 US talk video from Holger Krekel <http://www.youtube.com/watch?v=9LVqBQcFmyw>`_
- `pycon 2010 tutorial PDF`_ and `tutorial1 repository`_
@ -39,8 +39,8 @@ Test parametrization:
Assertion introspection:
- `(07/2011) Behind the scenes of py.test's new assertion rewriting
<http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_
- `(07/2011) Behind the scenes of pytest's new assertion rewriting
<http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_
Distributed testing:
@ -48,11 +48,11 @@ Distributed testing:
Plugin specific examples:
- `skipping slow tests by default in py.test`_ (blog entry)
- `skipping slow tests by default in pytest`_ (blog entry)
- `many examples in the docs for plugins`_
.. _`skipping slow tests by default in py.test`: http://bruynooghe.blogspot.com/2009/12/skipping-slow-test-by-default-in-pytest.html
.. _`skipping slow tests by default in pytest`: http://bruynooghe.blogspot.com/2009/12/skipping-slow-test-by-default-in-pytest.html
.. _`many examples in the docs for plugins`: plugin/index.html
.. _`monkeypatch plugin`: plugin/monkeypatch.html
.. _`application setup in test functions with funcargs`: funcargs.html#appsetup
@ -69,14 +69,14 @@ Older conference talks and tutorials
- `ep2009-rapidtesting.pdf`_ tutorial slides (July 2009):
- testing terminology
- basic py.test usage, file system layout
- basic pytest usage, file system layout
- test function arguments (funcargs_) and test fixtures
- existing plugins
- distributed testing
- `ep2009-pytest.pdf`_ 60 minute py.test talk, highlighting unique features and a roadmap (July 2009)
- `ep2009-pytest.pdf`_ 60 minute pytest talk, highlighting unique features and a roadmap (July 2009)
- `pycon2009-pytest-introduction.zip`_ slides and files, extended version of py.test basic introduction, discusses more options, also introduces old-style xUnit setup, looponfailing and other features.
- `pycon2009-pytest-introduction.zip`_ slides and files, extended version of pytest basic introduction, discusses more options, also introduces old-style xUnit setup, looponfailing and other features.
- `pycon2009-pytest-advanced.pdf`_ contain a slightly older version of funcargs and distributed testing, compared to the EuroPython 2009 slides.
View File
@ -13,7 +13,7 @@ writing conftest.py files
You may put conftest.py files containing project-specific
configuration in your project's root directory; it's usually
best to put it just into the same directory level as your
topmost ``__init__.py``. In fact, ``py.test`` performs
topmost ``__init__.py``. In fact, ``pytest`` performs
an "upwards" search starting from the directory that you specify
to be tested and will look up configuration values right-to-left.
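A layout along these lines (directory names purely illustrative) is
therefore a common choice::

    mypkg/
        __init__.py      # topmost __init__.py
        conftest.py      # project-specific configuration, same level
        subpkg/
            __init__.py
        test_feature.py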
You may have options that reside e.g. in your home directory
View File
@ -1,5 +1,5 @@
=======================================
py.test documentation index
pytest documentation index
=======================================
@ -17,7 +17,7 @@ customize_: configuration, customization, extensions
changelog_: history of changes covering last releases
**Continuous Integration of py.test's own tests and plugins with Hudson**:
**Continuous Integration of pytest's own tests and plugins with Hudson**:
`http://hudson.testrun.org/view/pytest`_
View File
@ -2,10 +2,10 @@
Mission
====================================
py.test strives to make testing a fun and no-boilerplate effort.
``pytest`` strives to make testing a fun and no-boilerplate effort.
The tool is distributed as part of the `py` package which contains supporting APIs that
are also usable independently. The project independent ``py.test`` command line tool helps you to:
The tool is distributed as a `pytest` package. Its project-independent
``py.test`` command line tool helps you to:
* rapidly collect and run tests
* run unit- or doctests, functional or integration tests
View File
@ -1,7 +1,7 @@
pytest_django plugin (EXTERNAL)
==========================================
pytest_django is a plugin for py.test that provides a set of useful tools for testing Django applications, checkout Ben Firshman's `pytest_django github page`_.
pytest_django is a plugin for ``pytest`` that provides a set of useful tools for testing Django applications; check out Ben Firshman's `pytest_django github page`_.
.. _`pytest_django github page`: http://github.com/bfirsh/pytest_django/tree/master
View File
@ -13,7 +13,7 @@ command line options
``--genscript=path``
create standalone py.test script at given target path.
create standalone ``pytest`` script at given target path.
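For example (target file name illustrative)::

    py.test --genscript=runtests.py   # writes a self-contained runtests.py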
Start improving this plugin in 30 seconds
=========================================
@ -21,7 +21,7 @@ Start improving this plugin in 30 seconds
1. Download `pytest_genscript.py`_ plugin source code
2. put it somewhere as ``pytest_genscript.py`` into your import path
3. a subsequent ``py.test`` run will use your local version
3. a subsequent ``pytest`` run will use your local version
Check out customize_, other plugins_ or `get in contact`_.
View File
@ -19,7 +19,7 @@ command line options
``--traceconfig``
trace considerations of conftest.py files.
``--nomagic``
don't reinterpret asserts, no traceback cutting.
don't reinterpret asserts, no traceback cutting.
``--debug``
generate and show internal debugging information.
``--help-config``
@ -31,7 +31,7 @@ Start improving this plugin in 30 seconds
1. Download `pytest_helpconfig.py`_ plugin source code
2. put it somewhere as ``pytest_helpconfig.py`` into your import path
3. a subsequent ``py.test`` run will use your local version
3. a subsequent ``pytest`` run will use your local version
Check out customize_, other plugins_ or `get in contact`_.
View File
@ -22,7 +22,7 @@
.. _`capturelog`: capturelog.html
.. _`junitxml`: junitxml.html
.. _`pytest_skipping.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.3.4/py/_plugin/pytest_skipping.py
.. _`checkout the py.test development version`: ../../install.html#checkout
.. _`checkout the pytest development version`: ../../install.html#checkout
.. _`pytest_helpconfig.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.3.4/py/_plugin/pytest_helpconfig.py
.. _`oejskit`: oejskit.html
.. _`doctest`: doctest.html
View File
@ -7,7 +7,7 @@ nose-compatibility plugin: allow to run nose test suites natively.
:local:
This is an experimental plugin that allows running tests written
in 'nosetests style with py.test.
in 'nosetests' style with ``pytest``.
Usage
-------------
@ -17,7 +17,7 @@ type::
py.test # instead of 'nosetests'
and you should be able to run nose style tests and at the same
time can make full use of py.test's capabilities.
time can make full use of pytest's capabilities.
Supported nose Idioms
----------------------
@ -40,7 +40,7 @@ If you find other issues or have suggestions please run::
py.test --pastebin=all
and send the resulting URL to a py.test contact channel,
and send the resulting URL to a ``pytest`` contact channel,
preferably to the mailing list.
Start improving this plugin in 30 seconds
@ -49,7 +49,7 @@ Start improving this plugin in 30 seconds
1. Download `pytest_nose.py`_ plugin source code
2. put it somewhere as ``pytest_nose.py`` into your import path
3. a subsequent ``py.test`` run will use your local version
3. a subsequent ``pytest`` run will use your local version
Check out customize_, other plugins_ or `get in contact`_.
View File
@ -1,7 +1,7 @@
pytest_oejskit plugin (EXTERNAL)
==========================================
The `oejskit`_ offers a py.test plugin for running Javascript tests in life browsers. Running inside the browsers comes with some speed cost, on the other hand it means for example the code is tested against the real-word DOM implementations.
The `oejskit`_ offers a ``pytest`` plugin for running Javascript tests in live browsers. Running inside the browsers comes with some speed cost; on the other hand, it means the code is tested against real-world DOM implementations.
The approach enables writing integration tests such that the JavaScript code is tested against server-side Python code mocked as necessary. Any server-side framework that can already be exposed through WSGI (or for which a WSGI subset can be written to accommodate jskit's own needs) can play along.
For more info and download please visit the `oejskit PyPI`_ page.
View File
@ -33,7 +33,7 @@ Start improving this plugin in 30 seconds
1. Download `pytest_terminal.py`_ plugin source code
2. put it somewhere as ``pytest_terminal.py`` into your import path
3. a subsequent ``py.test`` run will use your local version
3. a subsequent ``pytest`` run will use your local version
Check out customize_, other plugins_ or `get in contact`_.
View File
@ -6,13 +6,13 @@ loop on failing tests, distribute test runs to CPUs and hosts.
.. contents::
:local:
The `pytest-xdist`_ plugin extends py.test with some unique
The `pytest-xdist`_ plugin extends ``pytest`` with some unique
test execution modes:
* Looponfail: run your tests repeatedly in a subprocess. After each run py.test
waits until a file in your project changes and then re-runs the previously
failing tests. This is repeated until all tests pass after which again
a full run is performed.
* Looponfail: run your tests repeatedly in a subprocess. After each run
``pytest`` waits until a file in your project changes and then re-runs the
previously failing tests. This is repeated until all tests pass after which
again a full run is performed.
* Load-balancing: if you have multiple CPUs or hosts you can use
those for a combined test run. This allows you to speed up
@ -21,7 +21,7 @@ test execution modes:
* Multi-Platform coverage: you can specify different Python interpreters
or different platforms and run tests in parallel on all of them.
Before running tests remotely, ``py.test`` efficiently synchronizes your
Before running tests remotely, ``pytest`` efficiently synchronizes your
program source code to the remote place. All test results
are reported back and displayed to your local test session.
You may specify different Python versions and interpreters.
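Typical invocations for these modes might look like this (concrete
values are examples only)::

    py.test -n 4          # load-balance tests onto 4 local CPUs
    py.test -f            # looponfail: rerun failing tests on file changes
    py.test --dist=each --tx popen//python=python2.5 \
            --tx popen//python=python2.6   # run each test on both interpreters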
@ -77,11 +77,11 @@ and send them to remote places for execution.
You can specify multiple ``--rsyncdir`` directories
to be sent to the remote side.
**NOTE:** For py.test to collect and send tests correctly
**NOTE:** For ``pytest`` to collect and send tests correctly
you not only need to make sure all code and tests
directories are rsynced, but that any test (sub) directory
also has an ``__init__.py`` file because internally
py.test references tests as a fully qualified python
``pytest`` references tests using their fully qualified python
module path. **You will otherwise get strange errors**
during setup of the remote side.
@ -156,11 +156,11 @@ command line options
box each test run in a separate process (unix)
``--dist=distmode``
set mode for distributing tests to exec environments.
each: send each test to each available environment.
load: send each test to an available environment.
(default) no: run tests in-process, don't distribute.
``--tx=xspec``
add a test execution environment. some examples: --tx popen//python=python2.5 --tx socket=192.168.1.102:8888 --tx ssh=user@codespeak.net//chdir=testcache
View File
@ -64,7 +64,7 @@ You can override the default temporary directory setting like this::
py.test --basetemp=mydir
When distributing tests on the local machine, ``py.test`` takes care to
When distributing tests on the local machine, ``pytest`` takes care to
configure a basetemp directory for the sub processes such that all temporary
data lands below a single per-test run basetemp directory.
View File
@ -6,13 +6,13 @@ Support for unittest.TestCase / Integration of fixtures
.. _`unittest.py style`: http://docs.python.org/library/unittest.html
py.test has support for running Python `unittest.py style`_ tests.
``pytest`` has support for running Python `unittest.py style`_ tests.
It's meant for leveraging existing unittest-style projects
to use pytest features. Concretely, pytest will automatically
collect ``unittest.TestCase`` subclasses and their ``test`` methods in
test files. It will invoke typical setup/teardown methods and
generally try to make test suites written to run on unittest also
run using ``py.test``. We assume here that you are familiar with writing
run using ``pytest``. We assume here that you are familiar with writing
``unittest.TestCase`` style tests and rather focus on
integration aspects.
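For instance, a plain ``unittest.TestCase`` such as this sketch is
collected and run without modification::

    # content of test_simple_unittest.py (illustrative)
    import unittest

    class TestGreeting(unittest.TestCase):
        def setUp(self):
            # typical unittest setup, invoked by pytest as well
            self.value = "hello"

        def test_upper(self):
            self.assertEqual(self.value.upper(), "HELLO")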
@ -30,12 +30,12 @@ you can make use of most :ref:`pytest features <features>`, for example
:ref:`more informative tracebacks <tbreportdemo>`, stdout-capturing or
distributing tests to multiple CPUs via the ``-nNUM`` option if you
installed the ``pytest-xdist`` plugin. Please refer to
the general pytest documentation for many more examples.
the general ``pytest`` documentation for many more examples.
Mixing pytest fixtures into unittest.TestCase style tests
-----------------------------------------------------------
Running your unittest with ``py.test`` allows you to use its
Running your unittest with ``pytest`` allows you to use its
:ref:`fixture mechanism <fixture>` with ``unittest.TestCase`` style
tests. Assuming you have at least skimmed the pytest fixture features,
let's jump-start into an example that integrates a pytest ``db_class``
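A reconstruction sketch of such a class-scoped ``db_class`` fixture
(body and names assumed) might be::

    # content of conftest.py (sketch)
    import pytest

    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB:
            pass
        # attach a database object to the class of the requesting tests,
        # e.g. for use via @pytest.mark.usefixtures("db_class")
        request.cls.db = DummyDB()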
View File
@ -72,7 +72,7 @@ Dropping to PDB (Python Debugger) on failures
.. _PDB: http://docs.python.org/library/pdb.html
Python comes with a builtin Python debugger called PDB_. ``py.test``
Python comes with a builtin Python debugger called PDB_. ``pytest``
allows one to drop into the PDB prompt via a command line option::
py.test --pdb
@ -160,7 +160,7 @@ Calling pytest from Python code
.. versionadded:: 2.0
You can invoke ``py.test`` from Python code directly::
You can invoke ``pytest`` from Python code directly::
pytest.main()
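Command line options can be passed as a list or string, for example
(arguments illustrative)::

    pytest.main(['-x', 'mytestdir'])   # same as running: py.test -x mytestdir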
View File
@ -4,11 +4,11 @@
xdist: pytest distributed testing plugin
===============================================================
The `pytest-xdist`_ plugin extends py.test with some unique
The `pytest-xdist`_ plugin extends ``pytest`` with some unique
test execution modes:
* Looponfail: run your tests repeatedly in a subprocess. After each
run, py.test waits until a file in your project changes and then
run, ``pytest`` waits until a file in your project changes and then
re-runs the previously failing tests. This is repeated until all
tests pass. At this point a full run is again performed.
@ -19,7 +19,7 @@ test execution modes:
* Multi-Platform coverage: you can specify different Python interpreters
or different platforms and run tests in parallel on all of them.
Before running tests remotely, ``py.test`` efficiently "rsyncs" your
Before running tests remotely, ``pytest`` efficiently "rsyncs" your
program source code to the remote place. All test results
are reported back and displayed to your local terminal.
You may specify different Python versions and interpreters.
@ -86,7 +86,7 @@ you can use the looponfailing mode. Simply add the ``-f`` option::
py.test -f
and py.test will run your tests. Assuming you have failures it will then
and ``pytest`` will run your tests. Assuming you have failures it will then
wait for file changes and re-run the failing test set. File changes are detected by looking at ``looponfailroots`` root directories and all of their contents (recursively). If the default for this value does not work for you, you
can change it in your project by setting a configuration option::
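For example, in a ``tox.ini``/``setup.cfg``/``pytest.ini`` file (the
directory values are illustrative)::

    [pytest]
    looponfailroots = mypkg testing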
@ -115,11 +115,11 @@ to be sent to the remote side.
.. XXX CHECK
**NOTE:** For py.test to collect and send tests correctly
**NOTE:** For ``pytest`` to collect and send tests correctly
you not only need to make sure all code and tests
directories are rsynced, but that any test (sub) directory
also has an ``__init__.py`` file because internally
py.test references tests as a fully qualified python
``pytest`` references tests as a fully qualified python
module path. **You will otherwise get strange errors**
during setup of the remote side.
View File
@ -25,7 +25,7 @@ def main():
setup(
name='pytest',
description='py.test: simple powerful testing with Python',
description='pytest: simple powerful testing with Python',
long_description = long_description,
version='2.5.2.dev1',
url='http://pytest.org',
View File
@ -28,7 +28,7 @@ def test_assert_with_explicit_message():
assert e.msg == 'hello'
def test_assert_within_finally():
excinfo = py.test.raises(ZeroDivisionError, """
excinfo = pytest.raises(ZeroDivisionError, """
try:
1/0
finally:
@ -79,7 +79,7 @@ def test_is():
assert s.startswith("assert 1 is 2")
@py.test.mark.skipif("sys.version_info < (2,6)")
@pytest.mark.skipif("sys.version_info < (2,6)")
def test_attrib():
class Foo(object):
b = 1
@ -91,7 +91,7 @@ def test_attrib():
s = str(e)
assert s.startswith("assert 1 == 2")
@py.test.mark.skipif("sys.version_info < (2,6)")
@pytest.mark.skipif("sys.version_info < (2,6)")
def test_attrib_inst():
class Foo(object):
b = 1
@ -168,7 +168,7 @@ def test_assert_with_brokenrepr_arg():
def __repr__(self): 0 / 0
e = AssertionError(BrokenRepr())
if e.msg.find("broken __repr__") == -1:
py.test.fail("broken __repr__ not handle correctly")
pytest.fail("broken __repr__ not handle correctly")
def test_multiple_statements_per_line():
try:
@ -244,7 +244,7 @@ class TestView:
assert codelines == ["4 + 5", "getitem('', 'join')",
"setattr('x', 'y', 3)", "12 - 1"]
@py.test.mark.skipif("sys.version_info < (2,6)")
@pytest.mark.skipif("sys.version_info < (2,6)")
def test_assert_customizable_reprcompare(monkeypatch):
monkeypatch.setattr(util, '_reprcompare', lambda *args: 'hello')
try:
@ -323,7 +323,7 @@ def test_assert_raises_in_nonzero_of_object_pytest_issue10():
s = str(e)
assert "<MY42 object> < 0" in s
@py.test.mark.skipif("sys.version_info >= (2,6)")
@pytest.mark.skipif("sys.version_info >= (2,6)")
def test_oldinterpret_importation():
# we had a cyclic import there
# requires pytest on sys.path
View File
@ -280,7 +280,7 @@ def test_options_on_small_file_do_not_blow_up(testdir):
runfiletest(opts + [path])
def test_preparse_ordering_with_setuptools(testdir, monkeypatch):
pkg_resources = py.test.importorskip("pkg_resources")
pkg_resources = pytest.importorskip("pkg_resources")
def my_iter(name):
assert name == "pytest11"
class EntryPoint:
@ -302,7 +302,7 @@ def test_preparse_ordering_with_setuptools(testdir, monkeypatch):
assert plugin.x == 42
def test_plugin_preparse_prevents_setuptools_loading(testdir, monkeypatch):
pkg_resources = py.test.importorskip("pkg_resources")
pkg_resources = pytest.importorskip("pkg_resources")
def my_iter(name):
assert name == "pytest11"
class EntryPoint:
View File
@ -65,7 +65,7 @@ class TestBootstrapping:
assert l2 == l3
def test_consider_setuptools_instantiation(self, monkeypatch):
pkg_resources = py.test.importorskip("pkg_resources")
pkg_resources = pytest.importorskip("pkg_resources")
def my_iter(name):
assert name == "pytest11"
class EntryPoint:
@ -334,11 +334,11 @@ class TestPytestPluginInteractions:
return {'hello': 'world'}
""")
p = testdir.makepyfile("""
from py.test import hello
import py
from pytest import hello
import pytest
def test_hello():
assert hello == "world"
assert 'hello' in py.test.__all__
assert 'hello' in pytest.__all__
""")
reprec = testdir.inline_run(p)
reprec.assertoutcome(passed=1)
View File
@ -6,7 +6,7 @@ def test_version(testdir, pytestconfig):
assert result.ret == 0
#p = py.path.local(py.__file__).dirpath()
result.stderr.fnmatch_lines([
'*py.test*%s*imported from*' % (pytest.__version__, )
'*pytest*%s*imported from*' % (pytest.__version__, )
])
if pytestconfig.pluginmanager._plugin_distinfo:
result.stderr.fnmatch_lines([
View File
@ -1,7 +1,7 @@
import py, pytest
def setup_module(mod):
mod.nose = py.test.importorskip("nose")
mod.nose = pytest.importorskip("nose")
def test_nose_setup(testdir):
p = testdir.makepyfile("""
@ -112,7 +112,7 @@ def test_nose_setup_func_failure_2(testdir):
reprec.assertoutcome(passed=1)
def test_nose_setup_partial(testdir):
py.test.importorskip("functools")
pytest.importorskip("functools")
p = testdir.makepyfile("""
from functools import partial
View File
@ -59,7 +59,7 @@ def test_parseconfig(testdir):
config1 = testdir.parseconfig()
config2 = testdir.parseconfig()
assert config2 != config1
assert config1 != py.test.config
assert config1 != pytest.config
def test_testdir_runs_with_plugin(testdir):
testdir.makepyfile("""
View File
@ -37,7 +37,7 @@ def test_recwarn_functional(testdir):
assert tuple(res) == (2, 0, 0), res
#
# ============ test py.test.deprecated_call() ==============
# ============ test pytest.deprecated_call() ==============
#
def dep(i):
@ -53,14 +53,14 @@ def dep_explicit(i):
def test_deprecated_call_raises():
excinfo = pytest.raises(AssertionError,
"py.test.deprecated_call(dep, 3)")
"pytest.deprecated_call(dep, 3)")
assert str(excinfo).find("did not produce") != -1
def test_deprecated_call():
py.test.deprecated_call(dep, 0)
pytest.deprecated_call(dep, 0)
def test_deprecated_call_ret():
ret = py.test.deprecated_call(dep, 0)
ret = pytest.deprecated_call(dep, 0)
assert ret == 42
def test_deprecated_call_preserves():
@ -73,9 +73,9 @@ def test_deprecated_call_preserves():
def test_deprecated_explicit_call_raises():
pytest.raises(AssertionError,
"py.test.deprecated_call(dep_explicit, 3)")
"pytest.deprecated_call(dep_explicit, 3)")
def test_deprecated_explicit_call():
py.test.deprecated_call(dep_explicit, 0)
py.test.deprecated_call(dep_explicit, 0)
pytest.deprecated_call(dep_explicit, 0)
pytest.deprecated_call(dep_explicit, 0)
View File
@ -268,7 +268,7 @@ class BaseFunctionalTests:
raise SystemExit(42)
""")
except SystemExit:
py.test.fail("runner did not catch SystemExit")
pytest.fail("runner did not catch SystemExit")
rep = reports[1]
assert rep.failed
assert rep.when == "call"
@ -280,10 +280,10 @@ class BaseFunctionalTests:
def test_func():
raise pytest.exit.Exception()
""")
except py.test.exit.Exception:
except pytest.exit.Exception:
pass
else:
py.test.fail("did not raise")
pytest.fail("did not raise")
class TestExecutionNonForked(BaseFunctionalTests):
def getrunner(self):
@ -300,14 +300,14 @@ class TestExecutionNonForked(BaseFunctionalTests):
except KeyboardInterrupt:
pass
else:
py.test.fail("did not raise")
pytest.fail("did not raise")
class TestExecutionForked(BaseFunctionalTests):
pytestmark = pytest.mark.skipif("not hasattr(os, 'fork')")
def getrunner(self):
# XXX re-arrange this test to live in pytest-xdist
xplugin = py.test.importorskip("xdist.plugin")
xplugin = pytest.importorskip("xdist.plugin")
return xplugin.forked_run_report
def test_suicide(self, testdir):
@ -417,15 +417,15 @@ def test_outcomeexception_exceptionattributes():
def test_pytest_exit():
try:
py.test.exit("hello")
except py.test.exit.Exception:
pytest.exit("hello")
except pytest.exit.Exception:
excinfo = py.code.ExceptionInfo()
assert excinfo.errisinstance(KeyboardInterrupt)
def test_pytest_fail():
try:
py.test.fail("hello")
except py.test.fail.Exception:
pytest.fail("hello")
except pytest.fail.Exception:
excinfo = py.code.ExceptionInfo()
s = excinfo.exconly(tryshort=True)
assert s.startswith("Failed")
@ -454,45 +454,45 @@ def test_exception_printing_skip():
assert s.startswith("Skipped")
def test_importorskip():
importorskip = py.test.importorskip
importorskip = pytest.importorskip
def f():
importorskip("asdlkj")
try:
sys = importorskip("sys") # noqa
assert sys == py.std.sys
#path = py.test.importorskip("os.path")
#path = pytest.importorskip("os.path")
#assert path == py.std.os.path
excinfo = pytest.raises(pytest.skip.Exception, f)
path = py.path.local(excinfo.getrepr().reprcrash.path)
# check that importorskip reports the actual call
# in this test the test_runner.py file
assert path.purebasename == "test_runner"
pytest.raises(SyntaxError, "py.test.importorskip('x y z')")
pytest.raises(SyntaxError, "py.test.importorskip('x=y')")
pytest.raises(SyntaxError, "pytest.importorskip('x y z')")
pytest.raises(SyntaxError, "pytest.importorskip('x=y')")
mod = py.std.types.ModuleType("hello123")
mod.__version__ = "1.3"
sys.modules["hello123"] = mod
pytest.raises(pytest.skip.Exception, """
py.test.importorskip("hello123", minversion="1.3.1")
pytest.importorskip("hello123", minversion="1.3.1")
""")
mod2 = pytest.importorskip("hello123", minversion="1.3")
assert mod2 == mod
except pytest.skip.Exception:
print(py.code.ExceptionInfo())
py.test.fail("spurious skip")
pytest.fail("spurious skip")
def test_importorskip_imports_last_module_part():
ospath = py.test.importorskip("os.path")
ospath = pytest.importorskip("os.path")
assert os.path == ospath
def test_pytest_cmdline_main(testdir):
p = testdir.makepyfile("""
import py
import pytest
def test_hello():
assert 1
if __name__ == '__main__':
py.test.cmdline.main([__file__])
pytest.cmdline.main([__file__])
""")
import subprocess
popen = subprocess.Popen([sys.executable, str(p)], stdout=subprocess.PIPE)
View File
@ -54,7 +54,7 @@ class SessionTests:
out = failed[0].longrepr.reprcrash.message
if not out.find("DID NOT RAISE") != -1:
print(out)
py.test.fail("incorrect raises() output")
pytest.fail("incorrect raises() output")
def test_generator_yields_None(self, testdir):
reprec = testdir.inline_runsource("""
@ -127,7 +127,7 @@ class SessionTests:
try:
reprec = testdir.inline_run(testdir.tmpdir)
except pytest.skip.Exception:
py.test.fail("wrong skipped caught")
pytest.fail("wrong skipped caught")
reports = reprec.getreports("pytest_collectreport")
assert len(reports) == 1
assert reports[0].skipped
View File
@ -27,9 +27,9 @@ def test_funcarg(testdir):
assert bn == "qwe__abc"
def test_ensuretemp(recwarn):
#py.test.deprecated_call(py.test.ensuretemp, 'hello')
d1 = py.test.ensuretemp('hello')
d2 = py.test.ensuretemp('hello')
#pytest.deprecated_call(pytest.ensuretemp, 'hello')
d1 = pytest.ensuretemp('hello')
d2 = pytest.ensuretemp('hello')
assert d1 == d2
assert d1.check(dir=1)