Merge branch 'master' into master

Skylar Downes 2017-04-06 16:35:38 -07:00 committed by GitHub
commit 3aa4fb62d6
140 changed files with 2749 additions and 914 deletions


@ -3,6 +3,9 @@ Thanks for submitting a PR, your contribution is really appreciated!
Here's a quick checklist that should be present in PRs:
- [ ] Target: for bug or doc fixes, target `master`; for new features, target `features`;
Unless your change is trivial documentation fix (e.g., a typo or reword of a small section) please:
- [ ] Make sure to include one or more tests for your change;
- [ ] Add yourself to `AUTHORS`;
- [ ] Add a new entry to `CHANGELOG.rst`


@ -8,27 +8,37 @@ install: "pip install -U tox"
env:
matrix:
# coveralls is not listed in tox's envlist, but should run in travis
- TESTENV=coveralls
- TOXENV=coveralls
# note: please use "tox --listenvs" to populate the build matrix below
- TESTENV=linting
- TESTENV=py26
- TESTENV=py27
- TESTENV=py33
- TESTENV=py34
- TESTENV=py35
- TESTENV=pypy
- TESTENV=py27-pexpect
- TESTENV=py27-xdist
- TESTENV=py27-trial
- TESTENV=py35-pexpect
- TESTENV=py35-xdist
- TESTENV=py35-trial
- TESTENV=py27-nobyte
- TESTENV=doctesting
- TESTENV=freeze
- TESTENV=docs
- TOXENV=linting
- TOXENV=py26
- TOXENV=py27
- TOXENV=py33
- TOXENV=py34
- TOXENV=py35
- TOXENV=pypy
- TOXENV=py27-pexpect
- TOXENV=py27-xdist
- TOXENV=py27-trial
- TOXENV=py35-pexpect
- TOXENV=py35-xdist
- TOXENV=py35-trial
- TOXENV=py27-nobyte
- TOXENV=doctesting
- TOXENV=freeze
- TOXENV=docs
script: tox --recreate -e $TESTENV
matrix:
include:
- env: TOXENV=py36
python: '3.6-dev'
- env: TOXENV=py37
python: 'nightly'
allow_failures:
- env: TOXENV=py37
python: 'nightly'
script: tox --recreate
notifications:
irc:

AUTHORS

@ -6,6 +6,7 @@ Contributors include::
Abdeali JK
Abhijeet Kasurde
Ahn Ki-Wook
Alexander Johnson
Alexei Kozlenok
Anatoly Bubenkoff
Andreas Zeidler
@ -16,6 +17,7 @@ Antony Lee
Armin Rigo
Aron Curzon
Aviv Palivoda
Barney Gale
Ben Webb
Benjamin Peterson
Bernard Pratz
@ -36,16 +38,20 @@ Christopher Gilling
Daniel Grana
Daniel Hahler
Daniel Nuri
Daniel Wandschneider
Danielle Jenkins
Dave Hunt
David Díaz-Barquero
David Mohr
David Vierra
Denis Kirisov
Diego Russo
Dmitry Dygalo
Duncan Betts
Edison Gustavo Muenz
Edoardo Batini
Eduardo Schettino
Eli Boyarski
Elizaveta Shashkova
Endre Galaczi
Eric Hunsberger
@ -59,6 +65,7 @@ Georgy Dyuldin
Graham Horler
Greg Price
Grig Gheorghiu
Grigorii Eremeev (budulianin)
Guido Wesdorp
Harald Armin Massa
Ian Bicking
@ -68,6 +75,7 @@ Janne Vanhala
Jason R. Coombs
Javier Domingo Cansino
Javier Romero
Jeff Widman
John Towler
Jon Sonesen
Jordan Guymon
@ -78,7 +86,10 @@ Kale Kundert
Katarzyna Jachim
Kevin Cox
Lee Kamentsky
Lev Maximov
Loic Esteve
Lukas Bednar
Luke Murphy
Maciek Fijalkowski
Maho
Marc Schlaich
@ -88,6 +99,7 @@ Markus Unterwaditzer
Martijn Faassen
Martin K. Scherer
Martin Prusse
Mathieu Clabaut
Matt Bachmann
Matt Williams
Matthias Hafner
@ -95,17 +107,25 @@ mbyt
Michael Aquilina
Michael Birtwell
Michael Droettboom
Michael Seifert
Mike Lundy
Ned Batchelder
Neven Mundar
Nicolas Delaby
Oleg Pidsadnyi
Oliver Bestwalter
Omar Kohl
Omer Hadari
Patrick Hayes
Paweł Adamczak
Pieter Mulder
Piotr Banaszkiewicz
Punyashloka Biswal
Quentin Pradet
Ralf Schmitt
Ran Benita
Raphael Pierzina
Raquel Alegre
Roberto Polli
Romain Dorgueil
Roman Bolshakov
@ -126,6 +146,9 @@ Ted Xiao
Thomas Grainger
Tom Viner
Trevor Bekolay
Tyler Goodlet
Vasily Kuznetsov
Victor Uriarte
Vidar T. Fauske
Wouter van Ackooy
Xuecong Liao

File diff suppressed because it is too large.


@ -79,6 +79,16 @@ Pytest could always use more documentation. What exactly is needed?
You can also edit documentation files directly in the GitHub web interface,
without using a local copy. This can be convenient for small fixes.
.. note::
Build the documentation locally with the following command:
.. code:: bash
$ tox -e docs
The built documentation should be available in the ``doc/en/_build/``.
Where 'en' refers to the documentation language.
.. _submitplugin:
@ -199,13 +209,10 @@ but here is a simple overview:
You need to have Python 2.7 and 3.5 available in your system. Now
running tests is as simple as issuing this command::
$ python3 runtox.py -e linting,py27,py35
$ tox -e linting,py27,py35
This command will run tests via the "tox" tool against Python 2.7 and 3.5
and also perform "lint" coding-style checks. ``runtox.py`` is
a thin wrapper around ``tox`` which installs from a development package
index where newer (not yet released to PyPI) versions of dependencies
(especially ``py``) might be present.
and also perform "lint" coding-style checks.
#. You can now edit your local working copy.
@ -214,11 +221,11 @@ but here is a simple overview:
To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on
failure) to pytest you can do::
$ python3 runtox.py -e py27 -- --pdb
$ tox -e py27 -- --pdb
Or to only run tests in a particular test module on Python 3.5::
$ python3 runtox.py -e py35 -- testing/test_config.py
$ tox -e py35 -- testing/test_config.py
#. Commit and push once your tests pass and you are happy with your change(s)::


@ -9,24 +9,28 @@ include HOWTORELEASE.rst
include tox.ini
include setup.py
include .coveragerc
recursive-include scripts *.py
recursive-include scripts *.bat
include plugin-test.sh
include requirements-docs.txt
include runtox.py
include .coveragerc
recursive-include bench *.py
recursive-include extra *.py
graft testing
graft doc
prune doc/en/_build
exclude _pytest/impl
graft _pytest/vendored_packages
recursive-exclude * *.pyc *.pyo
recursive-exclude testing/.hypothesis *
recursive-exclude testing/freeze/~ *
recursive-exclude testing/freeze/build *
recursive-exclude testing/freeze/dist *
exclude appveyor/install.ps1
exclude appveyor.yml
exclude appveyor
exclude .travis.yml
prune .github


@ -24,31 +24,31 @@ An example of a simple test:
.. code-block:: python
# content of test_sample.py
def func(x):
def inc(x):
return x + 1
def test_answer():
assert func(3) == 5
assert inc(3) == 5
To execute it::
$ pytest
======= test session starts ========
============================= test session starts =============================
collected 1 items
test_sample.py F
======= FAILURES ========
_______ test_answer ________
================================== FAILURES ===================================
_________________________________ test_answer _________________________________
def test_answer():
> assert func(3) == 5
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = func(3)
E + where 4 = inc(3)
test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========
========================== 1 failed in 0.04 seconds ===========================
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://docs.pytest.org/en/latest/getting-started.html#our-first-test-run>`_ for more examples.
@ -89,7 +89,7 @@ Please use the `GitHub issue tracker <https://github.com/pytest-dev/pytest/issue
Changelog
---------
Consult the `Changelog <http://docs.pytest.org/en/latest/changelog.html>`_ page for fixes and enhancements of each version.
Consult the `Changelog <http://docs.pytest.org/en/latest/changelog.html>`__ page for fixes and enhancements of each version.
License


@ -1,2 +1,2 @@
#
__version__ = '3.0.3.dev0'
__version__ = '3.0.8.dev'


@ -87,6 +87,7 @@ class FastFilesCompleter:
completion.append(x[prefix_dir:])
return completion
if os.environ.get('_ARGCOMPLETE'):
try:
import argcomplete.completers


@ -1,6 +1,7 @@
import sys
from inspect import CO_VARARGS, CO_VARKEYWORDS
import re
from weakref import ref
import py
builtin_repr = repr
@ -12,6 +13,7 @@ if sys.version_info[0] >= 3:
else:
from ._py2traceback import format_exception_only
class Code(object):
""" wrapper around Python code objects """
def __init__(self, rawcode):
@ -28,6 +30,8 @@ class Code(object):
def __eq__(self, other):
return self.raw == other.raw
__hash__ = None
def __ne__(self, other):
return not self == other
@ -227,7 +231,7 @@ class TracebackEntry(object):
return False
if py.builtin.callable(tbh):
return tbh(self._excinfo)
return tbh(None if self._excinfo is None else self._excinfo())
else:
return tbh
@ -339,6 +343,7 @@ class Traceback(list):
l.append(entry.frame.f_locals)
return None
co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2',
'?', 'eval')
@ -347,6 +352,8 @@ class ExceptionInfo(object):
help for navigating the traceback.
"""
_striptext = ''
_assert_start_repr = "AssertionError(u\'assert " if sys.version_info[0] < 3 else "AssertionError(\'assert "
def __init__(self, tup=None, exprinfo=None):
import _pytest._code
if tup is None:
@ -354,8 +361,8 @@ class ExceptionInfo(object):
if exprinfo is None and isinstance(tup[1], AssertionError):
exprinfo = getattr(tup[1], 'msg', None)
if exprinfo is None:
exprinfo = py._builtin._totext(tup[1])
if exprinfo and exprinfo.startswith('assert '):
exprinfo = py.io.saferepr(tup[1])
if exprinfo and exprinfo.startswith(self._assert_start_repr):
self._striptext = 'AssertionError: '
self._excinfo = tup
#: the exception class
@ -367,7 +374,7 @@ class ExceptionInfo(object):
#: the exception type name
self.typename = self.type.__name__
#: the exception traceback (_pytest._code.Traceback instance)
self.traceback = _pytest._code.Traceback(self.tb, excinfo=self)
self.traceback = _pytest._code.Traceback(self.tb, excinfo=ref(self))
def __repr__(self):
return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback))
@ -620,16 +627,23 @@ class FormattedExcinfo(object):
e = excinfo.value
descr = None
while e is not None:
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
if excinfo:
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
else:
# fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None))
reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)]
if e.__cause__ is not None:
e = e.__cause__
excinfo = ExceptionInfo((type(e), e, e.__traceback__))
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'The above exception was the direct cause of the following exception:'
elif e.__context__ is not None:
e = e.__context__
excinfo = ExceptionInfo((type(e), e, e.__traceback__))
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'During handling of the above exception, another exception occurred:'
else:
e = None
@ -835,6 +849,7 @@ def getrawcode(obj, trycall=True):
return x
return obj
if sys.version_info[:2] >= (3, 5): # RecursionError introduced in 3.5
def is_recursion_error(excinfo):
return excinfo.errisinstance(RecursionError) # noqa


@ -4,7 +4,6 @@ from bisect import bisect_right
import sys
import inspect, tokenize
import py
from types import ModuleType
cpy_compile = compile
try:
@ -52,22 +51,21 @@ class Source(object):
return str(self) == other
return False
__hash__ = None
def __getitem__(self, key):
if isinstance(key, int):
return self.lines[key]
else:
if key.step not in (None, 1):
raise IndexError("cannot slice a Source with a step")
return self.__getslice__(key.start, key.stop)
newsource = Source()
newsource.lines = self.lines[key.start:key.stop]
return newsource
def __len__(self):
return len(self.lines)
def __getslice__(self, start, end):
newsource = Source()
newsource.lines = self.lines[start:end]
return newsource
def strip(self):
""" return new source object with trailing
and leading blank lines removed.
@ -193,14 +191,6 @@ class Source(object):
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
if sys.version_info[0] >= 3:
# XXX py3's inspect.getsourcefile() checks for a module
# and a pep302 __loader__ ... we don't have a module
# at code compile-time so we need to fake it here
m = ModuleType("_pycodecompile_pseudo_module")
py.std.inspect.modulesbyfile[filename] = None
py.std.sys.modules[None] = m
m.__loader__ = 1
py.std.linecache.cache[filename] = (1, None, lines, filename)
return co
@ -266,6 +256,7 @@ def findsource(obj):
source.lines = [line.rstrip() for line in sourcelines]
return source, lineno
def getsource(obj, **kwargs):
import _pytest._code
obj = _pytest._code.getrawcode(obj)
@ -276,6 +267,7 @@ def getsource(obj, **kwargs):
assert isinstance(strsrc, str)
return Source(strsrc, **kwargs)
def deindent(lines, offset=None):
if offset is None:
for line in lines:
@ -289,6 +281,7 @@ def deindent(lines, offset=None):
if offset == 0:
return list(lines)
newlines = []
def readline_generator(lines):
for line in lines:
yield line + '\n'


@ -29,7 +29,7 @@ def pytest_namespace():
def register_assert_rewrite(*names):
"""Register a module name to be rewritten on import.
"""Register one or more module names to be rewritten on import.
This function will make sure that this module or all modules inside
the package will get their assert statements rewritten.
@ -80,10 +80,12 @@ def install_importhook(config):
config._assertstate.hook = hook = rewrite.AssertionRewritingHook(config)
sys.meta_path.insert(0, hook)
config._assertstate.trace('installed rewrite import hook')
def undo():
hook = config._assertstate.hook
if hook is not None and hook in sys.meta_path:
sys.meta_path.remove(hook)
config.add_cleanup(undo)
return hook
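For context, a minimal sketch of how the documented ``register_assert_rewrite`` API is typically called from a plugin or ``conftest.py`` (the module name here is illustrative, not from this commit):

    import pytest

    # ask pytest to rewrite asserts in a helper package before it is imported
    pytest.register_assert_rewrite("myproject.test_helpers")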


@ -51,6 +51,7 @@ class AssertionRewritingHook(object):
self.fnpats = config.getini("python_files")
self.session = None
self.modules = {}
self._rewritten_names = set()
self._register_with_pkg_resources()
self._must_rewrite = set()
@ -79,7 +80,12 @@ class AssertionRewritingHook(object):
tp = desc[2]
if tp == imp.PY_COMPILED:
if hasattr(imp, "source_from_cache"):
fn = imp.source_from_cache(fn)
try:
fn = imp.source_from_cache(fn)
except ValueError:
# Python 3 doesn't like orphaned but still-importable
# .pyc files.
fn = fn[:-1]
else:
fn = fn[:-1]
elif tp != imp.PY_SOURCE:
@ -92,6 +98,8 @@ class AssertionRewritingHook(object):
if not self._should_rewrite(name, fn_pypath, state):
return None
self._rewritten_names.add(name)
# The requested module looks like a test file, so rewrite it. This is
# the most magical part of the process: load the source, rewrite the
# asserts, and load the rewritten source. We also cache the rewritten
@ -178,14 +186,15 @@ class AssertionRewritingHook(object):
"""
already_imported = set(names).intersection(set(sys.modules))
if already_imported:
self._warn_already_imported(already_imported)
for name in already_imported:
if name not in self._rewritten_names:
self._warn_already_imported(name)
self._must_rewrite.update(names)
def _warn_already_imported(self, names):
def _warn_already_imported(self, name):
self.config.warn(
'P1',
'Modules are already imported so can not be re-written: %s' %
','.join(names))
'Module already imported so can not be re-written: %s' % name)
def load_module(self, name):
# If there is an existing module object named 'fullname' in
@ -206,7 +215,8 @@ class AssertionRewritingHook(object):
mod.__loader__ = self
py.builtin.exec_(co, mod.__dict__)
except:
del sys.modules[name]
if name in sys.modules:
del sys.modules[name]
raise
return sys.modules[name]
@ -271,6 +281,7 @@ def _write_pyc(state, co, source_stat, pyc):
fp.close()
return True
RN = "\r\n".encode("utf-8")
N = "\n".encode("utf-8")


@ -105,7 +105,7 @@ except NameError:
def assertrepr_compare(config, op, left, right):
"""Return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=int(width/2))
left_repr = py.io.saferepr(left, maxsize=int(width//2))
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
summary = u('%s %s %s') % (ecu(left_repr), op, ecu(right_repr))


@ -1,7 +1,7 @@
"""
merged implementation of the cache provider
the name cache was not choosen to ensure pluggy automatically
the name cache was not chosen to ensure pluggy automatically
ignores the external pytest-cache
"""


@ -13,6 +13,7 @@ import py
import pytest
from py.io import TextIO
from io import UnsupportedOperation
unicode = py.builtin.text
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
@ -152,6 +153,7 @@ class CaptureManager:
item.add_report_section(when, "stdout", out)
item.add_report_section(when, "stderr", err)
error_capsysfderror = "cannot use capsys and capfd at the same time"
@ -447,7 +449,7 @@ class DontReadFromInput:
__iter__ = read
def fileno(self):
raise ValueError("redirected Stdin is pseudofile, has no fileno()")
raise UnsupportedOperation("redirected Stdin is pseudofile, has no fileno()")
def isatty(self):
return False


@ -19,6 +19,7 @@ except ImportError: # pragma: no cover
# Only available in Python 3.4+ or as a backport
enum = None
_PY3 = sys.version_info > (3, 0)
_PY2 = not _PY3
@ -26,6 +27,9 @@ _PY2 = not _PY3
NoneType = type(None)
NOTSET = object()
PY36 = sys.version_info[:2] >= (3, 6)
MODULE_NOT_FOUND_ERROR = 'ModuleNotFoundError' if PY36 else 'ImportError'
if hasattr(inspect, 'signature'):
def _format_args(func):
return str(inspect.signature(func))
@ -42,11 +46,18 @@ REGEX_TYPE = type(re.compile(''))
def is_generator(func):
try:
return _pytest._code.getrawcode(func).co_flags & 32 # generator function
except AttributeError: # builtin functions have no bytecode
# assume them to not be generators
return False
genfunc = inspect.isgeneratorfunction(func)
return genfunc and not iscoroutinefunction(func)
def iscoroutinefunction(func):
"""Return True if func is a decorated coroutine function.
Note: copied and modified from Python 3.5's builtin couroutines.py to avoid import asyncio directly,
which in turns also initializes the "logging" module as side-effect (see issue #8).
"""
return (getattr(func, '_is_coroutine', False) or
(hasattr(inspect, 'iscoroutinefunction') and inspect.iscoroutinefunction(func)))
def getlocation(function, curdir):
@ -213,4 +224,20 @@ def _is_unittest_unexpected_success_a_failure():
Changed in version 3.4: Returns False if there were any
unexpectedSuccesses from tests marked with the expectedFailure() decorator.
"""
return sys.version_info >= (3, 4)
return sys.version_info >= (3, 4)
if _PY3:
def safe_str(v):
"""returns v as string"""
return str(v)
else:
def safe_str(v):
"""returns v as string, converting to ascii if necessary"""
try:
return str(v)
except UnicodeError:
if not isinstance(v, unicode):
v = unicode(v)
errors = 'replace'
return v.encode('utf-8', errors)
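A rough behaviour sketch (assumed): on Python 3 ``safe_str`` is simply ``str``; on Python 2, text that cannot be encoded to ASCII comes back UTF-8 encoded instead of raising:

    safe_str(u"caf\xe9")    # Python 3: 'café'; Python 2: 'caf\xc3\xa9'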


@ -5,7 +5,6 @@ import traceback
import types
import warnings
import pkg_resources
import py
# DON't import pytest here because it causes import cycle troubles
import sys, os
@ -13,6 +12,7 @@ import _pytest._code
import _pytest.hookspec # the extension point definitions
import _pytest.assertion
from _pytest._pluggy import PluginManager, HookimplMarker, HookspecMarker
from _pytest.compat import safe_str
hookimpl = HookimplMarker("pytest")
hookspec = HookspecMarker("pytest")
@ -65,9 +65,33 @@ def main(args=None, plugins=None):
class cmdline: # compatibility namespace
main = staticmethod(main)
class UsageError(Exception):
""" error in pytest usage or invocation"""
def filename_arg(path, optname):
""" Argparse type validator for filename arguments.
:path: path of filename
:optname: name of the option
"""
if os.path.isdir(path):
raise UsageError("{0} must be a filename, given: {1}".format(optname, path))
return path
def directory_arg(path, optname):
"""Argparse type validator for directory arguments.
:path: path of directory
:optname: name of the option
"""
if not os.path.isdir(path):
raise UsageError("{0} must be a directory, given: {1}".format(optname, path))
return path
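A rough sketch of what these two validators accept and reject (the paths are illustrative):

    filename_arg("results.xml", "--junitxml")    # returns "results.xml" unless it is an existing directory
    filename_arg("build", "--junitxml")          # raises UsageError when "build" is a directory
    directory_arg("tests", "--confcutdir")       # returns "tests" when it is a directory, else raises UsageError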
_preinit = []
default_plugins = (
@ -378,35 +402,30 @@ class PytestPluginManager(PluginManager):
self._import_plugin_specs(os.environ.get("PYTEST_PLUGINS"))
def consider_module(self, mod):
plugins = getattr(mod, 'pytest_plugins', [])
if isinstance(plugins, str):
plugins = [plugins]
self.rewrite_hook.mark_rewrite(*plugins)
self._import_plugin_specs(plugins)
self._import_plugin_specs(getattr(mod, 'pytest_plugins', []))
def _import_plugin_specs(self, spec):
if spec:
if isinstance(spec, str):
spec = spec.split(",")
for import_spec in spec:
self.import_plugin(import_spec)
plugins = _get_plugin_specs_as_list(spec)
for import_spec in plugins:
self.import_plugin(import_spec)
def import_plugin(self, modname):
# most often modname refers to builtin modules, e.g. "pytester",
# "terminal" or "capture". Those plugins are registered under their
# basename for historic purposes but must be imported with the
# _pytest prefix.
assert isinstance(modname, str)
assert isinstance(modname, str), "module name as string required, got %r" % modname
if self.get_plugin(modname) is not None:
return
if modname in builtin_plugins:
importspec = "_pytest." + modname
else:
importspec = modname
self.rewrite_hook.mark_rewrite(importspec)
try:
__import__(importspec)
except ImportError as e:
new_exc = ImportError('Error importing plugin "%s": %s' % (modname, e))
new_exc = ImportError('Error importing plugin "%s": %s' % (modname, safe_str(e.args[0])))
# copy over name and path attributes
for attr in ('name', 'path'):
if hasattr(e, attr):
@ -423,6 +442,24 @@ class PytestPluginManager(PluginManager):
self.consider_module(mod)
def _get_plugin_specs_as_list(specs):
"""
Parses a list of "plugin specs" and returns a list of plugin names.
Plugin specs can be given as a list of strings separated by "," or already as a list/tuple in
which case it is returned as a list. Specs can also be `None` in which case an
empty list is returned.
"""
if specs is not None:
if isinstance(specs, str):
specs = specs.split(',') if specs else []
if not isinstance(specs, (list, tuple)):
raise UsageError("Plugin specs must be a ','-separated string or a "
"list/tuple of strings for plugin names. Given: %r" % specs)
return list(specs)
return []
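Illustrative inputs and outputs for this helper, assumed from the docstring and code above:

    _get_plugin_specs_as_list(None)            # -> []
    _get_plugin_specs_as_list("p1,p2")         # -> ["p1", "p2"]
    _get_plugin_specs_as_list(["p1", "p2"])    # -> ["p1", "p2"]
    _get_plugin_specs_as_list({"p1": 1})       # -> raises UsageError (not a str, list or tuple)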
class Parser:
""" Parser for command line arguments and ini-file values.
@ -594,7 +631,7 @@ class Argument:
if typ == 'choice':
warnings.warn(
'type argument to addoption() is a string %r.'
' For parsearg this is optional and when supplied '
' For parsearg this is optional and when supplied'
' should be a type.'
' (options: %s)' % (typ, names),
DeprecationWarning,
@ -793,7 +830,7 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
if len(option) == 2 or option[2] == ' ':
return_list.append(option)
if option[2:] == short_long.get(option.replace('-', '')):
return_list.append(option.replace(' ', '='))
return_list.append(option.replace(' ', '=', 1))
action._formatted_action_invocation = ', '.join(return_list)
return action._formatted_action_invocation
@ -818,9 +855,11 @@ class Notset:
def __repr__(self):
return "<NOTSET>"
notset = Notset()
FILE_OR_DIR = 'file_or_dir'
class Config(object):
""" access to configuration values, pluginmanager and plugin hooks. """
@ -838,14 +877,17 @@ class Config(object):
self.trace = self.pluginmanager.trace.root.get("config")
self.hook = self.pluginmanager.hook
self._inicache = {}
self._override_ini = ()
self._opt2dest = {}
self._cleanup = []
self._warn = self.pluginmanager._warn
self.pluginmanager.register(self, "pytestconfig")
self._configured = False
def do_setns(dic):
import pytest
setns(pytest, dic)
self.hook.pytest_namespace.call_historic(do_setns, {})
self.hook.pytest_addoption.call_historic(kwargs=dict(parser=self._parser))
@ -936,6 +978,7 @@ class Config(object):
self.invocation_dir = py.path.local()
self._parser.addini('addopts', 'extra command line options', 'args')
self._parser.addini('minversion', 'minimally required pytest version')
self._override_ini = ns.override_ini or ()
def _consider_importhook(self, args, entrypoint_name):
"""Install the PEP 302 import hook if using assertion re-writing.
@ -952,6 +995,7 @@ class Config(object):
except SystemError:
mode = 'plain'
else:
import pkg_resources
self.pluginmanager.rewrite_hook = hook
for entrypoint in pkg_resources.iter_entry_points('pytest11'):
# 'RECORD' available for plugins installed normally (pip install)
@ -1000,6 +1044,7 @@ class Config(object):
self.pluginmanager.load_setuptools_entrypoints(entrypoint_name)
self.pluginmanager.consider_env()
self.known_args_namespace = ns = self._parser.parse_known_args(args, namespace=self.option.copy())
confcutdir = self.known_args_namespace.confcutdir
if self.known_args_namespace.confcutdir is None and self.inifile:
confcutdir = py.path.local(self.inifile).dirname
self.known_args_namespace.confcutdir = confcutdir
@ -1116,12 +1161,14 @@ class Config(object):
# and -o foo1=bar1 -o foo2=bar2 options
# always use the last item if multiple value set for same ini-name,
# e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2
if self.getoption("override_ini", None):
for ini_config_list in self.option.override_ini:
for ini_config in ini_config_list:
for ini_config_list in self._override_ini:
for ini_config in ini_config_list:
try:
(key, user_ini_value) = ini_config.split("=", 1)
if key == name:
value = user_ini_value
except ValueError:
raise UsageError("-o/--override-ini expects option=value style.")
if key == name:
value = user_ini_value
return value
def getoption(self, name, default=notset, skip=False):
@ -1195,25 +1242,20 @@ def getcfg(args, warnfunc=None):
return None, None, None
def get_common_ancestor(args):
# args are what we get after early command line parsing (usually
# strings, but can be py.path.local objects as well)
def get_common_ancestor(paths):
common_ancestor = None
for arg in args:
if str(arg)[0] == "-":
continue
p = py.path.local(arg)
if not p.exists():
for path in paths:
if not path.exists():
continue
if common_ancestor is None:
common_ancestor = p
common_ancestor = path
else:
if p.relto(common_ancestor) or p == common_ancestor:
if path.relto(common_ancestor) or path == common_ancestor:
continue
elif common_ancestor.relto(p):
common_ancestor = p
elif common_ancestor.relto(path):
common_ancestor = path
else:
shared = p.common(common_ancestor)
shared = path.common(common_ancestor)
if shared is not None:
common_ancestor = shared
if common_ancestor is None:
@ -1224,9 +1266,29 @@ def get_common_ancestor(args):
def get_dirs_from_args(args):
return [d for d in (py.path.local(x) for x in args
if not str(x).startswith("-"))
if d.exists()]
def is_option(x):
return str(x).startswith('-')
def get_file_part_from_node_id(x):
return str(x).split('::')[0]
def get_dir_from_path(path):
if path.isdir():
return path
return py.path.local(path.dirname)
# These look like paths but may not exist
possible_paths = (
py.path.local(get_file_part_from_node_id(arg))
for arg in args
if not is_option(arg)
)
return [
get_dir_from_path(path)
for path in possible_paths
if path.exists()
]
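Illustrative behaviour of the rewritten helper, assuming the referenced paths exist:

    get_dirs_from_args(["-q", "testing/test_config.py::test_basic"])
    # "-q" is skipped as an option, the "::test_basic" node-id part is dropped,
    # and the parent directory of the file is returned, roughly [local("testing")]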
def determine_setup(inifile, args, warnfunc=None):


@ -31,10 +31,12 @@ def pytest_configure(config):
pytestPDB._pdb_cls = pdb_cls
old = (pdb.set_trace, pytestPDB._pluginmanager)
def fin():
pdb.set_trace, pytestPDB._pluginmanager = old
pytestPDB._config = None
pytestPDB._pdb_cls = pdb.Pdb
pdb.set_trace = pytest.set_trace
pytestPDB._pluginmanager = config.pluginmanager
pytestPDB._config = config


@ -14,6 +14,7 @@ from _pytest.compat import (
getfslineno, get_real_func,
is_generator, isclass, getimfunc,
getlocation, getfuncargnames,
safe_getattr,
)
def pytest_sessionstart(session):
@ -32,11 +33,13 @@ scope2props["function"] = scope2props["instance"] + ("function", "keywords")
def scopeproperty(name=None, doc=None):
def decoratescope(func):
scopename = name or func.__name__
def provide(self):
if func.__name__ in scope2props[self.scope]:
return func(self)
raise AttributeError("%s not available in %s-scoped context" % (
scopename, self.scope))
return property(provide, None, None, func.__doc__)
return decoratescope
@ -122,8 +125,6 @@ def getfixturemarker(obj):
exceptions."""
try:
return getattr(obj, "_pytestfixturefunction", None)
except KeyboardInterrupt:
raise
except Exception:
# some objects raise errors like request (from flask import request)
# we don't expect them to be fixture functions
@ -599,12 +600,29 @@ class ScopeMismatchError(Exception):
which has a lower scope (e.g. a Session one calls a function one)
"""
scopes = "session module class function".split()
scopenum_function = scopes.index("function")
def scopemismatch(currentscope, newscope):
return scopes.index(newscope) > scopes.index(currentscope)
def scope2index(scope, descr, where=None):
"""Look up the index of ``scope`` and raise a descriptive value error
if not defined.
"""
try:
return scopes.index(scope)
except ValueError:
raise ValueError(
"{0} {1}has an unsupported scope value '{2}'".format(
descr, 'from {0} '.format(where) if where else '',
scope)
)
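A short sketch of how the new helper behaves (scope names come from the ``scopes`` list above; the fixture name is illustrative):

    scope2index("session", descr="fixture db")     # -> 0
    scope2index("function", descr="fixture db")    # -> 3
    scope2index("sesion", descr="fixture db", where="conftest.py")
    # -> ValueError: fixture db from conftest.py has an unsupported scope value 'sesion'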
class FixtureLookupError(LookupError):
""" could not return a requested Fixture (missing or invalid). """
def __init__(self, argname, request, msg=None):
@ -703,6 +721,7 @@ def call_fixture_func(fixturefunc, request, kwargs):
res = fixturefunc(**kwargs)
return res
class FixtureDef:
""" A container for a factory definition. """
def __init__(self, fixturemanager, baseid, argname, func, scope, params,
@ -713,7 +732,11 @@ class FixtureDef:
self.func = func
self.argname = argname
self.scope = scope
self.scopenum = scopes.index(scope or "function")
self.scopenum = scope2index(
scope or "function",
descr='fixture {0}'.format(func.__name__),
where=baseid
)
self.params = params
startindex = unittest and 1 or None
self.argnames = getfuncargnames(func, startindex=startindex)
@ -1044,7 +1067,9 @@ class FixtureManager:
self._holderobjseen.add(holderobj)
autousenames = []
for name in dir(holderobj):
obj = getattr(holderobj, name, None)
# The attribute can be an arbitrary descriptor, so the attribute
# access below can raise. safe_getatt() ignores such exceptions.
obj = safe_getattr(holderobj, name, None)
# fixture functions have a pytest_funcarg__ prefix (pre-2.3 style)
# or are "@pytest.fixture" marked
marker = getfixturemarker(obj)


@ -23,7 +23,7 @@ def pytest_addoption(parser):
group._addoption(
'-o', '--override-ini', nargs='*', dest="override_ini",
action="append",
help="override config option, e.g. `-o xfail_strict=True`.")
help="override config option with option=value style, e.g. `-o xfail_strict=True`.")
@pytest.hookimpl(hookwrapper=True)
@ -41,12 +41,14 @@ def pytest_cmdline_parse():
config.trace.root.setwriter(debugfile.write)
undo_tracing = config.pluginmanager.enable_tracing()
sys.stderr.write("writing pytestdebug information to %s\n" % path)
def unset_tracing():
debugfile.close()
sys.stderr.write("wrote pytestdebug information to %s\n" %
debugfile.name)
config.trace.root.setwriter(None)
undo_tracing()
config.add_cleanup(unset_tracing)
def pytest_cmdline_main(config):
@ -71,8 +73,8 @@ def showhelp(config):
tw.write(config._parser.optparser.format_help())
tw.line()
tw.line()
tw.line("[pytest] ini-options in the next "
"pytest.ini|tox.ini|setup.cfg file:")
tw.line("[pytest] ini-options in the first "
"pytest.ini|tox.ini|setup.cfg file found:")
tw.line()
for name in config._parser._ininames:


@ -246,7 +246,7 @@ def pytest_unconfigure(config):
# -------------------------------------------------------------------------
# hooks for customising the assert methods
# hooks for customizing the assert methods
# -------------------------------------------------------------------------
def pytest_assertrepr_compare(config, op, left, right):
@ -255,7 +255,7 @@ def pytest_assertrepr_compare(config, op, left, right):
Return None for no custom explanation, otherwise return a list
of strings. The strings will be joined by newlines but any newlines
*in* a string will be escaped. Note that all but the first line will
be indented sligthly, the intention is for the first line to be a summary.
be indented slightly, the intention is for the first line to be a summary.
"""
# -------------------------------------------------------------------------
@ -263,7 +263,14 @@ def pytest_assertrepr_compare(config, op, left, right):
# -------------------------------------------------------------------------
def pytest_report_header(config, startdir):
""" return a string to be displayed as header info for terminal reporting."""
""" return a string to be displayed as header info for terminal reporting.
.. note::
This function should be implemented only in plugins or ``conftest.py``
files situated at the tests root directory due to how pytest
:ref:`discovers plugins during startup <pluginorder>`.
"""
@hookspec(firstresult=True)
def pytest_report_teststatus(report):


@ -8,12 +8,14 @@ Based on initial code from Ross Lawley.
# Output conforms to https://github.com/jenkinsci/xunit-plugin/blob/master/
# src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
import functools
import py
import os
import re
import sys
import time
import pytest
from _pytest.config import filename_arg
# Python 2.X and 3.X compatibility
if sys.version_info[0] < 3:
@ -27,6 +29,7 @@ else:
class Junit(py.xml.Namespace):
pass
# We need to get the subset of the invalid unicode ranges according to
# XML 1.0 which are valid in this python build. Hence we calculate
# this dynamically instead of hardcoding it. The spec range of valid
@ -116,7 +119,7 @@ class _NodeReporter(object):
node = kind(data, message=message)
self.append(node)
def _write_captured_output(self, report):
def write_captured_output(self, report):
for capname in ('out', 'err'):
content = getattr(report, 'capstd' + capname)
if content:
@ -125,7 +128,6 @@ class _NodeReporter(object):
def append_pass(self, report):
self.add_stats('passed')
self._write_captured_output(report)
def append_failure(self, report):
# msg = str(report.longrepr.reprtraceback.extraline)
@ -144,7 +146,6 @@ class _NodeReporter(object):
fail = Junit.failure(message=message)
fail.append(bin_xml_escape(report.longrepr))
self.append(fail)
self._write_captured_output(report)
def append_collect_error(self, report):
# msg = str(report.longrepr.reprtraceback.extraline)
@ -156,9 +157,12 @@ class _NodeReporter(object):
Junit.skipped, "collection skipped", report.longrepr)
def append_error(self, report):
if getattr(report, 'when', None) == 'teardown':
msg = "test teardown failure"
else:
msg = "test setup failure"
self._add_simple(
Junit.error, "test setup failure", report.longrepr)
self._write_captured_output(report)
Junit.error, msg, report.longrepr)
def append_skipped(self, report):
if hasattr(report, "wasxfail"):
@ -173,7 +177,7 @@ class _NodeReporter(object):
Junit.skipped("%s:%s: %s" % (filename, lineno, skipreason),
type="pytest.skip",
message=skipreason))
self._write_captured_output(report)
self.write_captured_output(report)
def finalize(self):
data = self.to_xml().unicode(indent=0)
@ -209,6 +213,7 @@ def pytest_addoption(parser):
action="store",
dest="xmlpath",
metavar="path",
type=functools.partial(filename_arg, optname="--junitxml"),
default=None,
help="create junit-xml style report file at given path.")
group.addoption(
@ -337,6 +342,8 @@ class LogXML(object):
reporter.append_skipped(report)
self.update_testcase_duration(report)
if report.when == "teardown":
reporter = self._opentestcase(report)
reporter.write_captured_output(report)
self.finalize(report)
def update_testcase_duration(self, report):


@ -1,4 +1,5 @@
""" core implementation of testing process: init, session, runtest loop. """
import functools
import os
import sys
@ -11,6 +12,7 @@ try:
except ImportError:
from UserDict import DictMixin as MappingMixin
from _pytest.config import directory_arg
from _pytest.runner import collect_one_node
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
@ -58,7 +60,7 @@ def pytest_addoption(parser):
# when changing this to --conf-cut-dir, config.py Conftest.setinitial
# needs upgrading as well
group.addoption('--confcutdir', dest="confcutdir", default=None,
metavar="dir",
metavar="dir", type=functools.partial(directory_arg, optname="--confcutdir"),
help="only load conftest.py's relative to specified dir.")
group.addoption('--noconftest', action="store_true",
dest="noconftest", default=False,
@ -79,7 +81,7 @@ def pytest_namespace():
def pytest_configure(config):
pytest.config = config # compatibiltiy
pytest.config = config # compatibility
def wrap_session(config, doit):
@ -188,12 +190,22 @@ class FSHookProxy:
self.__dict__[name] = x
return x
def compatproperty(name):
def fget(self):
# deprecated - use pytest.name
return getattr(pytest, name)
class _CompatProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(pytest, self.name)
return property(fget)
class NodeKeywords(MappingMixin):
def __init__(self, node):
@ -265,19 +277,23 @@ class Node(object):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = compatproperty("Module")
Class = compatproperty("Class")
Instance = compatproperty("Instance")
Function = compatproperty("Function")
File = compatproperty("File")
Item = compatproperty("Item")
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
cls = getattr(self, name)
if cls != getattr(pytest, name):
py.log._apiwarn("2.0", "use of node.%s is deprecated, "
"use pytest_pycollect_makeitem(...) to create custom "
"collection nodes" % name)
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(pytest, name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
@ -535,7 +551,6 @@ class Session(FSCollector):
def __init__(self, config):
FSCollector.__init__(self, config.rootdir, parent=None,
config=config, session=self)
self._fs2hookproxy = {}
self.testsfailed = 0
self.testscollected = 0
self.shouldstop = False
@ -566,23 +581,18 @@ class Session(FSCollector):
return path in self._initialpaths
def gethookproxy(self, fspath):
try:
return self._fs2hookproxy[fspath]
except KeyError:
# check if we have the common case of running
# hooks with all conftest.py filesall conftest.py
pm = self.config.pluginmanager
my_conftestmodules = pm._getconftestmodules(fspath)
remove_mods = pm._conftest_plugins.difference(my_conftestmodules)
if remove_mods:
# one or more conftests are not in use at this fspath
proxy = FSHookProxy(fspath, pm, remove_mods)
else:
# all plugis are active for this fspath
proxy = self.config.hook
self._fs2hookproxy[fspath] = proxy
return proxy
# check if we have the common case of running
# hooks with all conftest.py filesall conftest.py
pm = self.config.pluginmanager
my_conftestmodules = pm._getconftestmodules(fspath)
remove_mods = pm._conftest_plugins.difference(my_conftestmodules)
if remove_mods:
# one or more conftests are not in use at this fspath
proxy = FSHookProxy(fspath, pm, remove_mods)
else:
# all plugis are active for this fspath
proxy = self.config.hook
return proxy
def perform_collect(self, args=None, genitems=True):
hook = self.config.hook
@ -704,10 +714,9 @@ class Session(FSCollector):
path = self.config.invocation_dir.join(relpath, abs=True)
if not path.check():
if self.config.option.pyargs:
msg = "file or package not found: "
raise pytest.UsageError("file or package not found: " + arg + " (missing __init__.py?)")
else:
msg = "file not found: "
raise pytest.UsageError(msg + arg)
raise pytest.UsageError("file not found: " + arg)
parts[0] = path
return parts


@ -54,6 +54,8 @@ def pytest_cmdline_main(config):
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
@ -64,7 +66,7 @@ def pytest_collection_modifyitems(items, config):
return
# pytest used to allow "-" for negating
# but today we just allow "-" at the beginning, use "not" instead
# we probably remove "-" alltogether soon
# we probably remove "-" altogether soon
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False


@ -11,7 +11,7 @@ RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$")
@pytest.fixture
def monkeypatch(request):
def monkeypatch():
"""The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::
@ -30,8 +30,8 @@ def monkeypatch(request):
will be raised if the set/deletion operation has no target.
"""
mpatch = MonkeyPatch()
request.addfinalizer(mpatch.undo)
return mpatch
yield mpatch
mpatch.undo()
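The fixture now uses the standard setup/teardown-around-``yield`` pattern; a generic sketch of that pattern (the names are illustrative):

    @pytest.fixture
    def resource():
        handle = acquire_resource()   # setup
        yield handle                  # the test body runs here
        handle.close()                # teardown runs after the test finishes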
def resolve(name):


@ -11,6 +11,7 @@ def pytest_addoption(parser):
choices=['failed', 'all'],
help="send failed|all info to bpaste.net pastebin service.")
@pytest.hookimpl(trylast=True)
def pytest_configure(config):
import py
@ -23,13 +24,16 @@ def pytest_configure(config):
# pastebin file will be utf-8 encoded binary file
config._pastebinfile = tempfile.TemporaryFile('w+b')
oldwrite = tr._tw.write
def tee_write(s, **kwargs):
oldwrite(s, **kwargs)
if py.builtin._istext(s):
s = s.encode('utf-8')
config._pastebinfile.write(s)
tr._tw.write = tee_write
def pytest_unconfigure(config):
if hasattr(config, '_pastebinfile'):
# get terminal contents and delete file
@ -45,6 +49,7 @@ def pytest_unconfigure(config):
pastebinurl = create_new_paste(sessionlog)
tr.write_line("pastebin session-log: %s\n" % pastebinurl)
def create_new_paste(contents):
"""
Creates a new paste using bpaste.net service.
@ -72,6 +77,7 @@ def create_new_paste(contents):
else:
return 'bad response: ' + response
def pytest_terminal_summary(terminalreporter):
import _pytest.config
if terminalreporter.config.option.pastebin != "failed":


@ -332,7 +332,7 @@ def testdir(request, tmpdir_factory):
return Testdir(request, tmpdir_factory)
rex_outcome = re.compile("(\d+) ([\w-]+)")
rex_outcome = re.compile(r"(\d+) ([\w-]+)")
class RunResult:
"""The result of running a command.
@ -367,6 +367,7 @@ class RunResult:
for num, cat in outcomes:
d[cat] = int(num)
return d
raise ValueError("Pytest terminal report not found")
def assert_outcomes(self, passed=0, skipped=0, failed=0):
""" assert that the specified outcomes appear with the respective
@ -446,9 +447,9 @@ class Testdir:
the module is re-imported.
"""
for name in set(sys.modules).difference(self._savemodulekeys):
# it seems zope.interfaces is keeping some state
# (used by twisted related tests)
if name != "zope.interface":
# zope.interface (used by twisted-related tests) keeps internal
# state and can't be deleted
if not name.startswith("zope.interface"):
del sys.modules[name]
def make_hook_recorder(self, pluginmanager):
@ -478,11 +479,14 @@ class Testdir:
ret = None
for name, value in items:
p = self.tmpdir.join(name).new(ext=ext)
p.dirpath().ensure_dir()
source = Source(value)
def my_totext(s, encoding="utf-8"):
if py.builtin._isbytes(s):
s = py.builtin._totext(s, encoding=encoding)
return s
source_unicode = "\n".join([my_totext(line) for line in source.lines])
source = py.builtin._totext(source_unicode)
content = source.strip().encode("utf-8") # + "\n"
@ -562,7 +566,7 @@ class Testdir:
def mkpydir(self, name):
"""Create a new python package.
This creates a (sub)direcotry with an empty ``__init__.py``
This creates a (sub)directory with an empty ``__init__.py``
file so that is recognised as a python package.
"""
@ -657,7 +661,7 @@ class Testdir:
def inline_genitems(self, *args):
"""Run ``pytest.main(['--collectonly'])`` in-process.
Retuns a tuple of the collected items and a
Returns a tuple of the collected items and a
:py:class:`HookRecorder` instance.
This runs the :py:func:`pytest.main` function to run all of
@ -692,12 +696,15 @@ class Testdir:
# warning which will trigger to say they can no longer be
# re-written, which is fine as they are already re-written.
orig_warn = AssertionRewritingHook._warn_already_imported
def revert():
AssertionRewritingHook._warn_already_imported = orig_warn
self.request.addfinalizer(revert)
AssertionRewritingHook._warn_already_imported = lambda *a: None
rec = []
class Collect:
def pytest_configure(x, config):
rec.append(self.make_hook_recorder(config.pluginmanager))
@ -732,10 +739,13 @@ class Testdir:
try:
reprec = self.inline_run(*args, **kwargs)
except SystemExit as e:
class reprec:
ret = e.args[0]
except Exception:
traceback.print_exc()
class reprec:
ret = 3
finally:
@ -847,7 +857,7 @@ class Testdir:
:py:meth:`parseconfigure`.
:param withinit: Whether to also write a ``__init__.py`` file
to the temporarly directory to ensure it is a package.
to the temporary directory to ensure it is a package.
"""
kw = {self.request.function.__name__: Source(source).strip()}
@ -1002,8 +1012,6 @@ class Testdir:
pexpect = pytest.importorskip("pexpect", "3.0")
if hasattr(sys, 'pypy_version_info') and '64' in platform.machine():
pytest.skip("pypy-64 bit not supported")
if sys.platform == "darwin":
pytest.xfail("pexpect does not work reliably on darwin?!")
if sys.platform.startswith("freebsd"):
pytest.xfail("pexpect does not work reliably on freebsd")
logfile = self.tmpdir.join("spawn.out").open("wb")


@ -19,14 +19,19 @@ from _pytest.compat import (
isclass, isfunction, is_generator, _escape_strings,
REGEX_TYPE, STRING_TYPES, NoneType, NOTSET,
get_real_func, getfslineno, safe_getattr,
getlocation, enum,
safe_str, getlocation, enum,
)
cutdir2 = py.path.local(_pytest.__file__).dirpath()
cutdir1 = py.path.local(pluggy.__file__.rstrip("oc"))
cutdir2 = py.path.local(_pytest.__file__).dirpath()
cutdir3 = py.path.local(py.__file__).dirpath()
def filter_traceback(entry):
"""Return True if a TracebackEntry instance should be removed from tracebacks:
* dynamically generated code (no code to show up for it);
* internal traceback from pytest or its internal libraries, py and pluggy.
"""
# entry.path might sometimes return a str object when the entry
# points to dynamically generated code
# see https://bitbucket.org/pytest-dev/py/issues/71
@ -37,7 +42,7 @@ def filter_traceback(entry):
# entry.path might point to an inexisting file, in which case it will
# alsso return a str object. see #1133
p = py.path.local(entry.path)
return p != cutdir1 and not p.relto(cutdir2)
return p != cutdir1 and not p.relto(cutdir2) and not p.relto(cutdir3)
@ -169,7 +174,7 @@ def pytest_pycollect_makeitem(collector, name, obj):
outcome = yield
res = outcome.get_result()
if res is not None:
raise StopIteration
return
# nothing was collected elsewhere, let's do it here
if isclass(obj):
if collector.istestclass(obj, name):
@ -205,14 +210,16 @@ class PyobjContext(object):
class PyobjMixin(PyobjContext):
def obj():
def fget(self):
try:
return self._obj
except AttributeError:
obj = getattr(self, '_obj', None)
if obj is None:
self._obj = obj = self._getobj()
return obj
return obj
def fset(self, value):
self._obj = value
return property(fget, fset, None, "underlying python object")
obj = obj()
def _getobj(self):
@ -425,20 +432,25 @@ class Module(pytest.File, PyCollector):
% e.args
)
except ImportError:
exc_class, exc, _ = sys.exc_info()
from _pytest._code.code import ExceptionInfo
exc_info = ExceptionInfo()
if self.config.getoption('verbose') < 2:
exc_info.traceback = exc_info.traceback.filter(filter_traceback)
exc_repr = exc_info.getrepr(style='short') if exc_info.traceback else exc_info.exconly()
formatted_tb = safe_str(exc_repr)
raise self.CollectError(
"ImportError while importing test module '%s'.\n"
"Original error message:\n'%s'\n"
"Make sure your test modules/packages have valid Python names."
% (self.fspath, exc or exc_class)
"ImportError while importing test module '{fspath}'.\n"
"Hint: make sure your test modules/packages have valid Python names.\n"
"Traceback:\n"
"{traceback}".format(fspath=self.fspath, traceback=formatted_tb)
)
except _pytest.runner.Skipped as e:
if e.allow_module_level:
raise
raise self.CollectError(
"Using @pytest.skip outside of a test (e.g. as a test "
"function decorator) is not allowed. Use @pytest.mark.skip or "
"@pytest.mark.skipif instead."
"Using pytest.skip outside of a test is not allowed. If you are "
"trying to decorate a test function, use the @pytest.mark.skip "
"or @pytest.mark.skipif decorators instead."
)
self.config.pluginmanager.consider_module(mod)
return mod
@ -617,7 +629,7 @@ class Generator(FunctionMixin, PyCollector):
def getcallargs(self, obj):
if not isinstance(obj, (tuple, list)):
obj = (obj,)
# explict naming
# explicit naming
if isinstance(obj[0], py.builtin._basestring):
name = obj[0]
obj = obj[1:]
@ -771,7 +783,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
It will also override any fixture-function defined scope, allowing
to set a dynamic scope using test context or configuration.
"""
from _pytest.fixtures import scopes
from _pytest.fixtures import scope2index
from _pytest.mark import extract_argvalue
from py.io import saferepr
@ -800,7 +812,8 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
if scope is None:
scope = _find_parametrized_scope(argnames, self._arg2fixturedefs, indirect)
scopenum = scopes.index(scope)
scopenum = scope2index(
scope, descr='call to {0}'.format(self.parametrize))
valtypes = {}
for arg in argnames:
if arg not in self.fixturenames:
@ -1095,7 +1108,9 @@ def raises(expected_exception, *args, **kwargs):
>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
... pass
... Failed: Expecting ZeroDivisionError
Traceback (most recent call last):
...
Failed: Expecting ZeroDivisionError
.. note::
@ -1106,19 +1121,21 @@ def raises(expected_exception, *args, **kwargs):
Lines of code after that, within the scope of the context manager will
not be executed. For example::
>>> with raises(OSError) as exc_info:
assert 1 == 1 # this will execute as expected
raise OSError(errno.EEXISTS, 'directory exists')
assert exc_info.value.errno == errno.EEXISTS # this will not execute
>>> value = 15
>>> with raises(ValueError) as exc_info:
... if value > 10:
... raise ValueError("value must be <= 10")
... assert str(exc_info.value) == "value must be <= 10" # this will not execute
Instead, the following approach must be taken (note the difference in
scope)::
>>> with raises(OSError) as exc_info:
assert 1 == 1 # this will execute as expected
raise OSError(errno.EEXISTS, 'directory exists')
>>> with raises(ValueError) as exc_info:
... if value > 10:
... raise ValueError("value must be <= 10")
...
>>> assert str(exc_info.value) == "value must be <= 10"
assert exc_info.value.errno == errno.EEXISTS # this will now execute
Or you can specify a callable by passing a to-be-called lambda::
@ -1223,7 +1240,11 @@ class RaisesContext(object):
exc_type, value, traceback = tp
tp = exc_type, exc_type(value), traceback
self.excinfo.__init__(tp)
return issubclass(self.excinfo.type, self.expected_exception)
suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
if sys.version_info[0] == 2 and suppress_exception:
sys.exc_clear()
return suppress_exception
# builtin pytest.approx helper
@ -1357,6 +1378,8 @@ class approx(object):
return False
return all(a == x for a, x in zip(actual, self.expected))
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
@ -1396,6 +1419,9 @@ class ApproxNonIterable(object):
self.rel = rel
def __repr__(self):
if isinstance(self.expected, complex):
return str(self.expected)
# Infinities aren't compared using tolerances, so don't show a
# tolerance.
if math.isinf(self.expected):
@ -1408,16 +1434,10 @@ class ApproxNonIterable(object):
except ValueError:
vetted_tolerance = '???'
plus_minus = u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)
# In python2, __repr__() must return a string (i.e. not a unicode
# object). In python3, __repr__() must return a unicode object
# (although now strings are unicode objects and bytes are what
# strings were).
if sys.version_info[0] == 2:
return plus_minus.encode('utf-8')
return '{0} +- {1}'.format(self.expected, vetted_tolerance)
else:
return plus_minus
return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)
def __eq__(self, actual):
# Short-circuit exact equality.
@ -1436,6 +1456,8 @@ class ApproxNonIterable(object):
# Return true if the two numbers are within the tolerance.
return abs(self.expected - actual) <= self.tolerance
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
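For reference, typical use of the ``approx`` helper whose ``__repr__`` and ``__hash__`` behaviour is adjusted above (standard pytest usage):

    from pytest import approx

    assert 0.1 + 0.2 == approx(0.3)
    assert (0.1 + 0.2, 1.0) == approx((0.3, 1.0))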


@ -36,8 +36,13 @@ def deprecated_call(func=None, *args, **kwargs):
This function can be used as a context manager::
>>> import warnings
>>> def api_call_v2():
... warnings.warn('use v3 of this api', DeprecationWarning)
... return 200
>>> with deprecated_call():
... myobject.deprecated_method()
... assert api_call_v2() == 200
Note: we cannot use WarningsRecorder here because it is still subject
to the mechanism that prevents warnings of the same type from being
@ -218,4 +223,7 @@ class WarningsChecker(WarningsRecorder):
if self.expected_warning is not None:
if not any(r.category in self.expected_warning for r in self):
__tracebackhide__ = True
pytest.fail("DID NOT WARN")
pytest.fail("DID NOT WARN. No warnings of type {0} was emitted. "
"The list of emitted warnings is: {1}.".format(
self.expected_warning,
[each.message for each in self]))
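Typical use of ``pytest.warns``, which this check backs (standard API; the warning text is illustrative):

    import warnings
    import pytest

    with pytest.warns(DeprecationWarning):
        warnings.warn("this call is deprecated", DeprecationWarning)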


@ -515,8 +515,10 @@ def exit(msg):
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg=""):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
@ -525,8 +527,11 @@ def skip(msg=""):
"""
__tracebackhide__ = True
raise Skipped(msg=msg)
skip.Exception = Skipped
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
@ -535,6 +540,8 @@ def fail(msg="", pytrace=True):
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed


@ -5,9 +5,9 @@ import sys
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption('--setuponly', '--setup-only', action="store_true",
help="only setup fixtures, don't execute the tests.")
help="only setup fixtures, do not execute tests.")
group.addoption('--setupshow', '--setup-show', action="store_true",
help="show setup fixtures while executing the tests.")
help="show setup of fixtures while executing tests.")
@pytest.hookimpl(hookwrapper=True)


@ -25,8 +25,10 @@ def pytest_configure(config):
if config.option.runxfail:
old = pytest.xfail
config._cleanup.append(lambda: setattr(pytest, "xfail", old))
def nop(*args, **kwargs):
pass
nop.Exception = XFailed
setattr(pytest, "xfail", nop)
@ -44,7 +46,7 @@ def pytest_configure(config):
)
config.addinivalue_line("markers",
"xfail(condition, reason=None, run=True, raises=None, strict=False): "
"mark the the test function as an expected failure if eval(condition) "
"mark the test function as an expected failure if eval(condition) "
"has a True value. Optionally specify a reason for better reporting "
"and run=False if you don't even want to execute the test function. "
"If only specific exception(s) are expected, you can list them in "
@ -65,6 +67,8 @@ def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
@ -108,14 +112,14 @@ class MarkEvaluator:
def _getglobals(self):
d = {'os': os, 'sys': sys, 'config': self.item.config}
d.update(self.item.obj.__globals__)
if hasattr(self.item, 'obj'):
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
if self.holder:
d = self._getglobals()
if self.holder.args or 'condition' in self.holder.kwargs:
self.result = False
# "holder" might be a MarkInfo or a MarkDecorator; only
@ -131,6 +135,7 @@ class MarkEvaluator:
for expr in args:
self.expr = expr
if isinstance(expr, py.builtin._basestring):
d = self._getglobals()
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in kwargs:


@ -295,8 +295,8 @@ class TerminalReporter:
def pytest_report_header(self, config):
inifile = ""
if config.inifile:
inifile = config.rootdir.bestrelpath(config.inifile)
lines = ["rootdir: %s, inifile: %s" %(config.rootdir, inifile)]
inifile = " " + config.rootdir.bestrelpath(config.inifile)
lines = ["rootdir: %s, inifile:%s" % (config.rootdir, inifile)]
plugininfo = config.pluginmanager.list_plugin_distinfo()
if plugininfo:
@ -458,6 +458,15 @@ class TerminalReporter:
self.write_sep("_", msg)
self._outrep_summary(rep)
def print_teardown_sections(self, rep):
for secname, content in rep.sections:
if 'teardown' in secname:
self._tw.sep('-', secname)
if content[-1:] == "\n":
content = content[:-1]
self._tw.line(content)
def summary_failures(self):
if self.config.option.tbstyle != "no":
reports = self.getreports('failed')
@ -473,6 +482,9 @@ class TerminalReporter:
markup = {'red': True, 'bold': True}
self.write_sep("_", msg, **markup)
self._outrep_summary(rep)
for report in self.getreports(''):
if report.nodeid == rep.nodeid and report.when == 'teardown':
self.print_teardown_sections(report)
def summary_errors(self):
if self.config.option.tbstyle != "no":
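
The new ``print_teardown_sections`` call re-emits captured teardown output next to the matching failure; a hedged example of a fixture whose teardown output would land in such a section::

    import pytest

    @pytest.fixture
    def resource():
        yield "handle"
        # captured during teardown; with the change above this output is
        # printed alongside the failure report of tests using the fixture
        print("closing resource")

    def test_uses_resource(resource):
        assert resource == "broken handle"  # fails, so the section is shown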

View File

@ -81,6 +81,7 @@ def get_user():
except (ImportError, KeyError):
return None
# backward compatibility
TempdirHandler = TempdirFactory
@ -115,7 +116,7 @@ def tmpdir(request, tmpdir_factory):
path object.
"""
name = request.node.name
name = re.sub("[\W]", "_", name)
name = re.sub(r"[\W]", "_", name)
MAXVAL = 30
if len(name) > MAXVAL:
name = name[:MAXVAL]
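
The fixture derives the directory name from the test name, now using a raw string for the regex; a standalone sketch of the same sanitization::

    import re

    def sanitize_tmpdir_name(node_name, maxval=30):
        # replace non-word characters and truncate, as in the fixture above
        name = re.sub(r"[\W]", "_", node_name)
        return name[:maxval]

    assert sanitize_tmpdir_name("test_save[config.json-1]") == "test_save_config_json_1_"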

View File

@ -5,7 +5,7 @@ import sys
import traceback
import pytest
# for transfering markers
# for transferring markers
import _pytest._code
from _pytest.python import transfer_markers
from _pytest.skipping import MarkEvaluator
@ -65,7 +65,6 @@ class UnitTestCase(pytest.Class):
yield TestCaseFunction('runTest', parent=self)
class TestCaseFunction(pytest.Function):
_excinfo = None
@ -94,6 +93,9 @@ class TestCaseFunction(pytest.Function):
def teardown(self):
if hasattr(self._testcase, 'teardown_method'):
self._testcase.teardown_method(self._obj)
# Allow garbage collection on TestCase instance attributes.
self._testcase = None
self._obj = None
def startTest(self, testcase):
pass
@ -149,14 +151,33 @@ class TestCaseFunction(pytest.Function):
def stopTest(self, testcase):
pass
def _handle_skip(self):
# implements the skipping machinery (see #2137)
# analog to pythons Lib/unittest/case.py:run
testMethod = getattr(self._testcase, self._testcase._testMethodName)
if (getattr(self._testcase.__class__, "__unittest_skip__", False) or
getattr(testMethod, "__unittest_skip__", False)):
# If the class or method was skipped.
skip_why = (getattr(self._testcase.__class__, '__unittest_skip_why__', '') or
getattr(testMethod, '__unittest_skip_why__', ''))
try: # PY3, unittest2 on PY2
self._testcase._addSkip(self, self._testcase, skip_why)
except TypeError: # PY2
if sys.version_info[0] != 2:
raise
self._testcase._addSkip(self, skip_why)
return True
return False
def runtest(self):
if self.config.pluginmanager.get_plugin("pdbinvoke") is None:
self._testcase(result=self)
else:
# disables tearDown and cleanups for post mortem debugging (see #1890)
if self._handle_skip():
return
self._testcase.debug()
def _prunetraceback(self, excinfo):
pytest.Function._prunetraceback(self, excinfo)
traceback = excinfo.traceback.filter(
@ -183,6 +204,7 @@ def pytest_runtest_protocol(item):
ut = sys.modules['twisted.python.failure']
Failure__init__ = ut.Failure.__init__
check_testcase_implements_trial_reporter()
def excstore(self, exc_value=None, exc_type=None, exc_tb=None,
captureVars=None):
if exc_value is None:
@ -196,6 +218,7 @@ def pytest_runtest_protocol(item):
captureVars=captureVars)
except TypeError:
Failure__init__(self, exc_value, exc_type, exc_tb)
ut.Failure.__init__ = excstore
yield
ut.Failure.__init__ = Failure__init__
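
``_handle_skip`` mirrors unittest's own skip handling so that ``--pdb`` (which drives tests through ``TestCase.debug()``) still honours skip decorators; for instance::

    import unittest

    class TestLegacy(unittest.TestCase):
        @unittest.skip("not ported yet")
        def test_old_behaviour(self):
            # reported as skipped even under --pdb, instead of dropping
            # the SkipTest exception into the debugger
            raise AssertionError("should never run")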

View File

@ -10,4 +10,4 @@ $ pip install -U pluggy==<version> --no-compile --target=_pytest/vendored_packag
```
And commit the modified files. The `pluggy-<version>.dist-info` directory
created by `pip` should be ignored.
created by `pip` should be added as well.

View File

@ -1,8 +0,0 @@
pluggy.py,sha256=v_RfWzyW6DPU1cJu_EFoL_OHq3t13qloVdR6UaMCXQA,29862
pluggy-0.3.1.dist-info/top_level.txt,sha256=xKSCRhai-v9MckvMuWqNz16c1tbsmOggoMSwTgcpYHE,7
pluggy-0.3.1.dist-info/pbr.json,sha256=xX3s6__wOcAyF-AZJX1sdZyW6PUXT-FkfBlM69EEUCg,47
pluggy-0.3.1.dist-info/RECORD,,
pluggy-0.3.1.dist-info/metadata.json,sha256=nLKltOT78dMV-00uXD6Aeemp4xNsz2q59j6ORSDeLjw,1027
pluggy-0.3.1.dist-info/METADATA,sha256=1b85Ho2u4iK30M099k7axMzcDDhLcIMb-A82JUJZnSo,1334
pluggy-0.3.1.dist-info/WHEEL,sha256=AvR0WeTpDaxT645bl5FQxUK6NPsTls2ttpcGJg3j1Xg,110
pluggy-0.3.1.dist-info/DESCRIPTION.rst,sha256=P5Akh1EdIBR6CeqtV2P8ZwpGSpZiTKPw0NyS7jEiD-g,306

View File

@ -1 +0,0 @@
{"license": "MIT license", "name": "pluggy", "metadata_version": "2.0", "generator": "bdist_wheel (0.24.0)", "summary": "plugin and hook calling mechanisms for python", "platform": "unix", "version": "0.3.1", "extensions": {"python.details": {"document_names": {"description": "DESCRIPTION.rst"}, "contacts": [{"role": "author", "email": "holger at merlinux.eu", "name": "Holger Krekel"}]}}, "classifiers": ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Libraries", "Topic :: Utilities", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5"]}

View File

@ -1 +0,0 @@
{"is_release": false, "git_version": "7d4c9cd"}

View File

@ -1,3 +1,4 @@
Plugin registration and hook calling for Python
===============================================

View File

@ -0,0 +1 @@
pip

View File

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 holger krekel (rather uses bitbucket/hpk42)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,8 +1,8 @@
Metadata-Version: 2.0
Name: pluggy
Version: 0.3.1
Version: 0.4.0
Summary: plugin and hook calling mechanisms for python
Home-page: UNKNOWN
Home-page: https://github.com/pytest-dev/pluggy
Author: Holger Krekel
Author-email: holger at merlinux.eu
License: MIT license
@ -27,6 +27,7 @@ Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Plugin registration and hook calling for Python
===============================================

View File

@ -0,0 +1,9 @@
pluggy.py,sha256=u0oG9cv-oLOkNvEBlwnnu8pp1AyxpoERgUO00S3rvpQ,31543
pluggy-0.4.0.dist-info/DESCRIPTION.rst,sha256=ltvjkFd40LW_xShthp6RRVM6OB_uACYDFR3kTpKw7o4,307
pluggy-0.4.0.dist-info/LICENSE.txt,sha256=ruwhUOyV1HgE9F35JVL9BCZ9vMSALx369I4xq9rhpkM,1134
pluggy-0.4.0.dist-info/METADATA,sha256=pe2hbsqKFaLHC6wAQPpFPn0KlpcPfLBe_BnS4O70bfk,1364
pluggy-0.4.0.dist-info/RECORD,,
pluggy-0.4.0.dist-info/WHEEL,sha256=9Z5Xm-eel1bTS7e6ogYiKz0zmPEqDwIypurdHN1hR40,116
pluggy-0.4.0.dist-info/metadata.json,sha256=T3go5L2qOa_-H-HpCZi3EoVKb8sZ3R-fOssbkWo2nvM,1119
pluggy-0.4.0.dist-info/top_level.txt,sha256=xKSCRhai-v9MckvMuWqNz16c1tbsmOggoMSwTgcpYHE,7
pluggy-0.4.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4

View File

@ -1,5 +1,5 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.24.0)
Generator: bdist_wheel (0.29.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

View File

@ -0,0 +1 @@
{"classifiers": ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Libraries", "Topic :: Utilities", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5"], "extensions": {"python.details": {"contacts": [{"email": "holger at merlinux.eu", "name": "Holger Krekel", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst", "license": "LICENSE.txt"}, "project_urls": {"Home": "https://github.com/pytest-dev/pluggy"}}}, "generator": "bdist_wheel (0.29.0)", "license": "MIT license", "metadata_version": "2.0", "name": "pluggy", "platform": "unix", "summary": "plugin and hook calling mechanisms for python", "version": "0.4.0"}

View File

@ -67,8 +67,9 @@ Pluggy currently consists of functionality for:
import sys
import inspect
__version__ = '0.3.1'
__all__ = ["PluginManager", "PluginValidationError",
__version__ = '0.4.0'
__all__ = ["PluginManager", "PluginValidationError", "HookCallError",
"HookspecMarker", "HookimplMarker"]
_py3 = sys.version_info > (3, 0)
@ -308,7 +309,7 @@ class PluginManager(object):
""" Core Pluginmanager class which manages registration
of plugin objects and 1:N hook calling.
You can register new hooks by calling ``addhooks(module_or_class)``.
You can register new hooks by calling ``add_hookspec(module_or_class)``.
You can register plugin objects (which contain hooks) by calling
``register(plugin)``. The Pluginmanager is initialized with a
prefix that is searched for in the names of the dict of registered
@ -374,7 +375,10 @@ class PluginManager(object):
def parse_hookimpl_opts(self, plugin, name):
method = getattr(plugin, name)
res = getattr(method, self.project_name + "_impl", None)
try:
res = getattr(method, self.project_name + "_impl", None)
except Exception:
res = {}
if res is not None and not isinstance(res, dict):
# false positive
res = None
@ -455,6 +459,10 @@ class PluginManager(object):
""" Return a plugin or None for the given name. """
return self._name2plugin.get(name)
def has_plugin(self, name):
""" Return True if a plugin with the given name is registered. """
return self.get_plugin(name) is not None
def get_name(self, plugin):
""" Return name for registered plugin or None if not registered. """
for name, val in self._name2plugin.items():
@ -492,7 +500,8 @@ class PluginManager(object):
def load_setuptools_entrypoints(self, entrypoint_name):
""" Load modules from querying the specified setuptools entrypoint name.
Return the number of loaded plugins. """
from pkg_resources import iter_entry_points, DistributionNotFound
from pkg_resources import (iter_entry_points, DistributionNotFound,
VersionConflict)
for ep in iter_entry_points(entrypoint_name):
# is the plugin registered or blocked?
if self.get_plugin(ep.name) or self.is_blocked(ep.name):
@ -501,6 +510,9 @@ class PluginManager(object):
plugin = ep.load()
except DistributionNotFound:
continue
except VersionConflict as e:
raise PluginValidationError(
"Plugin %r could not be loaded: %s!" % (ep.name, e))
self.register(plugin, name=ep.name)
self._plugin_distinfo.append((plugin, ep.dist))
return len(self._plugin_distinfo)
@ -573,7 +585,7 @@ class _MultiCall:
# XXX note that the __multicall__ argument is supported only
# for pytest compatibility reasons. It was never officially
# supported there and is explicitly deprecated since 2.8
# supported there and is explicitely deprecated since 2.8
# so we can remove it soon, allowing to avoid the below recursion
# in execute() and simplify/speed up the execute loop.
@ -590,7 +602,13 @@ class _MultiCall:
while self.hook_impls:
hook_impl = self.hook_impls.pop()
args = [all_kwargs[argname] for argname in hook_impl.argnames]
try:
args = [all_kwargs[argname] for argname in hook_impl.argnames]
except KeyError:
for argname in hook_impl.argnames:
if argname not in all_kwargs:
raise HookCallError(
"hook call must provide argument %r" % (argname,))
if hook_impl.hookwrapper:
return _wrapped_call(hook_impl.function(*args), self.execute)
res = hook_impl.function(*args)
@ -629,7 +647,10 @@ def varnames(func, startindex=None):
startindex = 1
else:
if not inspect.isfunction(func) and not inspect.ismethod(func):
func = getattr(func, '__call__', func)
try:
func = getattr(func, '__call__', func)
except Exception:
return ()
if startindex is None:
startindex = int(inspect.ismethod(func))
@ -763,6 +784,10 @@ class PluginValidationError(Exception):
""" plugin failed validation. """
class HookCallError(Exception):
""" Hook was called wrongly. """
if hasattr(inspect, 'signature'):
def _formatdef(func):
return "%s%s" % (

View File

@ -6,30 +6,37 @@ environment:
# https://www.appveyor.com/docs/build-configuration#secure-variables
matrix:
# create multiple jobs to execute a set of tox runs on each; this is to workaround having
# builds timing out in AppVeyor
- TOXENV: "linting,py26,py27,py33,py34,py35,pypy"
- TOXENV: "py27-pexpect,py27-xdist,py27-trial,py35-pexpect,py35-xdist,py35-trial"
- TOXENV: "py27-nobyte,doctesting,freeze,docs"
# coveralls is not in the default env list
- TOXENV: "coveralls"
# note: please use "tox --listenvs" to populate the build matrix below
- TOXENV: "linting"
- TOXENV: "py26"
- TOXENV: "py27"
- TOXENV: "py33"
- TOXENV: "py34"
- TOXENV: "py35"
- TOXENV: "py36"
- TOXENV: "pypy"
- TOXENV: "py27-pexpect"
- TOXENV: "py27-xdist"
- TOXENV: "py27-trial"
- TOXENV: "py35-pexpect"
- TOXENV: "py35-xdist"
- TOXENV: "py35-trial"
- TOXENV: "py27-nobyte"
- TOXENV: "doctesting"
- TOXENV: "freeze"
- TOXENV: "docs"
install:
- echo Installed Pythons
- dir c:\Python*
# install pypy using choco (redirect to a file and write to console in case
# choco install returns non-zero, because choco install python.pypy is too
# noisy)
- choco install python.pypy > pypy-inst.log 2>&1 || (type pypy-inst.log & exit /b 1)
- set PATH=C:\tools\pypy\pypy;%PATH% # so tox can find pypy
- echo PyPy installed
- pypy --version
- if "%TOXENV%" == "pypy" call scripts\install-pypy.bat
- C:\Python35\python -m pip install tox
build: false # Not a C# project, build stuff at the test step instead.
test_script:
- C:\Python35\python -m tox
# coveralls is not in tox's envlist, plus for PRs the secure variable
# is not defined so we have to check for it
- if defined COVERALLS_REPO_TOKEN C:\Python35\python -m tox -e coveralls
- call scripts\call-tox.bat

View File

@ -6,6 +6,11 @@ Release announcements
:maxdepth: 2
release-3.0.7
release-3.0.6
release-3.0.5
release-3.0.4
release-3.0.3
release-3.0.2
release-3.0.1
release-3.0.0

View File

@ -63,9 +63,9 @@ Changes between 2.0.1 and 2.0.2
this.
- fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular
thanks to Laura Creighton who also revieved parts of the documentation.
thanks to Laura Creighton who also reviewed parts of the documentation.
- fix slighly wrong output of verbose progress reporting for classes
- fix slightly wrong output of verbose progress reporting for classes
(thanks Amaury)
- more precise (avoiding of) deprecation warnings for node.Class|Function accesses

View File

@ -13,7 +13,7 @@ If you want to install or upgrade pytest, just type one of::
easy_install -U pytest
There also is a bugfix release 1.6 of pytest-xdist, the plugin
that enables seemless distributed and "looponfail" testing for Python.
that enables seamless distributed and "looponfail" testing for Python.
best,
holger krekel
@ -33,7 +33,7 @@ Changes between 2.0.2 and 2.0.3
- don't require zlib (and other libs) for genscript plugin without
--genscript actually being used.
- speed up skips (by not doing a full traceback represenation
- speed up skips (by not doing a full traceback representation
internally)
- fix issue37: avoid invalid characters in junitxml's output

View File

@ -2,7 +2,7 @@ pytest-2.2.1: bug fixes, perfect teardowns
===========================================================================
pytest-2.2.1 is a minor backward-compatible release of the the py.test
pytest-2.2.1 is a minor backward-compatible release of the py.test
testing tool. It contains bug fixes and little improvements, including
documentation fixes. If you are using the distributed testing
pluginmake sure to upgrade it to pytest-xdist-1.8.

View File

@ -29,7 +29,7 @@ Changes between 2.2.3 and 2.2.4
- fix issue with unittest: now @unittest.expectedFailure markers should
be processed correctly (you can also use @pytest.mark markers)
- document integration with the extended distribute/setuptools test commands
- fix issue 140: propperly get the real functions
- fix issue 140: properly get the real functions
of bound classmethods for setup/teardown_class
- fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
- fix issue #143: call unconfigure/sessionfinish always when

View File

@ -89,7 +89,7 @@ Changes between 2.2.4 and 2.3.0
- fix issue128: show captured output when capsys/capfd are used
- fix issue179: propperly show the dependency chain of factories
- fix issue179: properly show the dependency chain of factories
- pluginmanager.register(...) now raises ValueError if the
plugin has been already registered or the name is taken
@ -130,5 +130,5 @@ Changes between 2.2.4 and 2.3.0
- don't show deselected reason line if there is none
- py.test -vv will show all of assert comparisations instead of truncating
- py.test -vv will show all of assert comparisons instead of truncating

View File

@ -1,7 +1,7 @@
pytest-2.3.2: some fixes and more traceback-printing speed
===========================================================================
pytest-2.3.2 is a another stabilization release:
pytest-2.3.2 is another stabilization release:
- issue 205: fixes a regression with conftest detection
- issue 208/29: fixes traceback-printing speed in some bad cases

View File

@ -1,7 +1,7 @@
pytest-2.3.3: integration fixes, py24 suport, ``*/**`` shown in traceback
pytest-2.3.3: integration fixes, py24 support, ``*/**`` shown in traceback
===========================================================================
pytest-2.3.3 is another stabilization release of the py.test tool
pytest-2.3.3 is another stabilization release of the py.test tool
which offers uebersimple assertions, scalable fixture mechanisms
and deep customization for testing with Python. Particularly,
this release provides:
@ -46,7 +46,7 @@ Changes between 2.3.2 and 2.3.3
- fix issue209 - reintroduce python2.4 support by depending on newer
pylib which re-introduced statement-finding for pre-AST interpreters
- nose support: only call setup if its a callable, thanks Andrew
- nose support: only call setup if it's a callable, thanks Andrew
Taumoefolau
- fix issue219 - add py2.4-3.3 classifiers to TROVE list

View File

@ -44,11 +44,11 @@ Changes between 2.3.4 and 2.3.5
(thanks Adam Goucher)
- Issue 265 - integrate nose setup/teardown with setupstate
so it doesnt try to teardown if it did not setup
so it doesn't try to teardown if it did not setup
- issue 271 - dont write junitxml on slave nodes
- issue 271 - don't write junitxml on slave nodes
- Issue 274 - dont try to show full doctest example
- Issue 274 - don't try to show full doctest example
when doctest does not know the example location
- issue 280 - disable assertion rewriting on buggy CPython 2.6.0
@ -84,7 +84,7 @@ Changes between 2.3.4 and 2.3.5
- allow to specify prefixes starting with "_" when
customizing python_functions test discovery. (thanks Graham Horler)
- improve PYTEST_DEBUG tracing output by puting
- improve PYTEST_DEBUG tracing output by putting
extra data on a new lines with additional indent
- ensure OutcomeExceptions like skip/fail have initialized exception attributes

View File

@ -36,7 +36,7 @@ a full list of details. A few feature highlights:
- reporting: color the last line red or green depending if
failures/errors occurred or everything passed.
The documentation has been updated to accomodate the changes,
The documentation has been updated to accommodate the changes,
see `http://pytest.org <http://pytest.org>`_
To install or upgrade pytest::
@ -118,7 +118,7 @@ new features:
- fix issue322: tearDownClass is not run if setUpClass failed. Thanks
Mathieu Agopian for the initial fix. Also make all of pytest/nose
finalizer mimick the same generic behaviour: if a setupX exists and
finalizer mimic the same generic behaviour: if a setupX exists and
fails, don't run teardownX. This internally introduces a new method
"node.addfinalizer()" helper which can only be called during the setup
phase of a node.

View File

@ -70,7 +70,7 @@ holger krekel
to problems for more than >966 non-function scoped parameters).
- fix issue290 - there is preliminary support now for parametrizing
with repeated same values (sometimes useful to to test if calling
with repeated same values (sometimes useful to test if calling
a second time works as with the first time).
- close issue240 - document precisely how pytest module importing
@ -149,7 +149,7 @@ holger krekel
would not work correctly because pytest assumes @pytest.mark.some
gets a function to be decorated already. We now at least detect if this
arg is an lambda and thus the example will work. Thanks Alex Gaynor
arg is a lambda and thus the example will work. Thanks Alex Gaynor
for bringing it up.
- xfail a test on pypy that checks wrong encoding/ascii (pypy does

View File

@ -60,5 +60,5 @@ holger krekel
- fix issue429: comparing byte strings with non-ascii chars in assert
expressions now work better. Thanks Floris Bruynooghe.
- make capfd/capsys.capture private, its unused and shouldnt be exposed
- make capfd/capsys.capture private, its unused and shouldn't be exposed

View File

@ -42,7 +42,7 @@ Changes 2.6.3
- fix conftest related fixture visibility issue: when running with a
CWD outside of a test package pytest would get fixture discovery wrong.
Thanks to Wolfgang Schnerring for figuring out a reproducable example.
Thanks to Wolfgang Schnerring for figuring out a reproducible example.
- Introduce pytest_enter_pdb hook (needed e.g. by pytest_timeout to cancel the
timeout when interactively entering pdb). Thanks Wolfgang Schnerring.

View File

@ -32,7 +32,7 @@ The py.test Development Team
explanations. Thanks Carl Meyer for the report and test case.
- fix issue553: properly handling inspect.getsourcelines failures in
FixtureLookupError which would lead to to an internal error,
FixtureLookupError which would lead to an internal error,
obfuscating the original problem. Thanks talljosh for initial
diagnose/patch and Bruno Oliveira for final patch.

View File

@ -46,7 +46,7 @@ The py.test Development Team
Thanks `@astraw38`_ for reporting the issue (`#1496`_) and `@tomviner`_
for PR the (`#1524`_).
* Fix win32 path issue when puttinging custom config file with absolute path
* Fix win32 path issue when putting custom config file with absolute path
in ``pytest.main("-c your_absolute_path")``.
* Fix maximum recursion depth detection when raised error class is not aware

View File

@ -0,0 +1,27 @@
pytest-3.0.3
============
pytest 3.0.3 has just been released to PyPI.
This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Bruno Oliveira
* Florian Bruhin
* Floris Bruynooghe
* Huayi Zhang
* Lev Maximov
* Raquel Alegre
* Ronny Pfannschmidt
* Roy Williams
* Tyler Goodlet
* mbyt
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,29 @@
pytest-3.0.4
============
pytest 3.0.4 has just been released to PyPI.
This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Bruno Oliveira
* Dan Wandschneider
* Florian Bruhin
* Georgy Dyuldin
* Grigorii Eremeev
* Jason R. Coombs
* Manuel Jacob
* Mathieu Clabaut
* Michael Seifert
* Nikolaus Rath
* Ronny Pfannschmidt
* Tom V
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,27 @@
pytest-3.0.5
============
pytest 3.0.5 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Ana Vojnovic
* Bruno Oliveira
* Daniel Hahler
* Duncan Betts
* Igor Starikov
* Ismail
* Luke Murphy
* Ned Batchelder
* Ronny Pfannschmidt
* Sebastian Ramacher
* nmundar
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,33 @@
pytest-3.0.6
============
pytest 3.0.6 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Andreas Pelme
* Bruno Oliveira
* Dmitry Malinovsky
* Eli Boyarski
* Jakub Wilk
* Jeff Widman
* Loïc Estève
* Luke Murphy
* Miro Hrončok
* Oscar Hellström
* Peter Heatwole
* Philippe Ombredanne
* Ronny Pfannschmidt
* Rutger Prins
* Stefan Scherfke
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,33 @@
pytest-3.0.7
============
pytest 3.0.7 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Anthony Sottile
* Barney Gale
* Bruno Oliveira
* Florian Bruhin
* Floris Bruynooghe
* Ionel Cristian Mărieș
* Katerina Koukiou
* NODA, Kai
* Omer Hadari
* Patrick Hayes
* Ran Benita
* Ronny Pfannschmidt
* Victor Uriarte
* Vidar Tonaas Fauske
* Ville Skyttä
* fbjorn
* mbyt
Happy testing,
The pytest Development Team

View File

@ -26,8 +26,8 @@ you will see the return value of the function call::
$ pytest test_assert1.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_assert1.py F
@ -170,8 +170,8 @@ if you run this module::
$ pytest test_assert2.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_assert2.py F
@ -183,7 +183,7 @@ if you run this module::
set1 = set("1308")
set2 = set("8035")
> assert set1 == set2
E assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E Extra items in the left set:
E '1'
E Extra items in the right set:
@ -262,50 +262,20 @@ Advanced assertion introspection
.. versionadded:: 2.1
Reporting details about a failing assertion is achieved either by rewriting
assert statements before they are run or re-evaluating the assert expression and
recording the intermediate values. Which technique is used depends on the
location of the assert, ``pytest`` configuration, and Python version being used
to run ``pytest``.
By default, ``pytest`` rewrites assert statements in test modules.
Rewritten assert statements put introspection information into the assertion failure message.
``pytest`` only rewrites test modules directly discovered by its test collection process, so
asserts in supporting modules which are not themselves test modules will not be
rewritten.
Reporting details about a failing assertion is achieved by rewriting assert
statements before they are run. Rewritten assert statements put introspection
information into the assertion failure message. ``pytest`` only rewrites test
modules directly discovered by its test collection process, so asserts in
supporting modules which are not themselves test modules will not be rewritten.
.. note::
``pytest`` rewrites test modules on import. It does this by using an import
hook to write a new pyc files. Most of the time this works transparently.
hook to write new pyc files. Most of the time this works transparently.
However, if you are messing with import yourself, the import hook may
interfere. If this is the case, simply use ``--assert=reinterp`` or
``--assert=plain``. Additionally, rewriting will fail silently if it cannot
write new pycs, i.e. in a read-only filesystem or a zipfile.
If an assert statement has not been rewritten or the Python version is less than
2.6, ``pytest`` falls back on assert reinterpretation. In assert
reinterpretation, ``pytest`` walks the frame of the function containing the
assert statement to discover sub-expression results of the failing assert
statement. You can force ``pytest`` to always use assertion reinterpretation by
passing the ``--assert=reinterp`` option.
Assert reinterpretation has a caveat not present with assert rewriting: If
evaluating the assert expression has side effects you may get a warning that the
intermediate values could not be determined safely. A common example of this
issue is an assertion which reads from a file::
assert f.read() != '...'
If this assertion fails then the re-evaluation will probably succeed!
This is because ``f.read()`` will return an empty string when it is
called the second time during the re-evaluation. However, it is
easy to rewrite the assertion and avoid any trouble::
content = f.read()
assert content != '...'
All assert introspection can be turned off by passing ``--assert=plain``.
interfere. If this is the case, use ``--assert=plain``. Additionally,
rewriting will fail silently if it cannot write new pycs, i.e. in a read-only
filesystem or a zipfile.
For further information, Benjamin Peterson wrote up `Behind the scenes of pytest's new assertion rewriting <http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_.
@ -317,4 +287,5 @@ For further information, Benjamin Peterson wrote up `Behind the scenes of pytest
``--nomagic``.
.. versionchanged:: 3.0
Removes the ``--no-assert`` and``--nomagic`` options.
Removes the ``--no-assert`` and ``--nomagic`` options.
Removes the ``--assert=reinterp`` option.
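
The rewritten section above now describes only assertion rewriting; for reference, only collected test modules are rewritten, as in this short sketch (the helper-module part is hypothetical)::

    # content of test_example.py -- collected by pytest, so its asserts are rewritten
    def test_rewritten_here():
        value = -1
        # the failure message shows the introspected value, e.g. "assert -1 > 0"
        assert value > 0

    # asserts inside a plain helper module that is merely imported (and not
    # itself collected as a test module) raise ordinary AssertionErrors
    # without this introspection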

View File

@ -80,9 +80,9 @@ If you then run it with ``--lf``::
$ pytest --lf
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
run-last-failure: rerun last 2 failures
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items
test_50.py FF
@ -122,9 +122,9 @@ of ``FF`` and dots)::
$ pytest --ff
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
run-last-failure: rerun last 2 failures first
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items
test_50.py FF................................................
@ -227,14 +227,14 @@ You can always peek at the content of the cache using the
$ py.test --cache-show
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
cachedir: $REGENDOC_TMPDIR/.cache
------------------------------- cache values -------------------------------
example/value contains:
42
cache/lastfailed contains:
{'test_caching.py::test_function': True}
example/value contains:
42
======= no tests ran in 0.12 seconds ========
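
The values shown by ``--cache-show`` come from the ``cache`` fixture (``config.cache``); the ``example/value`` entry above could have been written by something like::

    def test_remember_value(cache):
        # cache.get/cache.set persist JSON-serializable values across runs
        last = cache.get("example/value", None)
        cache.set("example/value", 42)
        assert last in (None, 42)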
@ -246,7 +246,7 @@ by adding the ``--cache-clear`` option like this::
pytest --cache-clear
This is recommended for invocations from Continous Integration
This is recommended for invocations from Continuous Integration
servers where isolation and correctness is more important
than speed.

View File

@ -64,8 +64,8 @@ of the failing function and hide the other one::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .F

View File

@ -303,7 +303,7 @@ texinfo_documents = [
('Holger Krekel@*Benjamin Peterson@*Ronny Pfannschmidt@*'
'Floris Bruynooghe@*others'),
'pytest',
'simple powerful testing with Pytho',
'simple powerful testing with Python',
'Programming',
1),
]

View File

@ -49,7 +49,7 @@ then you can just invoke ``pytest`` without command line options::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 items

View File

@ -31,9 +31,9 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ pytest -v -m webtest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
test_server.py::test_send_http PASSED
@ -45,9 +45,9 @@ Or the inverse, running all tests except the webtest ones::
$ pytest -v -m "not webtest"
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
test_server.py::test_something_quick PASSED
@ -66,9 +66,9 @@ tests based on their module, class, method, or function name::
$ pytest -v test_server.py::TestClass::test_method
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 5 items
test_server.py::TestClass::test_method PASSED
@ -79,9 +79,9 @@ You can also select on the class::
$ pytest -v test_server.py::TestClass
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
test_server.py::TestClass::test_method PASSED
@ -92,9 +92,9 @@ Or select multiple nodes::
$ pytest -v test_server.py::TestClass test_server.py::test_send_http
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items
test_server.py::TestClass::test_method PASSED
@ -130,9 +130,9 @@ select tests based on their names::
$ pytest -v -k http # running with the above defined example module
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
test_server.py::test_send_http PASSED
@ -144,9 +144,9 @@ And you can also run all tests except the ones that match the keyword::
$ pytest -k "not send_http" -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
test_server.py::test_something_quick PASSED
@ -160,9 +160,9 @@ Or to select "http" and "quick" tests::
$ pytest -k "http or quick" -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
test_server.py::test_send_http PASSED
@ -205,7 +205,7 @@ You can ask which markers exist for your test suite - the list includes our just
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@ -352,8 +352,8 @@ the test needs::
$ pytest -E stage2
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_someenv.py s
@ -364,8 +364,8 @@ and here is one that specifies exactly the environment needed::
$ pytest -E stage1
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_someenv.py .
@ -381,7 +381,7 @@ The ``--markers`` option always gives you a list of available markers::
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@ -450,7 +450,7 @@ for your particular platform, you could use the following plugin::
import sys
import pytest
ALL = set("darwin linux2 win32".split())
ALL = set("darwin linux win32".split())
def pytest_runtest_setup(item):
if isinstance(item, item.Function):
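
The hunk only shows the first lines of that conftest plugin; a hedged reconstruction of the remainder, inferred from the skip reason visible further down ("cannot run on platform linux") and assuming the pytest 3.0-era ``get_marker`` API, could read::

    import sys
    import pytest

    ALL = set("darwin linux win32".split())

    def pytest_runtest_setup(item):
        if isinstance(item, item.Function):
            plat = sys.platform
            # skip platform-marked tests that are not marked for this platform
            if not item.get_marker(plat) and ALL.intersection(item.keywords):
                pytest.skip("cannot run on platform %s" % plat)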
@ -470,7 +470,7 @@ Let's do a little test file to show how this looks like::
def test_if_apple_is_evil():
pass
@pytest.mark.linux2
@pytest.mark.linux
def test_if_linux_works():
pass
@ -481,32 +481,32 @@ Let's do a little test file to show how this looks like::
def test_runs_everywhere():
pass
then you will see two test skipped and two executed tests as expected::
then you will see two tests skipped and two executed tests as expected::
$ pytest -rs # this option reports skip reasons
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_plat.py sss.
test_plat.py s.s.
======= short test summary info ========
SKIP [3] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
======= 1 passed, 3 skipped in 0.12 seconds ========
======= 2 passed, 2 skipped in 0.12 seconds ========
Note that if you specify a platform via the marker-command line option like this::
$ pytest -m linux2
$ pytest -m linux
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_plat.py s
test_plat.py .
======= 3 tests deselected ========
======= 1 skipped, 3 deselected in 0.12 seconds ========
======= 1 passed, 3 deselected in 0.12 seconds ========
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
@ -551,8 +551,8 @@ We can now use the ``-m option`` to select one set::
$ pytest -m interface --tb=short
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_module.py FF
@ -573,8 +573,8 @@ or to select both "event" and "interface" tests::
$ pytest -m "interface or event" --tb=short
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_module.py FFF

View File

@ -27,8 +27,8 @@ now execute the test specification::
nonpython $ pytest test_simple.yml
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items
test_simple.yml F.
@ -59,9 +59,9 @@ consulted when reporting in ``verbose`` mode::
nonpython $ pytest -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items
test_simple.yml::hello FAILED
@ -81,8 +81,8 @@ interesting to just look at the collection tree::
nonpython $ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items
<YamlFile 'test_simple.yml'>
<YamlItem 'hello'>

View File

@ -130,8 +130,8 @@ objects, they are still using the default pytest representation::
$ pytest test_time.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items
<Module 'test_time.py'>
<Function 'test_timedistance_v0[a0-b0-expected0]'>
@ -181,8 +181,8 @@ this is a fully self-contained example which you can run with::
$ pytest test_scenarios.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_scenarios.py ....
@ -194,8 +194,8 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
$ pytest --collect-only test_scenarios.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
@ -259,8 +259,8 @@ Let's first see how it looks like at collection time::
$ pytest test_backends.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
@ -320,8 +320,8 @@ The result of this test will be successful::
$ pytest test_indirect_list.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
<Module 'test_indirect_list.py'>
<Function 'test_indirect[a-b]'>
@ -447,8 +447,8 @@ If you run this with reporting for skips enabled::
$ pytest -rs test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .s

View File

@ -95,7 +95,7 @@ the :confval:`python_files`, :confval:`python_classes` and
:confval:`python_functions` configuration options. Example::
# content of pytest.ini
# can also be defined in in tox.ini or setup.cfg file, although the section
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
@ -117,7 +117,7 @@ then the test collection looks like this::
$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 2 items
<Module 'check_myapp.py'>
@ -163,7 +163,7 @@ You can always peek at the collection tree without running tests like this::
. $ pytest --collect-only pythoncollection.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 3 items
<Module 'CWD/pythoncollection.py'>
@ -230,7 +230,7 @@ will be left out::
$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items

View File

@ -11,8 +11,8 @@ get on the terminal - we are working on that)::
assertion $ pytest failure_demo.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 42 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@ -81,7 +81,7 @@ get on the terminal - we are working on that)::
def test_eq_text(self):
> assert 'spam' == 'eggs'
E assert 'spam' == 'eggs'
E AssertionError: assert 'spam' == 'eggs'
E - spam
E + eggs
@ -92,7 +92,7 @@ get on the terminal - we are working on that)::
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
E assert 'foo 1 bar' == 'foo 2 bar'
E AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
E - foo 1 bar
E ? ^
E + foo 2 bar
@ -105,7 +105,7 @@ get on the terminal - we are working on that)::
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E foo
E - spam
E + eggs
@ -120,7 +120,7 @@ get on the terminal - we are working on that)::
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
> assert a == b
E assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E AssertionError: assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E Skipping 90 identical leading characters in diff, use -v to show
E Skipping 91 identical trailing characters in diff, use -v to show
E - 1111111111a222222222
@ -137,7 +137,7 @@ get on the terminal - we are working on that)::
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
> assert a == b
E assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E AssertionError: assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E Skipping 190 identical leading characters in diff, use -v to show
E Skipping 191 identical trailing characters in diff, use -v to show
E 1
@ -183,7 +183,7 @@ get on the terminal - we are working on that)::
def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E Omitting 1 identical items, use -v to show
E Differing items:
E {'b': 1} != {'b': 2}
@ -238,7 +238,7 @@ get on the terminal - we are working on that)::
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
> assert 'foo' not in text
E assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E AssertionError: assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E 'foo' is contained here:
E some multiline
E text
@ -256,7 +256,7 @@ get on the terminal - we are working on that)::
def test_not_in_text_single(self):
text = 'single foo line'
> assert 'foo' not in text
E assert 'foo' not in 'single foo line'
E AssertionError: assert 'foo' not in 'single foo line'
E 'foo' is contained here:
E single foo line
E ? +++
@ -269,7 +269,7 @@ get on the terminal - we are working on that)::
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
> assert 'foo' not in text
E assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E AssertionError: assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E 'foo' is contained here:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
@ -282,7 +282,7 @@ get on the terminal - we are working on that)::
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
> assert 'f'*70 not in text
E assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
@ -305,7 +305,7 @@ get on the terminal - we are working on that)::
class Foo(object):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E AssertionError: assert 1 == 2
E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
@ -338,7 +338,7 @@ get on the terminal - we are working on that)::
class Bar(object):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E AssertionError: assert 1 == 2
E + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
@ -359,7 +359,7 @@ get on the terminal - we are working on that)::
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1190>:1: ValueError
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1207>:1: ValueError
_______ TestRaises.test_raises_doesnt ________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@ -480,7 +480,7 @@ get on the terminal - we are working on that)::
s = "123"
g = "456"
> assert s.startswith(g)
E assert False
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
@ -495,7 +495,7 @@ get on the terminal - we are working on that)::
def g():
return "456"
> assert f().startswith(g())
E assert False
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()

View File

@ -113,8 +113,8 @@ directory with the above conftest.py::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======= no tests ran in 0.12 seconds ========
@ -164,8 +164,8 @@ and when running it will see a skipped "slow" test::
$ pytest -rs # "-rs" means report details on the little 's'
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .s
@ -178,8 +178,8 @@ Or run it including the ``slow`` marked test::
$ pytest --runslow
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py ..
@ -302,9 +302,9 @@ which will add the string to the test header accordingly::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======= no tests ran in 0.12 seconds ========
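The ``conftest.py`` hook producing the ``project deps`` line in the header above looks roughly like this (a sketch reconstructed from the output; the hook itself is abbreviated elsewhere in this diff):

.. code-block:: python

    # content of conftest.py (sketch)
    def pytest_report_header(config):
        return "project deps: mylib-1.1"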
@ -327,11 +327,11 @@ which will add info only when run with "--v"::
$ pytest -v
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
info1: did you know that ...
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items
======= no tests ran in 0.12 seconds ========
@ -340,8 +340,8 @@ and nothing when run plainly::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======= no tests ran in 0.12 seconds ========
@ -374,8 +374,8 @@ Now we can profile which test functions execute the slowest::
$ pytest --durations=3
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_some_are_slow.py ...
@ -440,8 +440,8 @@ If we run this::
$ pytest -rx
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_step.py .Fx.
@ -476,7 +476,7 @@ concept. It's however recommended to have explicit fixture references in your
tests or test classes rather than relying on implicitly executing
setup/teardown functions, especially if they are far away from the actual tests.
Here is a an example for making a ``db`` fixture available in a directory:
Here is an example for making a ``db`` fixture available in a directory:
.. code-block:: python
@ -519,8 +519,8 @@ We can run this::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items
test_step.py .Fx.
@ -585,7 +585,7 @@ environment you can implement a hook that gets called when the test
"report" object is about to be created. Here we write out all failing
test calls and also access a fixture (if it was used by the test) in
case you want to query/look at it during your post processing. In our
case we just write some informations out to a ``failures`` file:
case we just write some information out to a ``failures`` file:
.. code-block:: python
@ -627,8 +627,8 @@ and run them::
$ pytest test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py FF
@ -678,7 +678,7 @@ here is a little example implemented via a local plugin:
outcome = yield
rep = outcome.get_result()
# set an report attribute for each phase of a call, which can
# set a report attribute for each phase of a call, which can
# be "setup", "call", "teardown"
setattr(item, "rep_" + rep.when, rep)
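A fixture can then inspect these attributes, for example to react to a failed setup or call phase (a sketch assuming the hook above lives in ``conftest.py``):

.. code-block:: python

    import pytest

    @pytest.fixture
    def something(request):
        yield
        # request.node is the test item; rep_setup/rep_call were set by the hook above
        if request.node.rep_setup.failed:
            print("setting up a test failed!", request.node.nodeid)
        elif request.node.rep_setup.passed and request.node.rep_call.failed:
            print("executing test failed", request.node.nodeid)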
@ -721,8 +721,8 @@ and run it::
$ pytest -s test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails

View File

@ -66,14 +66,6 @@ This completely avoids previous issues of confusing assertion-reporting.
It also means, that you can use Python's ``-O`` optimization without losing
assertions in test modules.
``pytest`` contains a second, mostly obsolete, assert debugging technique
invoked via ``--assert=reinterpret``: When an ``assert`` statement fails, ``pytest`` re-interprets
the expression part to show intermediate values. This technique suffers
from a caveat that the rewriting does not: If your expression has side
effects (better to avoid them anyway!) the intermediate values may not
be the same, confusing the reinterpreter and obfuscating the initial
error (this is also explained at the command line if it happens).
You can also turn off all assertion interaction using the
``--assert=plain`` option.

View File

@ -11,7 +11,7 @@ pytest fixtures: explicit, modular, scalable
.. _`xUnit`: http://en.wikipedia.org/wiki/XUnit
.. _`purpose of test fixtures`: http://en.wikipedia.org/wiki/Test_fixture#Software
.. _`Dependency injection`: http://en.wikipedia.org/wiki/Dependency_injection#Definition
.. _`Dependency injection`: http://en.wikipedia.org/wiki/Dependency_injection
The `purpose of test fixtures`_ is to provide a fixed baseline
upon which tests can reliably and repeatedly execute. pytest fixtures
@ -70,8 +70,8 @@ marked ``smtp`` fixture function. Running the test looks like this::
$ pytest test_smtpsimple.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_smtpsimple.py F
@ -188,8 +188,8 @@ inspect what is going on and can now run the tests::
$ pytest test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py FF
@ -243,7 +243,9 @@ Fixture finalization / executing teardown code
pytest supports execution of fixture specific finalization code
when the fixture goes out of scope. By using a ``yield`` statement instead of ``return``, all
the code after the *yield* statement serves as the teardown code.::
the code after the *yield* statement serves as the teardown code:
.. code-block:: python
# content of conftest.py
@ -257,8 +259,9 @@ the code after the *yield* statement serves as the teardown code.::
print("teardown smtp")
smtp.close()
The ``print`` and ``smtp.close()`` statements will execute when the last test using
the fixture in the module has finished execution, regardless of the exception status of the tests.
The ``print`` and ``smtp.close()`` statements will execute when the last test in
the module has finished execution, regardless of the exception status of the
tests.
Let's execute it::
@ -274,22 +277,23 @@ occur around each single test. In either case the test
module itself does not need to change or know about these details
of fixture setup.
Note that we can also seamlessly use the ``yield`` syntax with ``with`` statements::
Note that we can also seamlessly use the ``yield`` syntax with ``with`` statements:
.. code-block:: python
# content of test_yield2.py
import smtplib
import pytest
@pytest.fixture
def passwd():
with open("/etc/passwd") as f:
yield f.readlines()
@pytest.fixture(scope="module")
def smtp(request):
with smtplib.SMTP("smtp.gmail.com") as smtp:
yield smtp # provide the fixture value
def test_has_lines(passwd):
assert len(passwd) >= 1
The file ``f`` will be closed after the test finished execution
because the Python ``file`` object supports finalization when
The ``smtp`` connection will be closed after the test finished execution
because the ``smtp`` object automatically closes when
the ``with`` statement ends.
@ -318,8 +322,7 @@ the ``with`` statement ends.
request.addfinalizer(fin)
return smtp # provide the fixture value
The ``fin`` function will execute when the last test using
the fixture in the module has finished execution.
The ``fin`` function will execute when the last test in the module has finished execution.
This method is still fully supported, but ``yield`` is recommended from 2.10 onward because
it is considered simpler and better describes the natural code flow.
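For reference, the complete ``addfinalizer`` variant of the ``smtp`` fixture reads roughly like this (a sketch; only fragments of it appear in this diff):

.. code-block:: python

    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp(request):
        smtp = smtplib.SMTP("smtp.gmail.com")

        def fin():
            print("finalizing %s" % smtp)
            smtp.close()

        request.addfinalizer(fin)
        return smtp  # provide the fixture value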
@ -352,8 +355,8 @@ again, nothing much has changed::
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
.
2 failed, 1 passed in 0.12 seconds
2 failed in 0.12 seconds
Let's quickly create another test module that actually sets the
server URL in its module namespace::
@ -375,6 +378,8 @@ Running it::
assert 0, smtp.helo()
E AssertionError: (250, b'mail.python.org')
E assert 0
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org)
voila! The ``smtp`` fixture function picked up our mail server name
from the module namespace.
@ -448,7 +453,7 @@ So let's just do another run::
response, msg = smtp.ehlo()
assert response == 250
> assert b"smtp.gmail.com" in msg
E assert b'smtp.gmail.com' in b'mail.python.org\nSIZE 51200000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nSIZE 51200000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'
test_module.py:5: AssertionError
-------------------------- Captured stdout setup ---------------------------
@ -464,6 +469,8 @@ So let's just do another run::
E assert 0
test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
4 failed in 0.12 seconds
We see that our two test functions each ran twice, against the different
@ -516,9 +523,9 @@ Running the above tests results in the following test IDs being used::
$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 11 items
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 10 items
<Module 'test_anothersmtp.py'>
<Function 'test_showhelo[smtp.gmail.com]'>
<Function 'test_showhelo[mail.python.org]'>
@ -532,8 +539,6 @@ Running the above tests results in the following test IDs being used::
<Function 'test_noop[smtp.gmail.com]'>
<Function 'test_ehlo[mail.python.org]'>
<Function 'test_noop[mail.python.org]'>
<Module 'test_yield2.py'>
<Function 'test_has_lines'>
======= no tests ran in 0.12 seconds ========
@ -569,9 +574,9 @@ Here we declare an ``app`` fixture which receives the previously defined
$ pytest -v test_appsetup.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED
@ -638,9 +643,9 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ pytest -v -s test_module.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile:
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items
test_module.py::test_0[1] SETUP otherarg 1
@ -701,7 +706,7 @@ Using fixtures from classes, modules or projects
Sometimes test functions do not directly need access to a fixture object.
For example, tests may require to operate with an empty directory as the
current working directory but otherwise do not care for the concrete
directory. Here is how you can can use the standard `tempfile
directory. Here is how you can use the standard `tempfile
<http://docs.python.org/library/tempfile.html>`_ and pytest fixtures to
achieve it. We separate the creation of the fixture into a conftest.py
file::
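    # content of conftest.py -- a sketch of the fixture described above;
    # the exact code is abbreviated in this diff
    import os
    import tempfile

    import pytest

    @pytest.fixture()
    def cleandir():
        # create a fresh temporary directory and make it the cwd for the test
        newpath = tempfile.mkdtemp()
        os.chdir(newpath)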
@ -998,7 +1003,7 @@ Given the tests file structure is:
@pytest.mark.parametrize('username', ['directly-overridden-username-other'])
def test_username_other(other_username):
assert username == 'other-directly-overridden-username-other'
assert other_username == 'other-directly-overridden-username-other'
In the example above, a fixture value is overridden by the test parameter value. Note that the value of the fixture
can be overridden this way even if the test doesn't use it directly (doesn't mention it in the function prototype).

View File

@ -97,7 +97,7 @@ sets. pytest-2.3 introduces a decorator for use on the factory itself::
... # use request.param
Here the factory will be invoked twice (with the respective "mysql"
and "pg" values set as ``request.param`` attributes) and and all of
and "pg" values set as ``request.param`` attributes) and all of
the tests requiring "db" will run twice as well. The "mysql" and
"pg" values will also be used for reporting the test-invocation variants.

View File

@ -26,7 +26,7 @@ Installation::
To check your installation has installed the correct version::
$ pytest --version
This is pytest version 3.0.2, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py
This is pytest version 3.0.7, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py
.. _`simpletest`:
@ -46,8 +46,8 @@ That's it. You can execute the test function now::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_sample.py F
@ -134,7 +134,7 @@ run the module by passing its filename::
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert False
E AssertionError: assert False
E + where False = hasattr('hello', 'check')
test_class.py:8: AssertionError

View File

@ -16,10 +16,12 @@ Conventions for Python test discovery
* If no arguments are specified then collection starts from :confval:`testpaths`
(if configured) or the current directory. Alternatively, command line arguments
can be used in any combination of directories, file names or node ids.
* recurse into directories, unless they match :confval:`norecursedirs`
* ``test_*.py`` or ``*_test.py`` files, imported by their `test package name`_.
* ``Test`` prefixed test classes (without an ``__init__`` method)
* ``test_`` prefixed test functions or methods are test items
* Recurse into directories, unless they match :confval:`norecursedirs`.
* In those directories, search for ``test_*.py`` or ``*_test.py`` files, imported by their `test package name`_.
* From those files, collect test items:
* ``test_`` prefixed test functions or methods outside of a class
* ``test_`` prefixed test functions or methods inside ``Test`` prefixed test classes (without an ``__init__`` method)
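A small file illustrating the collection rules above (hypothetical names):

.. code-block:: python

    # content of test_example.py -- the file matches test_*.py and is collected

    def test_function():           # collected: test_ prefixed function outside a class
        assert True

    class TestClass:               # collected: Test prefixed class without __init__
        def test_method(self):     # collected: test_ prefixed method
            assert True

    class Helper:                  # not collected: class name lacks the Test prefix
        def test_like_method(self):
            pass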
For examples of how to customize your test discovery see :doc:`example/pythoncollection`.
@ -28,75 +30,108 @@ Within Python modules, ``pytest`` also discovers tests using the standard
Choosing a test layout / import rules
------------------------------------------
-------------------------------------
``pytest`` supports two common test layouts:
* putting tests into an extra directory outside your actual application
code, useful if you have many functional tests or for other reasons
want to keep tests separate from actual application code (often a good
idea)::
Tests outside application code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
setup.py # your setuptools Python package metadata
Putting tests into an extra directory outside your actual application code
might be useful if you have many functional tests or for other reasons want
to keep tests separate from actual application code (often a good idea)::
setup.py
mypkg/
__init__.py
appmodule.py
app.py
view.py
tests/
test_app.py
test_view.py
...
This way your tests can run easily against an installed version
of ``mypkg``.
* inlining test directories into your application package, useful if you
have direct relation between (unit-)test and application modules and
want to distribute your tests along with your application::
Note that using this scheme your test files must have **unique names**, because
``pytest`` will import them as *top-level* modules since there are no packages
to derive a full package name from. In other words, the test files in the example above will
be imported as ``test_app`` and ``test_view`` top-level modules by adding ``tests/`` to
``sys.path``.
setup.py # your setuptools Python package metadata
If you need to have test modules with the same name, you might add ``__init__.py`` files to your
``tests`` folder and subfolders, changing them to packages::
setup.py
mypkg/
...
tests/
__init__.py
foo/
__init__.py
test_view.py
bar/
__init__.py
test_view.py
Now pytest will load the modules as ``tests.foo.test_view`` and ``tests.bar.test_view``, allowing
you to have modules with the same name. But now this introduces a subtle problem: in order to load
the test modules from the ``tests`` directory, pytest prepends the root of the repository to
``sys.path``, which adds the side-effect that now ``mypkg`` is also importable.
This is problematic if you are using a tool like `tox`_ to test your package in a virtual environment,
because you want to test the *installed* version of your package, not the local code from the repository.
In this situation, it is **strongly** suggested to use a ``src`` layout where the application root package resides in a
sub-directory of your root::
setup.py
src/
mypkg/
__init__.py
app.py
view.py
tests/
__init__.py
foo/
__init__.py
test_view.py
bar/
__init__.py
test_view.py
This layout prevents a lot of common pitfalls and has many benefits, which are better explained in this excellent
`blog post by Ionel Cristian Mărieș <https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure>`_.
Tests as part of application code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Inlining test directories into your application package
is useful if you have a direct relation between tests and application modules and
want to distribute them along with your application::
setup.py
mypkg/
__init__.py
appmodule.py
...
app.py
view.py
test/
__init__.py
test_app.py
test_view.py
...
Important notes relating to both schemes:
In this scheme, it is easy to run your tests using the ``--pyargs`` option::
- **make sure that "mypkg" is importable**, for example by typing once::
pytest --pyargs mypkg
pip install -e . # install package using setup.py in editable mode
# similar to running `python setup.py develop` or
# `conda develop`
This installs your package with a symlink to your development code
instead of placing the code directly in the install directory.
This way you can edit the code and run tests on your edits without
having to reinstall every time.
``pytest`` will discover where ``mypkg`` is installed and collect tests from there.
- **avoid "__init__.py" files in your test directories**.
This way your tests can run easily against an installed version
of ``mypkg``, independently from the installed package if it contains
the tests or not.
Note that this layout also works in conjunction with the ``src`` layout mentioned in the previous section.
- With inlined tests you might put ``__init__.py`` into test
directories and make them installable as part of your application.
Using the ``pytest --pyargs mypkg`` invocation pytest will
discover where mypkg is installed and collect tests from there.
With the "external" test you can still distribute tests but they
will not be installed or become importable.
Typically you can run tests by pointing to test directories or modules::
pytest tests/test_app.py # for external test dirs
pytest mypkg/test/test_app.py # for inlined test dirs
pytest mypkg # run tests in all below test directories
pytest # run all tests below current dir
...
Because of the above ``editable install`` mode you can change your
source code (both tests and the app) and rerun tests at will.
Once you are done with your work, you can `use tox`_ to make sure
that the package is really correct and tests pass in all
required configurations.
.. note::
@ -132,7 +167,7 @@ required configurations.
The reason for this somewhat evolved importing technique is
that in larger projects multiple test modules might import
from each other and thus deriving a canonical import name helps
to avoid surprises such as a test modules getting imported twice.
to avoid surprises such as a test module getting imported twice.
.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv
@ -149,21 +184,24 @@ for installing your application and any dependencies
as well as the ``pytest`` package itself. This ensures your code and
dependencies are isolated from the system Python installation.
If you frequently release code and want to make sure that your actual
You can then install your package in "editable" mode::
pip install -e .
which lets you change your source code (both tests and application) and rerun tests at will.
This is similar to running ``python setup.py develop`` or ``conda develop`` in that it installs
your package using a symlink to your development code.
Once you are done with your work and want to make sure that your actual
package passes all tests you may want to look into `tox`_, the
virtualenv test automation tool and its `pytest support
<http://testrun.org/tox/latest/example/pytest.html>`_.
<https://tox.readthedocs.io/en/latest/example/pytest.html>`_.
Tox helps you to setup virtualenv environments with pre-defined
dependencies and then executing a pre-configured test command with
options. It will run tests against the installed package and not
against your source code checkout, helping to detect packaging
glitches.
Continuous integration services such as Jenkins_ can make use of the
``--junitxml=PATH`` option to create a JUnitXML file and generate reports (e.g.
by publishing the results in a nice format with the `Jenkins xUnit Plugin
<https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin>`_).
Integrating with setuptools / ``python setup.py test`` / ``pytest-runner``
--------------------------------------------------------------------------
@ -243,9 +281,10 @@ your own setuptools Test command for invoking pytest.
self.pytest_args = []
def run_tests(self):
import shlex
#import here, cause outside the eggs aren't loaded
import pytest
errno = pytest.main(self.pytest_args)
errno = pytest.main(shlex.split(self.pytest_args))
sys.exit(errno)
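A fuller sketch of such a custom test command (adapted for illustration; ``pytest_args`` defaults to a string here so that ``shlex.split`` can process it):

.. code-block:: python

    # content of setup.py (abridged sketch)
    import sys

    from setuptools import setup
    from setuptools.command.test import test as TestCommand


    class PyTest(TestCommand):
        user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")]

        def initialize_options(self):
            TestCommand.initialize_options(self)
            self.pytest_args = ''

        def run_tests(self):
            import shlex
            # import here, because outside the eggs aren't loaded
            import pytest
            errno = pytest.main(shlex.split(self.pytest_args))
            sys.exit(errno)


    setup(
        # name, version, packages, ... omitted
        tests_require=['pytest'],
        cmdclass={'test': PyTest},
    )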

View File

@ -14,19 +14,19 @@ An example of a simple test:
.. code-block:: python
# content of test_sample.py
def func(x):
def inc(x):
return x + 1
def test_answer():
assert func(3) == 5
assert inc(3) == 5
To execute it::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_sample.py F
@ -35,9 +35,9 @@ To execute it::
_______ test_answer ________
def test_answer():
> assert func(3) == 5
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = func(3)
E + where 4 = inc(3)
test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========

View File

@ -35,7 +35,7 @@ patch this function before calling into a function which uses it::
assert x == '/abc/.ssh'
Here our test function monkeypatches ``os.path.expanduser`` and
then calls into an function that calls it. After the test function
then calls into a function that calls it. After the test function
finishes the ``os.path.expanduser`` modification will be undone.
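Pieced together, the whole example is along these lines (a sketch reconstructed from the assertion shown above):

.. code-block:: python

    # content of test_module.py (sketch)
    import os.path

    def getssh():  # pseudo application code
        return os.path.join(os.path.expanduser("~admin"), '.ssh')

    def test_mytest(monkeypatch):
        def mockreturn(path):
            return '/abc'
        monkeypatch.setattr(os.path, 'expanduser', mockreturn)
        x = getssh()
        assert x == '/abc/.ssh'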
example: preventing "requests" from remote operations
@ -55,6 +55,14 @@ will delete the method ``request.session.Session.request``
so that any attempts within tests to create http requests will fail.
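Such a ``conftest.py`` boils down to an autouse fixture like this (a sketch; the attribute path of the ``requests`` library is assumed to be ``requests.sessions.Session.request``):

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest

    @pytest.fixture(autouse=True)
    def no_requests(monkeypatch):
        # removing Session.request makes any accidental network call fail loudly
        monkeypatch.delattr("requests.sessions.Session.request")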
.. note::
Be advised that it is not recommended to patch builtin functions such as ``open``,
``compile``, etc., because it might break pytest's internals. If that's
unavoidable, passing ``--tb=native``, ``--assert=plain`` and ``--capture=no`` might
help although there's no guarantee.
Method reference of the monkeypatch fixture
-------------------------------------------

View File

@ -55,8 +55,8 @@ them in turn::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F
@ -73,7 +73,7 @@ them in turn::
])
def test_eval(test_input, expected):
> assert eval(test_input) == expected
E assert 54 == 42
E AssertionError: assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
@ -103,8 +103,8 @@ Let's run this::
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..x
@ -186,7 +186,7 @@ Let's also run with a stringinput that will lead to a failing test::
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E assert False
E AssertionError: assert False
E + where False = <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
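For context, the ``stringinput`` argument in this example is generated from a command line option via ``pytest_generate_tests``; the ``conftest.py`` behind it is roughly (a sketch):

.. code-block:: python

    # content of conftest.py (sketch)
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.getoption("stringinput"))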

View File

@ -58,7 +58,7 @@ Here are some examples of projects using ``pytest`` (please send notes via :ref:
* `katcp <https://bitbucket.org/hodgestar/katcp>`_ Telescope communication protocol over Twisted
* `kss plugin timer <http://pypi.python.org/pypi/kss.plugin.timer>`_
* `pyudev <https://pyudev.readthedocs.io/en/latest/tests/plugins.html>`_ a pure Python binding to the Linux library libudev
* `pytest-localserver <https://bitbucket.org/basti/pytest-localserver/>`_ a plugin for pytest that provides a httpserver and smtpserver
* `pytest-localserver <https://bitbucket.org/basti/pytest-localserver/>`_ a plugin for pytest that provides an httpserver and smtpserver
* `pytest-monkeyplus <http://pypi.python.org/pypi/pytest-monkeyplus/>`_ a plugin that extends monkeypatch
These projects help integrate ``pytest`` into other Python frameworks:

View File

@ -1,8 +1,12 @@
.. _`asserting warnings`:
.. _assertwarnings:
Asserting Warnings
=====================================================
.. _`asserting warnings with the warns function`:
.. _warns:
Asserting warnings with the warns function
@ -46,6 +50,8 @@ Alternatively, you can examine raised warnings in detail using the
``DeprecationWarning`` and ``PendingDeprecationWarning`` are treated
differently; see :ref:`ensuring_function_triggers`.
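A minimal sketch of the ``pytest.warns`` usage this section describes:

.. code-block:: python

    import warnings
    import pytest

    def test_warning():
        with pytest.warns(UserWarning):
            warnings.warn("this is a warning", UserWarning)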
.. _`recording warnings`:
.. _recwarn:
Recording warnings
@ -96,6 +102,8 @@ class of the warning. The ``message`` is the warning itself; calling
``DeprecationWarning`` and ``PendingDeprecationWarning`` are treated
differently; see :ref:`ensuring_function_triggers`.
.. _`ensuring a function triggers a deprecation warning`:
.. _ensuring_function_triggers:
Ensuring a function triggers a deprecation warning

3
doc/en/requirements.txt Normal file
View File

@ -0,0 +1,3 @@
# pinning sphinx to 1.4.* due to search issues with rtd:
# https://github.com/rtfd/readthedocs-sphinx-ext/issues/25
sphinx ==1.4.*

View File

@ -2,7 +2,7 @@
.. _skipping:
Skip and xfail: dealing with tests that can not succeed
Skip and xfail: dealing with tests that cannot succeed
=====================================================================
If you have test functions that cannot be run on certain platforms
@ -224,8 +224,8 @@ Running it with the report-on-xfail option gives this output::
example $ pytest -rx xfail_demo.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/example, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items
xfail_demo.py xxxxxxx
@ -293,6 +293,20 @@ imperatively, in test or setup code::
# or
pytest.skip("unsupported configuration")
Note that calling ``pytest.skip`` at the module level
is not allowed since pytest 3.0. If you are upgrading
and ``pytest.skip`` was being used at the module level, you can set a
``pytestmark`` variable:
.. code-block:: python
# before pytest 3.0
pytest.skip('skipping all tests because of reasons')
# after pytest 3.0
pytestmark = pytest.mark.skip('skipping all tests because of reasons')
``pytestmark`` applies a mark or list of marks to all tests in a module.
Skipping on a missing import dependency
--------------------------------------------------
@ -371,3 +385,27 @@ The equivalent with "boolean conditions" is::
imported before pytest's argument parsing takes place. For example,
``conftest.py`` files are imported before command line parsing and thus
``config.getvalue()`` will not execute correctly.
Summary
-------
Here's a quick guide on how to skip tests in a module in different situations:
1. Skip all tests in a module unconditionally:
.. code-block:: python
pytestmark = pytest.mark.skip('all tests still WIP')
2. Skip all tests in a module based on some condition:
.. code-block:: python
pytestmark = pytest.mark.skipif(sys.platform == 'win32', reason='tests for linux only')
3. Skip all tests in a module if some import is missing:
.. code-block:: python
pexpect = pytest.importorskip('pexpect')

View File

@ -4,13 +4,18 @@ Talks and Tutorials
.. sidebar:: Next Open Trainings
`professional testing with pytest and tox <http://www.python-academy.com/courses/specialtopics/python_course_testing.html>`_, 27-29th June 2016, Freiburg, Germany
`Professional Testing with Python
<http://www.python-academy.com/courses/specialtopics/python_course_testing.html>`_,
26-28 April 2017, Leipzig, Germany.
.. _`funcargs`: funcargs.html
Talks and blog postings
---------------------------------------------
- `Pythonic testing, Igor Starikov (Russian, PyNsk, November 2016)
<https://www.youtube.com/watch?v=_92nfdd5nK8>`_.
- `pytest - Rapid Simple Testing, Florian Bruhin, Swiss Python Summit 2016
<https://www.youtube.com/watch?v=rCBHkQ_LVIs>`_.

View File

@ -71,7 +71,7 @@ you can ad-hoc distribute your tests by typing::
pytest -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg
This will synchronize your ``mypkg`` package directory
to an remote ssh account and then locally collect tests
to a remote ssh account and then locally collect tests
and send them to remote places for execution.
You can specify multiple ``--rsyncdir`` directories

View File

@ -29,8 +29,8 @@ Running this would result in a passed test except for the last
$ pytest test_tmpdir.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_tmpdir.py F
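The ``test_tmpdir.py`` referred to above is along these lines (a sketch; the trailing ``assert 0`` is what makes the run fail so the created paths show up in the traceback):

.. code-block:: python

    # content of test_tmpdir.py (sketch)
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0  # deliberately fail to inspect the tmpdir path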

View File

@ -17,6 +17,14 @@ run using ``pytest``. We assume here that you are familiar with writing
``unittest.TestCase`` style tests and rather focus on
integration aspects.
Note that this is meant as a provisional way of running your test code
until you fully convert to pytest-style tests. To fully take advantage of
:ref:`fixtures <fixture>`, :ref:`parametrization <parametrize>` and
:ref:`hooks <writing-plugins>` you should convert (tools like `unittest2pytest
<https://pypi.python.org/pypi/unittest2pytest/>`__ are helpful).
Also, not all 3rd party plugins are expected to work well with
``unittest.TestCase`` style tests.
Usage
-------------------------------------------------------------------
@ -38,7 +46,11 @@ the general ``pytest`` documentation for many more examples.
Running tests from ``unittest.TestCase`` subclasses with ``--pdb`` will
disable tearDown and cleanup methods for the case that an Exception
occurs. This allows proper post mortem debugging for all applications
which have significant logic in their tearDown machinery.
which have significant logic in their tearDown machinery. However,
supporting this feature has the following side effect: If people
overwrite ``unittest.TestCase`` ``__call__`` or ``run``, they need to
overwrite ``debug`` in the same way (this is also true for standard
unittest).
Mixing pytest fixtures into unittest.TestCase style tests
-----------------------------------------------------------
@ -96,8 +108,8 @@ the ``self.db`` values in the traceback::
$ pytest test_unittest_db.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_unittest_db.py FF
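The pattern behind this output (a sketch with assumed names): a class-scoped fixture attaches an object to ``request.cls``, and the ``TestCase`` opts in via ``usefixtures``:

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest

    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB(object):
            pass
        # attach a db object to the class of the requesting test context
        request.cls.db = DummyDB()

    # content of test_unittest_db.py (sketch)
    import unittest
    import pytest

    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")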
@ -187,12 +199,3 @@ was executed ahead of the ``test_method``.
pytest fixtures into unittest suites. And of course you can also start
to selectively leave away the ``unittest.TestCase`` subclassing, use
plain asserts and get the unlimited pytest feature set.
Converting from unittest to pytest
---------------------------------------
If you want to convert your unittest testcases to pytest, there are
some helpers like `unittest2pytest
<https://pypi.python.org/pypi/unittest2pytest/>`__, which uses lib2to3
and introspection for the transformation.

View File

@ -16,8 +16,20 @@ You can invoke testing through the Python interpreter from the command line::
python -m pytest [...]
This is equivalent to invoking the command line script ``pytest [...]``
directly.
This is almost equivalent to invoking the command line script ``pytest [...]``
directly, except that python will also add the current directory to ``sys.path``.
Possible exit codes
--------------------------------------------------------------
Running ``pytest`` can result in six different exit codes:
:Exit code 0: All tests were collected and passed successfully
:Exit code 1: Tests were collected and run but some of the tests failed
:Exit code 2: Test execution was interrupted by the user
:Exit code 3: Internal error happened while executing tests
:Exit code 4: pytest command line usage error
:Exit code 5: No tests were collected
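Since ``pytest.main()`` returns the same codes, they can also be checked programmatically (a small sketch; the path is illustrative):

.. code-block:: python

    import pytest

    exit_code = pytest.main(["-q", "tests/"])
    if exit_code == 5:
        print("no tests were collected")
    elif exit_code != 0:
        print("test run failed with exit code", exit_code)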
Getting help on version, option names, environment variables
--------------------------------------------------------------
@ -49,7 +61,7 @@ Several test run options::
# will select TestMyClass.test_something
# but not TestMyClass.test_method_simple
pytest test_mod.py::test_func # only run tests that match the "node ID",
# e.g "test_mod.py::test_func" will select
# e.g. "test_mod.py::test_func" will select
# only test_func in test_mod.py
pytest test_mod.py::TestClass::test_method # run a single method in
# a single class
@ -76,7 +88,7 @@ Examples for modifying traceback printing::
The ``--full-trace`` causes very long traces to be printed on error (longer
than ``--tb=long``). It also ensures that a stack trace is printed on
**KeyboardInterrrupt** (Ctrl+C).
**KeyboardInterrupt** (Ctrl+C).
This is very useful if the tests are taking too long and you interrupt them
with Ctrl+C to find out where the tests are *hanging*. By default no output
will be shown (because KeyboardInterrupt is caught by pytest). By using this
@ -192,7 +204,7 @@ This will add an extra property ``example_key="1"`` to the generated
.. warning::
This is an experimental feature, and its interface might be replaced
``record_xml_property`` is an experimental feature, and its interface might be replaced
by something more powerful and general in future versions. The
functionality per-se will be kept, however.
@ -310,10 +322,6 @@ You can pass in options and arguments::
pytest.main(['-x', 'mytestdir'])
or pass in a string::
pytest.main("-x mytestdir")
You can specify additional plugins to ``pytest.main``::
# content of myinvoke.py

View File

@ -172,7 +172,7 @@ If a package is installed this way, ``pytest`` will load
.. note::
Make sure to include ``Framework :: Pytest`` in your list of
`PyPI classifiers <https://python-packaging-user-guide.readthedocs.io/en/latest/distributing/#classifiers>`_
`PyPI classifiers <https://python-packaging-user-guide.readthedocs.io/distributing/#classifiers>`_
to make it easy for users to find your plugin.
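In a plugin's ``setup.py`` this looks roughly as follows (all names are placeholders for a hypothetical ``pytest-example`` plugin):

.. code-block:: python

    # setup.py of a hypothetical pytest-example plugin (sketch)
    from setuptools import setup

    setup(
        name="pytest-example",
        packages=["pytest_example"],
        classifiers=["Framework :: Pytest"],
        entry_points={"pytest11": ["example = pytest_example.plugin"]},
    )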
@ -236,22 +236,33 @@ import ``helper.py`` normally. The contents of
Requiring/Loading plugins in a test module or conftest file
-----------------------------------------------------------
You can require plugins in a test module or a conftest file like this::
You can require plugins in a test module or a ``conftest.py`` file like this:
pytest_plugins = "name1", "name2",
.. code-block:: python
pytest_plugins = ["name1", "name2"]
When the test module or conftest plugin is loaded the specified plugins
will be loaded as well. You can also use dotted path like this::
will be loaded as well. Any module can be blessed as a plugin, including internal
application modules:
.. code-block:: python
pytest_plugins = "myapp.testsupport.myplugin"
which will import the specified module as a ``pytest`` plugin.
``pytest_plugins`` variables are processed recursively, so note that in the example above
if ``myapp.testsupport.myplugin`` also declares ``pytest_plugins``, the contents
of the variable will also be loaded as plugins, and so on.
Plugins imported like this will automatically be marked to require
assertion rewriting using the :func:`pytest.register_assert_rewrite`
mechanism. However for this to have any effect the module must not be
imported already, it it was already imported at the time the
``pytest_plugins`` statement is processed a warning will result and
This mechanism makes it easy to share fixtures within applications or even
external applications without the need to create external plugins using
the ``setuptools`` entry point technique.
Plugins imported by ``pytest_plugins`` will also automatically be marked
for assertion rewriting (see :func:`pytest.register_assert_rewrite`).
However for this to have any effect the module must not be
imported already; if it was already imported at the time the
``pytest_plugins`` statement is processed, a warning will result and
assertions inside the plugin will not be re-written. To fix this you
can either call :func:`pytest.register_assert_rewrite` yourself before
the module is imported, or you can arrange the code to delay the

View File

@ -1,20 +0,0 @@
#!/bin/bash
# this assumes plugins are installed as sister directories
set -e
cd ../pytest-pep8
pytest
cd ../pytest-instafail
pytest
cd ../pytest-cache
pytest
cd ../pytest-xprocess
pytest
#cd ../pytest-cov
#pytest
cd ../pytest-capturelog
pytest
cd ../pytest-xdist
pytest

Some files were not shown because too many files have changed in this diff Show More