#1642 Resolve conflicts

feuillemorte 2018-02-01 00:18:28 +03:00
commit 9f1772e679
79 changed files with 1476 additions and 600 deletions

.gitignore vendored
View File

@ -33,6 +33,7 @@ env/
3rdparty/
.tox
.cache
.pytest_cache
.coverage
.ropeproject
.idea

View File

@ -3,12 +3,15 @@ merlinux GmbH, Germany, office at merlinux eu
Contributors include::
Aaron Coleman
Abdeali JK
Abhijeet Kasurde
Ahn Ki-Wook
Alan Velasco
Alexander Johnson
Alexei Kozlenok
Anatoly Bubenkoff
Anders Hovmöller
Andras Tim
Andreas Zeidler
Andrzej Ostrowski
@ -17,6 +20,7 @@ Anthon van der Neut
Anthony Sottile
Antony Lee
Armin Rigo
Aron Coyle
Aron Curzon
Aviv Palivoda
Barney Gale
@ -38,6 +42,7 @@ Christian Boelsen
Christian Theunert
Christian Tismer
Christopher Gilling
Cyrus Maden
Daniel Grana
Daniel Hahler
Daniel Nuri
@ -150,6 +155,7 @@ Punyashloka Biswal
Quentin Pradet
Ralf Schmitt
Ran Benita
Raphael Castaneda
Raphael Pierzina
Raquel Alegre
Ravi Chandra

View File

@ -8,6 +8,138 @@
.. towncrier release notes start
Pytest 3.4.0 (2018-01-30)
=========================
Deprecations and Removals
-------------------------
- All pytest classes now subclass ``object`` for better Python 2/3 compatibility.
This should not affect user code except in very rare edge cases. (`#2147
<https://github.com/pytest-dev/pytest/issues/2147>`_)
Features
--------
- Introduce ``empty_parameter_set_mark`` ini option to select which mark to
apply when ``@pytest.mark.parametrize`` is given an empty set of parameters.
Valid options are ``skip`` (default) and ``xfail``. Note that it is planned
to change the default to ``xfail`` in future releases as this is considered
less error prone. (`#2527
<https://github.com/pytest-dev/pytest/issues/2527>`_)
- **Incompatible change**: after community feedback the `logging
<https://docs.pytest.org/en/latest/logging.html>`_ functionality has
undergone some changes. Please consult the `logging documentation
<https://docs.pytest.org/en/latest/logging.html#incompatible-changes-in-pytest-3-4>`_
for details. (`#3013 <https://github.com/pytest-dev/pytest/issues/3013>`_)
- Console output falls back to "classic" mode when capturing is disabled (``-s``),
otherwise the output gets garbled to the point of being useless. (`#3038
<https://github.com/pytest-dev/pytest/issues/3038>`_)
- New `pytest_runtest_logfinish
<https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_logfinish>`_
hook which is called when a test item has finished executing, analogous to
`pytest_runtest_logstart
<https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_start>`_.
(`#3101 <https://github.com/pytest-dev/pytest/issues/3101>`_)
- Improve performance when collecting tests using many fixtures. (`#3107
<https://github.com/pytest-dev/pytest/issues/3107>`_)
- New ``caplog.get_records(when)`` method which provides access to the captured
records for the ``"setup"``, ``"call"`` and ``"teardown"``
testing stages; a short usage sketch follows this list. (`#3117 <https://github.com/pytest-dev/pytest/issues/3117>`_)
- New fixture ``record_xml_attribute`` that allows modifying and inserting
attributes on the ``<testcase>`` xml node in JUnit reports. (`#3130
<https://github.com/pytest-dev/pytest/issues/3130>`_)
- The default cache directory has been renamed from ``.cache`` to
``.pytest_cache`` after community feedback that the name ``.cache`` did not
make it clear that it was used by pytest. (`#3138
<https://github.com/pytest-dev/pytest/issues/3138>`_)
- Colorize the levelname column in the live-log output. (`#3142
<https://github.com/pytest-dev/pytest/issues/3142>`_)
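A minimal usage sketch for the new ``caplog.get_records`` method (illustrative; the logger name and messages are invented)::

    import logging

    logger = logging.getLogger(__name__)

    def test_stages(caplog):
        logger.warning("hello from the call phase")
        setup_records = caplog.get_records("setup")  # records captured while fixtures ran
        call_records = caplog.get_records("call")    # records captured so far in this test
        assert call_records[-1].getMessage() == "hello from the call phase"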
Bug Fixes
---------
- Fix hanging pexpect test on MacOS by using flush() instead of wait().
(`#2022 <https://github.com/pytest-dev/pytest/issues/2022>`_)
- Fix restoring Python state after in-process pytest runs with the
``pytester`` plugin; this may break tests using multiple inprocess
pytest runs if later ones depend on earlier ones leaking global interpreter
changes. (`#3016 <https://github.com/pytest-dev/pytest/issues/3016>`_)
- Fix skipping plugin reporting hook when test aborted before plugin setup
hook. (`#3074 <https://github.com/pytest-dev/pytest/issues/3074>`_)
- Fix progress percentage reported when tests fail during teardown. (`#3088
<https://github.com/pytest-dev/pytest/issues/3088>`_)
- **Incompatible change**: ``-o/--override`` option no longer eats all the
remaining options, which can lead to surprising behavior: for example,
``pytest -o foo=1 /path/to/test.py`` would fail because ``/path/to/test.py``
would be considered as part of the ``-o`` command-line argument. One
consequence of this is that now multiple configuration overrides need
multiple ``-o`` flags: ``pytest -o foo=1 -o bar=2``. (`#3103
<https://github.com/pytest-dev/pytest/issues/3103>`_)
Improved Documentation
----------------------
- Document hooks (defined with ``historic=True``) which cannot be used with
``hookwrapper=True``. (`#2423
<https://github.com/pytest-dev/pytest/issues/2423>`_)
- Clarify that warning capturing doesn't change the warning filter by default.
(`#2457 <https://github.com/pytest-dev/pytest/issues/2457>`_)
- Clarify a possible confusion when using pytest_fixture_setup with fixture
functions that return None. (`#2698
<https://github.com/pytest-dev/pytest/issues/2698>`_)
- Fix the wording of a sentence on doctest flags used in pytest. (`#3076
<https://github.com/pytest-dev/pytest/issues/3076>`_)
- Prefer ``https://*.readthedocs.io`` over ``http://*.rtfd.org`` for links in
the documentation. (`#3092
<https://github.com/pytest-dev/pytest/issues/3092>`_)
- Improve readability (wording, grammar) of Getting Started guide (`#3131
<https://github.com/pytest-dev/pytest/issues/3131>`_)
- Added note that calling pytest.main multiple times from the same process is
not recommended because of import caching. (`#3143
<https://github.com/pytest-dev/pytest/issues/3143>`_)
Trivial/Internal Changes
------------------------
- Show a simple, easy-to-understand error when keyword expressions trigger a syntax error
(for example, ``"-k foo and import"`` will show an error that you cannot use
the ``import`` keyword in expressions). (`#2953
<https://github.com/pytest-dev/pytest/issues/2953>`_)
- Change parametrized automatic test id generation to use the ``__name__``
attribute of functions instead of the fallback argument name plus counter.
(`#2976 <https://github.com/pytest-dev/pytest/issues/2976>`_)
- Replace py.std with stdlib imports. (`#3067
<https://github.com/pytest-dev/pytest/issues/3067>`_)
- Corrected 'you' to 'your' in logging docs. (`#3129
<https://github.com/pytest-dev/pytest/issues/3129>`_)
Pytest 3.3.2 (2017-12-25)
=========================

View File

@ -12,7 +12,7 @@ taking a lot of time to make a new one.
#. Install development dependencies in a virtual environment with::
pip3 install -r tasks/requirements.txt
pip3 install -U -r tasks/requirements.txt
#. Create a branch ``release-X.Y.Z`` with the version for the release.

View File

@ -60,7 +60,7 @@ import os
from glob import glob
class FastFilesCompleter:
class FastFilesCompleter(object):
'Fast file completer class'
def __init__(self, directories=True):

View File

@ -1,5 +1,7 @@
from __future__ import absolute_import, division, print_function
import inspect
import sys
import traceback
from inspect import CO_VARARGS, CO_VARKEYWORDS
import re
from weakref import ref
@ -422,7 +424,7 @@ class ExceptionInfo(object):
"""
if style == 'native':
return ReprExceptionInfo(ReprTracebackNative(
py.std.traceback.format_exception(
traceback.format_exception(
self.type,
self.value,
self.traceback[0]._rawentry,
@ -556,7 +558,7 @@ class FormattedExcinfo(object):
# else:
# self._line("%-10s =\\" % (name,))
# # XXX
# py.std.pprint.pprint(value, stream=self.excinfowriter)
# pprint.pprint(value, stream=self.excinfowriter)
return ReprLocals(lines)
def repr_traceback_entry(self, entry, excinfo=None):
@ -669,7 +671,7 @@ class FormattedExcinfo(object):
else:
# fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None))
reprtraceback = ReprTracebackNative(traceback.format_exception(type(e), e, None))
reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)]
@ -886,7 +888,7 @@ def getrawcode(obj, trycall=True):
obj = getattr(obj, 'f_code', obj)
obj = getattr(obj, '__code__', obj)
if trycall and not hasattr(obj, 'co_firstlineno'):
if hasattr(obj, '__call__') and not py.std.inspect.isclass(obj):
if hasattr(obj, '__call__') and not inspect.isclass(obj):
x = getrawcode(obj.__call__, trycall=False)
if hasattr(x, 'co_firstlineno'):
return x

View File

@ -3,6 +3,7 @@ from __future__ import absolute_import, division, generators, print_function
import ast
from ast import PyCF_ONLY_AST as _AST_FLAG
from bisect import bisect_right
import linecache
import sys
import six
import inspect
@ -191,7 +192,7 @@ class Source(object):
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
py.std.linecache.cache[filename] = (1, None, lines, filename)
linecache.cache[filename] = (1, None, lines, filename)
return co
#
@ -223,8 +224,7 @@ def getfslineno(obj):
code = _pytest._code.Code(obj)
except TypeError:
try:
fn = (py.std.inspect.getsourcefile(obj) or
py.std.inspect.getfile(obj))
fn = inspect.getsourcefile(obj) or inspect.getfile(obj)
except TypeError:
return "", -1
@ -248,7 +248,7 @@ def getfslineno(obj):
def findsource(obj):
try:
sourcelines, lineno = py.std.inspect.findsource(obj)
sourcelines, lineno = inspect.findsource(obj)
except py.builtin._sysex:
raise
except: # noqa

View File

@ -56,7 +56,7 @@ class DummyRewriteHook(object):
pass
class AssertionState:
class AssertionState(object):
"""State for the assertion plugin."""
def __init__(self, config, mode):

View File

@ -17,7 +17,7 @@ class Cache(object):
self.config = config
self._cachedir = Cache.cache_dir_from_config(config)
self.trace = config.trace.root.get("cache")
if config.getvalue("cacheclear"):
if config.getoption("cacheclear"):
self.trace("clearing cachedir")
if self._cachedir.check():
self._cachedir.remove()
@ -98,13 +98,13 @@ class Cache(object):
json.dump(value, f, indent=2, sort_keys=True)
class LFPlugin:
class LFPlugin(object):
""" Plugin which implements the --lf (run last-failing) option """
def __init__(self, config):
self.config = config
active_keys = 'lf', 'failedfirst'
self.active = any(config.getvalue(key) for key in active_keys)
self.active = any(config.getoption(key) for key in active_keys)
self.lastfailed = config.cache.get("cache/lastfailed", {})
self._previously_failed_count = None
@ -114,7 +114,8 @@ class LFPlugin:
mode = "run all (no recorded failures)"
else:
noun = 'failure' if self._previously_failed_count == 1 else 'failures'
suffix = " first" if self.config.getvalue("failedfirst") else ""
suffix = " first" if self.config.getoption(
"failedfirst") else ""
mode = "rerun previous {count} {noun}{suffix}".format(
count=self._previously_failed_count, suffix=suffix, noun=noun
)
@ -151,7 +152,7 @@ class LFPlugin:
# running a subset of all tests with recorded failures outside
# of the set of tests currently executing
return
if self.config.getvalue("lf"):
if self.config.getoption("lf"):
items[:] = previously_failed
config.hook.pytest_deselected(items=previously_passed)
else:
@ -159,7 +160,7 @@ class LFPlugin:
def pytest_sessionfinish(self, session):
config = self.config
if config.getvalue("cacheshow") or hasattr(config, "slaveinput"):
if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
return
saved_lastfailed = config.cache.get("cache/lastfailed", {})
@ -185,7 +186,7 @@ def pytest_addoption(parser):
'--cache-clear', action='store_true', dest="cacheclear",
help="remove all cache contents at start of test run.")
parser.addini(
"cache_dir", default='.cache',
"cache_dir", default='.pytest_cache',
help="cache directory path.")

View File

@ -61,7 +61,7 @@ def pytest_load_initial_conftests(early_config, parser, args):
sys.stderr.write(err)
class CaptureManager:
class CaptureManager(object):
"""
Capture plugin, manages that the appropriate capture method is enabled/disabled during collection and each
test phase (setup, call, teardown). After each of those points, the captured output is obtained and
@ -271,7 +271,7 @@ def _install_capture_fixture_on_item(request, capture_class):
del request.node._capture_fixture
class CaptureFixture:
class CaptureFixture(object):
def __init__(self, captureclass, request):
self.captureclass = captureclass
self.request = request
@ -416,11 +416,11 @@ class MultiCapture(object):
self.err.snap() if self.err is not None else "")
class NoCapture:
class NoCapture(object):
__init__ = start = done = suspend = resume = lambda *args: None
class FDCaptureBinary:
class FDCaptureBinary(object):
"""Capture IO to/from a given os-level filedescriptor.
snap() produces `bytes`
@ -506,7 +506,7 @@ class FDCapture(FDCaptureBinary):
return res
class SysCapture:
class SysCapture(object):
def __init__(self, fd, tmpfile=None):
name = patchsysdict[fd]
self._old = getattr(sys, name)
@ -551,7 +551,7 @@ class SysCaptureBinary(SysCapture):
return res
class DontReadFromInput:
class DontReadFromInput(object):
"""Temporary stub class. Ideally when stdin is accessed, the
capturing should be turned off, with possibly all data captured
so far sent to the screen. This should be configurable, though,

View File

@ -60,12 +60,13 @@ def main(args=None, plugins=None):
finally:
config._ensure_unconfigure()
except UsageError as e:
tw = py.io.TerminalWriter(sys.stderr)
for msg in e.args:
sys.stderr.write("ERROR: %s\n" % (msg,))
tw.line("ERROR: {}\n".format(msg), red=True)
return 4
class cmdline: # compatibility namespace
class cmdline(object): # compatibility namespace
main = staticmethod(main)
@ -462,7 +463,7 @@ def _get_plugin_specs_as_list(specs):
return []
class Parser:
class Parser(object):
""" Parser for command line arguments and ini-file values.
:ivar extra_info: dict of generic param -> value to display in case
@ -597,7 +598,7 @@ class ArgumentError(Exception):
return self.msg
class Argument:
class Argument(object):
"""class that mimics the necessary behaviour of optparse.Option
it's currently a least-effort implementation
@ -727,7 +728,7 @@ class Argument:
return 'Argument({0})'.format(', '.join(args))
class OptionGroup:
class OptionGroup(object):
def __init__(self, name, description="", parser=None):
self.name = name
self.description = description
@ -858,7 +859,7 @@ class CmdOptions(object):
return CmdOptions(self.__dict__)
class Notset:
class Notset(object):
def __repr__(self):
return "<NOTSET>"
@ -1188,16 +1189,15 @@ class Config(object):
def _get_override_ini_value(self, name):
value = None
# override_ini is a list of list, to support both -o foo1=bar1 foo2=bar2 and
# and -o foo1=bar1 -o foo2=bar2 options
# always use the last item if multiple value set for same ini-name,
# override_ini is a list of "ini=value" options
# always use the last item if multiple values are set for same ini-name,
# e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2
for ini_config_list in self._override_ini:
for ini_config in ini_config_list:
try:
(key, user_ini_value) = ini_config.split("=", 1)
except ValueError:
raise UsageError("-o/--override-ini expects option=value style.")
for ini_config in self._override_ini:
try:
key, user_ini_value = ini_config.split("=", 1)
except ValueError:
raise UsageError("-o/--override-ini expects option=value style.")
else:
if key == name:
value = user_ini_value
return value

View File

@ -40,7 +40,7 @@ def pytest_configure(config):
config._cleanup.append(fin)
class pytestPDB:
class pytestPDB(object):
""" Pseudo PDB that defers to the real pdb. """
_pluginmanager = None
_config = None
@ -62,7 +62,7 @@ class pytestPDB:
cls._pdb_cls().set_trace(frame)
class PdbInvoke:
class PdbInvoke(object):
def pytest_exception_interact(self, node, call, report):
capman = node.config.pluginmanager.getplugin("capturemanager")
if capman:

View File

@ -4,7 +4,7 @@ import functools
import inspect
import sys
import warnings
from collections import OrderedDict
from collections import OrderedDict, deque, defaultdict
import attr
import py
@ -163,62 +163,51 @@ def get_parametrized_fixture_keys(item, scopenum):
def reorder_items(items):
argkeys_cache = {}
items_by_argkey = {}
for scopenum in range(0, scopenum_function):
argkeys_cache[scopenum] = d = {}
items_by_argkey[scopenum] = item_d = defaultdict(list)
for item in items:
keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
if keys:
d[item] = keys
return reorder_items_atscope(items, set(), argkeys_cache, 0)
for key in keys:
item_d[key].append(item)
items = OrderedDict.fromkeys(items)
return list(reorder_items_atscope(items, set(), argkeys_cache, items_by_argkey, 0))
def reorder_items_atscope(items, ignore, argkeys_cache, scopenum):
def reorder_items_atscope(items, ignore, argkeys_cache, items_by_argkey, scopenum):
if scopenum >= scopenum_function or len(items) < 3:
return items
items_done = []
while 1:
items_before, items_same, items_other, newignore = \
slice_items(items, ignore, argkeys_cache[scopenum])
items_before = reorder_items_atscope(
items_before, ignore, argkeys_cache, scopenum + 1)
if items_same is None:
# nothing to reorder in this scope
assert items_other is None
return items_done + items_before
items_done.extend(items_before)
items = items_same + items_other
ignore = newignore
def slice_items(items, ignore, scoped_argkeys_cache):
# we pick the first item which uses a fixture instance in the
# requested scope and which we haven't seen yet. We slice the input
# items list into a list of items_nomatch, items_same and
# items_other
if scoped_argkeys_cache: # do we need to do work at all?
it = iter(items)
# first find a slicing key
for i, item in enumerate(it):
argkeys = scoped_argkeys_cache.get(item)
if argkeys is not None:
newargkeys = OrderedDict.fromkeys(k for k in argkeys if k not in ignore)
if newargkeys: # found a slicing key
slicing_argkey, _ = newargkeys.popitem()
items_before = items[:i]
items_same = [item]
items_other = []
# now slice the remainder of the list
for item in it:
argkeys = scoped_argkeys_cache.get(item)
if argkeys and slicing_argkey in argkeys and \
slicing_argkey not in ignore:
items_same.append(item)
else:
items_other.append(item)
newignore = ignore.copy()
newignore.add(slicing_argkey)
return (items_before, items_same, items_other, newignore)
return items, None, None, None
items_deque = deque(items)
items_done = OrderedDict()
scoped_items_by_argkey = items_by_argkey[scopenum]
scoped_argkeys_cache = argkeys_cache[scopenum]
while items_deque:
no_argkey_group = OrderedDict()
slicing_argkey = None
while items_deque:
item = items_deque.popleft()
if item in items_done or item in no_argkey_group:
continue
argkeys = OrderedDict.fromkeys(k for k in scoped_argkeys_cache.get(item, []) if k not in ignore)
if not argkeys:
no_argkey_group[item] = None
else:
slicing_argkey, _ = argkeys.popitem()
# we don't have to remove relevant items from later in the deque because they'll just be ignored
for i in reversed(scoped_items_by_argkey[slicing_argkey]):
if i in items:
items_deque.appendleft(i)
break
if no_argkey_group:
no_argkey_group = reorder_items_atscope(
no_argkey_group, set(), argkeys_cache, items_by_argkey, scopenum + 1)
for item in no_argkey_group:
items_done[item] = None
ignore.add(slicing_argkey)
return items_done
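# A hedged toy sketch of the idea behind the rewrite above (illustrative code,
# not pytest internals): items that share a higher-scope fixture key are pulled
# together so the expensive fixture is set up and torn down only once.
#
#     from collections import deque
#
#     def group_by_key(items, key_of):
#         result, seen, dq = [], set(), deque(items)
#         while dq:
#             item = dq.popleft()
#             if item in seen:
#                 continue
#             seen.add(item)
#             result.append(item)
#             key = key_of(item)
#             if key is not None:
#                 for other in list(dq):  # pull forward items sharing the key
#                     if other not in seen and key_of(other) == key:
#                         seen.add(other)
#                         result.append(other)
#         return result
#
#     # group_by_key(["a_db", "b", "c_db"], lambda i: "db" if i.endswith("_db") else None)
#     # -> ["a_db", "c_db", "b"]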
def fillfixtures(function):
@ -247,7 +236,7 @@ def get_direct_param_fixture_func(request):
return request.param
class FuncFixtureInfo:
class FuncFixtureInfo(object):
def __init__(self, argnames, names_closure, name2fixturedefs):
self.argnames = argnames
self.names_closure = names_closure
@ -443,7 +432,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
fixturedef = self._getnextfixturedef(argname)
except FixtureLookupError:
if argname == "request":
class PseudoFixtureDef:
class PseudoFixtureDef(object):
cached_result = (self, [0], None)
scope = "function"
return PseudoFixtureDef
@ -719,7 +708,7 @@ def call_fixture_func(fixturefunc, request, kwargs):
return res
class FixtureDef:
class FixtureDef(object):
""" A container for a factory definition. """
def __init__(self, fixturemanager, baseid, argname, func, scope, params,
@ -925,7 +914,7 @@ def pytestconfig(request):
return request.config
class FixtureManager:
class FixtureManager(object):
"""
pytest fixtures definitions and information is stored and managed
from this class.

View File

@ -57,9 +57,9 @@ def pytest_addoption(parser):
action="store_true", dest="debug", default=False,
help="store internal tracing debug information in 'pytestdebug.log'.")
group._addoption(
'-o', '--override-ini', nargs='*', dest="override_ini",
'-o', '--override-ini', dest="override_ini",
action="append",
help="override config option with option=value style, e.g. `-o xfail_strict=True`.")
help='override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.')
@pytest.hookimpl(hookwrapper=True)

View File

@ -16,6 +16,9 @@ def pytest_addhooks(pluginmanager):
:param _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@ -27,6 +30,9 @@ def pytest_namespace():
the pytest namespace.
This hook is called at plugin registration time.
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@ -36,6 +42,9 @@ def pytest_plugin_registered(plugin, manager):
:param plugin: the plugin module or instance
:param _pytest.config.PytestPluginManager manager: pytest plugin manager
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@ -66,6 +75,9 @@ def pytest_addoption(parser):
The config object is passed around on many internal objects via the ``.config``
attribute or can be retrieved as the ``pytestconfig`` fixture.
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@ -80,13 +92,15 @@ def pytest_configure(config):
After that, the hook is called for other conftest files as they are
imported.
.. note::
This hook is incompatible with ``hookwrapper=True``.
:arg _pytest.config.Config config: pytest config object
"""
# -------------------------------------------------------------------------
# Bootstrapping hooks called for plugins registered early enough:
# internal and 3rd party plugins as well as directly
# discoverable conftest.py local plugins.
# internal and 3rd party plugins.
# -------------------------------------------------------------------------
@ -96,6 +110,9 @@ def pytest_cmdline_parse(pluginmanager, args):
Stops at first non-None result, see :ref:`firstresult`
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager
:param list[str] args: list of arguments passed on the command line
"""
@ -107,6 +124,9 @@ def pytest_cmdline_preparse(config, args):
This hook is considered deprecated and will be removed in a future pytest version. Consider
using :func:`pytest_load_initial_conftests` instead.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.Config config: pytest config object
:param list[str] args: list of arguments passed on the command line
"""
@ -117,6 +137,9 @@ def pytest_cmdline_main(config):
""" called for performing the main command line action. The default
implementation will invoke the configure hooks and runtest_mainloop.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
Stops at first non-None result, see :ref:`firstresult`
:param _pytest.config.Config config: pytest config object
@ -127,6 +150,9 @@ def pytest_load_initial_conftests(early_config, parser, args):
""" implements the loading of initial conftest files ahead
of command line option parsing.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.Config early_config: pytest config object
:param list[str] args: list of arguments passed on the command line
:param _pytest.config.Parser parser: to add command line options
@ -365,7 +391,15 @@ def pytest_runtest_logreport(report):
def pytest_fixture_setup(fixturedef, request):
""" performs fixture setup execution.
Stops at first non-None result, see :ref:`firstresult` """
:return: The return value of the call to the fixture function
Stops at first non-None result, see :ref:`firstresult`
.. note::
If the fixture function returns None, other implementations of
this hook function will continue to be called, according to the
behavior of the :ref:`firstresult` option.
"""
def pytest_fixture_post_finalizer(fixturedef, request):
@ -463,7 +497,11 @@ def pytest_terminal_summary(terminalreporter, exitstatus):
def pytest_logwarning(message, code, nodeid, fslocation):
""" process a warning specified by a message, a code string,
a nodeid and fslocation (both of which may be None
if the warning is not tied to a particular node/location)."""
if the warning is not tied to a particular node/location).
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
# -------------------------------------------------------------------------
# doctest hooks
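# A hedged conftest.py sketch for the new ``pytest_runtest_logfinish`` hook
# mentioned in the changelog above (assuming the 3.4 signature mirrors
# ``pytest_runtest_logstart(nodeid, location)``; verify against your version):
#
#     def pytest_runtest_logfinish(nodeid, location):
#         print("finished running {}".format(nodeid))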

View File

@ -85,6 +85,9 @@ class _NodeReporter(object):
def add_property(self, name, value):
self.properties.append((str(name), bin_xml_escape(value)))
def add_attribute(self, name, value):
self.attrs[str(name)] = bin_xml_escape(value)
def make_properties_node(self):
"""Return a Junit node containing custom properties, if any.
"""
@ -98,6 +101,7 @@ class _NodeReporter(object):
def record_testreport(self, testreport):
assert not self.testcase
names = mangle_test_address(testreport.nodeid)
existing_attrs = self.attrs
classnames = names[:-1]
if self.xml.prefix:
classnames.insert(0, self.xml.prefix)
@ -111,6 +115,7 @@ class _NodeReporter(object):
if hasattr(testreport, "url"):
attrs["url"] = testreport.url
self.attrs = attrs
self.attrs.update(existing_attrs) # restore any user-defined attributes
def to_xml(self):
testcase = Junit.testcase(time=self.duration, **self.attrs)
@ -211,6 +216,27 @@ def record_xml_property(request):
return add_property_noop
@pytest.fixture
def record_xml_attribute(request):
"""Add extra xml attributes to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
"""
request.node.warn(
code='C3',
message='record_xml_attribute is an experimental feature',
)
xml = getattr(request.config, "_xml", None)
if xml is not None:
node_reporter = xml.node_reporter(request.node.nodeid)
return node_reporter.add_attribute
else:
def add_attr_noop(name, value):
pass
return add_attr_noop
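# A hedged usage sketch (illustrative, not part of this diff): a test requests
# the fixture and attaches an attribute to its <testcase> element; the
# attribute name below is invented.
#
#     def test_widget(record_xml_attribute):
#         record_xml_attribute("component", "widget")
#         assert True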
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting")
group.addoption(

View File

@ -2,9 +2,10 @@ from __future__ import absolute_import, division, print_function
import logging
from contextlib import closing, contextmanager
import sys
import re
import six
from _pytest.config import create_terminal_writer
import pytest
import py
@ -13,6 +14,58 @@ DEFAULT_LOG_FORMAT = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
DEFAULT_LOG_DATE_FORMAT = '%H:%M:%S'
class ColoredLevelFormatter(logging.Formatter):
"""
Colorize the %(levelname)..s part of the log format passed to __init__.
"""
LOGLEVEL_COLOROPTS = {
logging.CRITICAL: {'red'},
logging.ERROR: {'red', 'bold'},
logging.WARNING: {'yellow'},
logging.WARN: {'yellow'},
logging.INFO: {'green'},
logging.DEBUG: {'purple'},
logging.NOTSET: set(),
}
LEVELNAME_FMT_REGEX = re.compile(r'%\(levelname\)([+-]?\d*s)')
def __init__(self, terminalwriter, *args, **kwargs):
super(ColoredLevelFormatter, self).__init__(
*args, **kwargs)
if six.PY2:
self._original_fmt = self._fmt
else:
self._original_fmt = self._style._fmt
self._level_to_fmt_mapping = {}
levelname_fmt_match = self.LEVELNAME_FMT_REGEX.search(self._fmt)
if not levelname_fmt_match:
return
levelname_fmt = levelname_fmt_match.group()
for level, color_opts in self.LOGLEVEL_COLOROPTS.items():
formatted_levelname = levelname_fmt % {
'levelname': logging.getLevelName(level)}
# add ANSI escape sequences around the formatted levelname
color_kwargs = {name: True for name in color_opts}
colorized_formatted_levelname = terminalwriter.markup(
formatted_levelname, **color_kwargs)
self._level_to_fmt_mapping[level] = self.LEVELNAME_FMT_REGEX.sub(
colorized_formatted_levelname,
self._fmt)
def format(self, record):
fmt = self._level_to_fmt_mapping.get(
record.levelno, self._original_fmt)
if six.PY2:
self._fmt = fmt
else:
self._style._fmt = fmt
return super(ColoredLevelFormatter, self).format(record)
def get_option_ini(config, *names):
for name in names:
ret = config.getoption(name) # 'default' arg won't work as expected
@ -48,6 +101,9 @@ def pytest_addoption(parser):
'--log-date-format',
dest='log_date_format', default=DEFAULT_LOG_DATE_FORMAT,
help='log date format as used by the logging module.')
parser.addini(
'log_cli', default=False, type='bool',
help='enable log display during test run (also known as "live logging").')
add_option_ini(
'--log-cli-level',
dest='log_cli_level', default=None,
@ -79,13 +135,14 @@ def pytest_addoption(parser):
@contextmanager
def catching_logs(handler, formatter=None, level=logging.NOTSET):
def catching_logs(handler, formatter=None, level=None):
"""Context manager that prepares the whole logging machinery properly."""
root_logger = logging.getLogger()
if formatter is not None:
handler.setFormatter(formatter)
handler.setLevel(level)
if level is not None:
handler.setLevel(level)
# Adding the same handler twice would confuse logging system.
# Just don't do that.
@ -93,12 +150,14 @@ def catching_logs(handler, formatter=None, level=logging.NOTSET):
if add_new_handler:
root_logger.addHandler(handler)
orig_level = root_logger.level
root_logger.setLevel(min(orig_level, level))
if level is not None:
orig_level = root_logger.level
root_logger.setLevel(level)
try:
yield handler
finally:
root_logger.setLevel(orig_level)
if level is not None:
root_logger.setLevel(orig_level)
if add_new_handler:
root_logger.removeHandler(handler)
@ -123,11 +182,40 @@ class LogCaptureFixture(object):
def __init__(self, item):
"""Creates a new funcarg."""
self._item = item
self._initial_log_levels = {} # type: Dict[str, int] # dict of log name -> log level
def _finalize(self):
"""Finalizes the fixture.
This restores the log levels changed by :meth:`set_level`.
"""
# restore log levels
for logger_name, level in self._initial_log_levels.items():
logger = logging.getLogger(logger_name)
logger.setLevel(level)
@property
def handler(self):
return self._item.catch_log_handler
def get_records(self, when):
"""
Get the logging records for one of the possible test phases.
:param str when:
Which test phase to obtain the records from. Valid values are: "setup", "call" and "teardown".
:rtype: List[logging.LogRecord]
:return: the list of captured records at the given stage
.. versionadded:: 3.4
"""
handler = self._item.catch_log_handlers.get(when)
if handler:
return handler.records
else:
return []
@property
def text(self):
"""Returns the log text."""
@ -154,31 +242,31 @@ class LogCaptureFixture(object):
self.handler.records = []
def set_level(self, level, logger=None):
"""Sets the level for capturing of logs.
"""Sets the level for capturing of logs. The level will be restored to its previous value at the end of
the test.
By default, the level is set on the handler used to capture
logs. Specify a logger name to instead set the level of any
logger.
:param int level: the level.
:param str logger: the logger whose level to update. If not given, the root logger level is updated.
.. versionchanged:: 3.4
The levels of the loggers changed by this function will be restored to their initial values at the
end of the test.
"""
if logger is None:
logger = self.handler
else:
logger = logging.getLogger(logger)
logger_name = logger
logger = logging.getLogger(logger_name)
# save the original log-level to restore it during teardown
self._initial_log_levels.setdefault(logger_name, logger.level)
logger.setLevel(level)
@contextmanager
def at_level(self, level, logger=None):
"""Context manager that sets the level for capturing of logs.
"""Context manager that sets the level for capturing of logs. After the end of the 'with' statement the
level is restored to its original value.
By default, the level is set on the handler used to capture
logs. Specify a logger name to instead set the level of any
logger.
:param int level: the level.
:param str logger: the logger whose level to update. If not given, the root logger level is updated.
"""
if logger is None:
logger = self.handler
else:
logger = logging.getLogger(logger)
logger = logging.getLogger(logger)
orig_level = logger.level
logger.setLevel(level)
try:
@ -197,7 +285,9 @@ def caplog(request):
* caplog.records() -> list of logging.LogRecord instances
* caplog.record_tuples() -> list of (logger_name, level, message) tuples
"""
return LogCaptureFixture(request.node)
result = LogCaptureFixture(request.node)
yield result
result._finalize()
def get_actual_log_level(config, *setting_names):
@ -227,8 +317,12 @@ def get_actual_log_level(config, *setting_names):
def pytest_configure(config):
config.pluginmanager.register(LoggingPlugin(config),
'logging-plugin')
config.pluginmanager.register(LoggingPlugin(config), 'logging-plugin')
@contextmanager
def _dummy_context_manager():
yield
class LoggingPlugin(object):
@ -241,57 +335,52 @@ class LoggingPlugin(object):
The formatter can be safely shared across all handlers so
create a single one for the entire test session here.
"""
self.log_cli_level = get_actual_log_level(
config, 'log_cli_level', 'log_level') or logging.WARNING
self._config = config
# enable verbose output automatically if live logging is enabled
if self._config.getini('log_cli') and not config.getoption('verbose'):
# sanity check: terminal reporter should not have been loaded at this point
assert self._config.pluginmanager.get_plugin('terminalreporter') is None
config.option.verbose = 1
self.print_logs = get_option_ini(config, 'log_print')
self.formatter = logging.Formatter(
get_option_ini(config, 'log_format'),
get_option_ini(config, 'log_date_format'))
log_cli_handler = logging.StreamHandler(sys.stderr)
log_cli_format = get_option_ini(
config, 'log_cli_format', 'log_format')
log_cli_date_format = get_option_ini(
config, 'log_cli_date_format', 'log_date_format')
log_cli_formatter = logging.Formatter(
log_cli_format,
datefmt=log_cli_date_format)
self.log_cli_handler = log_cli_handler # needed for a single unittest
self.live_logs = catching_logs(log_cli_handler,
formatter=log_cli_formatter,
level=self.log_cli_level)
self.formatter = logging.Formatter(get_option_ini(config, 'log_format'),
get_option_ini(config, 'log_date_format'))
self.log_level = get_actual_log_level(config, 'log_level')
log_file = get_option_ini(config, 'log_file')
if log_file:
self.log_file_level = get_actual_log_level(
config, 'log_file_level') or logging.WARNING
self.log_file_level = get_actual_log_level(config, 'log_file_level')
log_file_format = get_option_ini(
config, 'log_file_format', 'log_format')
log_file_date_format = get_option_ini(
config, 'log_file_date_format', 'log_date_format')
self.log_file_handler = logging.FileHandler(
log_file,
# Each pytest runtests session will write to a clean logfile
mode='w')
log_file_formatter = logging.Formatter(
log_file_format,
datefmt=log_file_date_format)
log_file_format = get_option_ini(config, 'log_file_format', 'log_format')
log_file_date_format = get_option_ini(config, 'log_file_date_format', 'log_date_format')
# Each pytest runtests session will write to a clean logfile
self.log_file_handler = logging.FileHandler(log_file, mode='w')
log_file_formatter = logging.Formatter(log_file_format, datefmt=log_file_date_format)
self.log_file_handler.setFormatter(log_file_formatter)
else:
self.log_file_handler = None
# initialized during pytest_runtestloop
self.log_cli_handler = None
@contextmanager
def _runtest_for(self, item, when):
"""Implements the internals of pytest_runtest_xxx() hook."""
with catching_logs(LogCaptureHandler(),
formatter=self.formatter) as log_handler:
formatter=self.formatter, level=self.log_level) as log_handler:
if self.log_cli_handler:
self.log_cli_handler.set_when(when)
if not hasattr(item, 'catch_log_handlers'):
item.catch_log_handlers = {}
item.catch_log_handlers[when] = log_handler
item.catch_log_handler = log_handler
try:
yield # run test
finally:
del item.catch_log_handler
if when == 'teardown':
del item.catch_log_handlers
if self.print_logs:
# Add a captured log section to the report.
@ -313,10 +402,15 @@ class LoggingPlugin(object):
with self._runtest_for(item, 'teardown'):
yield
def pytest_runtest_logstart(self):
if self.log_cli_handler:
self.log_cli_handler.reset()
@pytest.hookimpl(hookwrapper=True)
def pytest_runtestloop(self, session):
"""Runs all collected test items."""
with self.live_logs:
self._setup_cli_logging()
with self.live_logs_context:
if self.log_file_handler is not None:
with closing(self.log_file_handler):
with catching_logs(self.log_file_handler,
@ -324,3 +418,69 @@ class LoggingPlugin(object):
yield # run all the tests
else:
yield # run all the tests
def _setup_cli_logging(self):
"""Sets up the handler and logger for the Live Logs feature, if enabled.
This must be done right before starting the loop so we can access the terminal reporter plugin.
"""
terminal_reporter = self._config.pluginmanager.get_plugin('terminalreporter')
if self._config.getini('log_cli') and terminal_reporter is not None:
capture_manager = self._config.pluginmanager.get_plugin('capturemanager')
log_cli_handler = _LiveLoggingStreamHandler(terminal_reporter, capture_manager)
log_cli_format = get_option_ini(self._config, 'log_cli_format', 'log_format')
log_cli_date_format = get_option_ini(self._config, 'log_cli_date_format', 'log_date_format')
if self._config.option.color != 'no' and ColoredLevelFormatter.LEVELNAME_FMT_REGEX.search(log_cli_format):
log_cli_formatter = ColoredLevelFormatter(create_terminal_writer(self._config),
log_cli_format, datefmt=log_cli_date_format)
else:
log_cli_formatter = logging.Formatter(log_cli_format, datefmt=log_cli_date_format)
log_cli_level = get_actual_log_level(self._config, 'log_cli_level', 'log_level')
self.log_cli_handler = log_cli_handler
self.live_logs_context = catching_logs(log_cli_handler, formatter=log_cli_formatter, level=log_cli_level)
else:
self.live_logs_context = _dummy_context_manager()
class _LiveLoggingStreamHandler(logging.StreamHandler):
"""
Custom StreamHandler used by the live logging feature: it will write a newline before the first log message
in each test.
During live logging we must also explicitly disable stdout/stderr capturing otherwise it will get captured
and won't appear in the terminal.
"""
def __init__(self, terminal_reporter, capture_manager):
"""
:param _pytest.terminal.TerminalReporter terminal_reporter:
:param _pytest.capture.CaptureManager capture_manager:
"""
logging.StreamHandler.__init__(self, stream=terminal_reporter)
self.capture_manager = capture_manager
self.reset()
self.set_when(None)
def reset(self):
"""Reset the handler; should be called before the start of each test"""
self._first_record_emitted = False
def set_when(self, when):
"""Prepares for the given test phase (setup/call/teardown)"""
self._when = when
self._section_name_shown = False
def emit(self, record):
if self.capture_manager is not None:
self.capture_manager.suspend_global_capture()
try:
if not self._first_record_emitted or self._when == 'teardown':
self.stream.write('\n')
self._first_record_emitted = True
if not self._section_name_shown:
self.stream.section('live log ' + self._when, sep='-', bold=True)
self._section_name_shown = True
logging.StreamHandler.emit(self, record)
finally:
if self.capture_manager is not None:
self.capture_manager.resume_global_capture()
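# A hedged configuration sketch (illustrative): enabling the live-log feature
# implemented above from an ini file; the option names are taken from the
# hunks in this diff.
#
#     [pytest]
#     log_cli = true
#     log_cli_level = INFO
#     log_cli_format = %(asctime)s %(levelname)s %(message)s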

View File

@ -248,7 +248,7 @@ def _patched_find_module():
yield
class FSHookProxy:
class FSHookProxy(object):
def __init__(self, fspath, pm, remove_mods):
self.fspath = fspath
self.pm = pm

View File

@ -2,14 +2,19 @@
from __future__ import absolute_import, division, print_function
import inspect
import keyword
import warnings
import attr
from collections import namedtuple
from operator import attrgetter
from six.moves import map
from _pytest.config import UsageError
from .deprecated import MARK_PARAMETERSET_UNPACKING
from .compat import NOTSET, getfslineno
EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark"
def alias(name, warning=None):
getter = attrgetter(name)
@ -70,7 +75,7 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
return cls(argval, marks=newmarks, id=None)
@classmethod
def _for_parameterize(cls, argnames, argvalues, function):
def _for_parameterize(cls, argnames, argvalues, function, config):
if not isinstance(argnames, (tuple, list)):
argnames = [x.strip() for x in argnames.split(",") if x.strip()]
force_tuple = len(argnames) == 1
@ -82,10 +87,7 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
del argvalues
if not parameters:
fs, lineno = getfslineno(function)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, function.__name__, fs, lineno)
mark = MARK_GEN.skip(reason=reason)
mark = get_empty_parameterset_mark(config, argnames, function)
parameters.append(ParameterSet(
values=(NOTSET,) * len(argnames),
marks=[mark],
@ -94,6 +96,20 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
return argnames, parameters
def get_empty_parameterset_mark(config, argnames, function):
requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION)
if requested_mark in ('', None, 'skip'):
mark = MARK_GEN.skip
elif requested_mark == 'xfail':
mark = MARK_GEN.xfail(run=False)
else:
raise LookupError(requested_mark)
fs, lineno = getfslineno(function)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, function.__name__, fs, lineno)
return mark(reason=reason)
class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
@ -133,6 +149,9 @@ def pytest_addoption(parser):
)
parser.addini("markers", "markers for test functions", 'linelist')
parser.addini(
EMPTY_PARAMETERSET_OPTION,
"default marker for empty parametersets")
def pytest_cmdline_main(config):
@ -222,6 +241,9 @@ class KeywordMapping(object):
return False
python_keywords_allowed_list = ["or", "and", "not"]
def matchmark(colitem, markexpr):
"""Tries to match on any marker names, attached to the given colitem."""
return eval(markexpr, {}, MarkMapping.from_keywords(colitem.keywords))
@ -259,7 +281,13 @@ def matchkeyword(colitem, keywordexpr):
return mapping[keywordexpr]
elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
return not mapping[keywordexpr[4:]]
return eval(keywordexpr, {}, mapping)
for kwd in keywordexpr.split():
if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
raise UsageError("Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
try:
return eval(keywordexpr, {}, mapping)
except SyntaxError:
raise UsageError("Wrong expression passed to '-k': {}".format(keywordexpr))
def pytest_configure(config):
@ -267,12 +295,19 @@ def pytest_configure(config):
if config.option.strict:
MARK_GEN._config = config
empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)
if empty_parameterset not in ('skip', 'xfail', None, ''):
raise UsageError(
"{!s} must be one of skip and xfail,"
" but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset))
def pytest_unconfigure(config):
MARK_GEN._config = getattr(config, '_old_mark_config', None)
class MarkGenerator:
class MarkGenerator(object):
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``pytest.mark`` singleton instance. Example::

View File

@ -88,7 +88,7 @@ def derive_importpath(import_path, raising):
return attr, target
class Notset:
class Notset(object):
def __repr__(self):
return "<notset>"
@ -96,7 +96,7 @@ class Notset:
notset = Notset()
class MonkeyPatch:
class MonkeyPatch(object):
""" Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes.
"""

View File

@ -171,7 +171,7 @@ def _pytest(request):
return PytestArg(request)
class PytestArg:
class PytestArg(object):
def __init__(self, request):
self.request = request
@ -186,7 +186,7 @@ def get_public_names(values):
return [x for x in values if x[0] != "_"]
class ParsedCall:
class ParsedCall(object):
def __init__(self, name, kwargs):
self.__dict__.update(kwargs)
self._name = name
@ -197,7 +197,7 @@ class ParsedCall:
return "<ParsedCall %r(**%r)>" % (self._name, d)
class HookRecorder:
class HookRecorder(object):
"""Record all hooks called in a plugin manager.
This wraps all the hook calls in the plugin manager, recording each call
@ -343,7 +343,7 @@ def testdir(request, tmpdir_factory):
rex_outcome = re.compile(r"(\d+) ([\w-]+)")
class RunResult:
class RunResult(object):
"""The result of running a command.
Attributes:
@ -397,7 +397,7 @@ class RunResult:
assert obtained == dict(passed=passed, skipped=skipped, failed=failed, error=error)
class CwdSnapshot:
class CwdSnapshot(object):
def __init__(self):
self.__saved = os.getcwd()
@ -405,7 +405,7 @@ class CwdSnapshot:
os.chdir(self.__saved)
class SysModulesSnapshot:
class SysModulesSnapshot(object):
def __init__(self, preserve=None):
self.__preserve = preserve
self.__saved = dict(sys.modules)
@ -418,7 +418,7 @@ class SysModulesSnapshot:
sys.modules.update(self.__saved)
class SysPathsSnapshot:
class SysPathsSnapshot(object):
def __init__(self):
self.__saved = list(sys.path), list(sys.meta_path)
@ -426,7 +426,7 @@ class SysPathsSnapshot:
sys.path[:], sys.meta_path[:] = self.__saved
class Testdir:
class Testdir(object):
"""Temporary test directory with tools to test/run pytest itself.
This is based on the ``tmpdir`` fixture but provides a number of methods
@ -740,7 +740,7 @@ class Testdir:
rec = []
class Collect:
class Collect(object):
def pytest_configure(x, config):
rec.append(self.make_hook_recorder(config.pluginmanager))
@ -750,7 +750,7 @@ class Testdir:
if len(rec) == 1:
reprec = rec.pop()
else:
class reprec:
class reprec(object):
pass
reprec.ret = ret
@ -780,13 +780,13 @@ class Testdir:
reprec = self.inline_run(*args, **kwargs)
except SystemExit as e:
class reprec:
class reprec(object):
ret = e.args[0]
except Exception:
traceback.print_exc()
class reprec:
class reprec(object):
ret = 3
finally:
out, err = capture.readouterr()
@ -1067,7 +1067,7 @@ def getdecoded(out):
py.io.saferepr(out),)
class LineComp:
class LineComp(object):
def __init__(self):
self.stringio = py.io.TextIO()
@ -1085,7 +1085,7 @@ class LineComp:
return LineMatcher(lines1).fnmatch_lines(lines2)
class LineMatcher:
class LineMatcher(object):
"""Flexible matching of text.
This is a convenience class to test large texts like the output of

View File

@ -786,7 +786,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
from _pytest.mark import ParameterSet
from py.io import saferepr
argnames, parameters = ParameterSet._for_parameterize(
argnames, argvalues, self.function)
argnames, argvalues, self.function, self.config)
del argvalues
if scope is None:

View File

@ -178,7 +178,7 @@ def call_runtest_hook(item, when, **kwds):
return CallInfo(lambda: ihook(item=item, **kwds), when=when)
class CallInfo:
class CallInfo(object):
""" Result/Exception info a function invocation. """
#: None or ExceptionInfo object.
excinfo = None

View File

@ -94,7 +94,7 @@ def pytest_report_teststatus(report):
return report.outcome, letter, report.outcome.upper()
class WarningReport:
class WarningReport(object):
"""
Simple structure to hold warnings information captured by ``pytest_logwarning``.
"""
@ -129,7 +129,7 @@ class WarningReport:
return None
class TerminalReporter:
class TerminalReporter(object):
def __init__(self, config, file=None):
import _pytest.config
self.config = config

View File

@ -8,7 +8,7 @@ import py
from _pytest.monkeypatch import MonkeyPatch
class TempdirFactory:
class TempdirFactory(object):
"""Factory for temporary directories under the common base temp directory.
The base directory can be configured using the ``--basetemp`` option.

View File

@ -1 +0,0 @@
Change parametrized automatic test id generation to use the ``__name__`` attribute of functions instead of the fallback argument name plus counter.

View File

@ -1,2 +0,0 @@
Fixed restoring Python state after in-process pytest runs with the ``pytester`` plugin; this may break tests
making multiple inprocess pytest runs if later ones depend on earlier ones leaking global interpreter changes.

View File

@ -1 +0,0 @@
Console output falls back to "classic" mode when capture is disabled (``-s``), otherwise the output gets garbled to the point of being useless.

View File

@ -1 +0,0 @@
Fix skipping plugin reporting hook when test aborted before plugin setup hook.

View File

@ -1 +0,0 @@
Fix the wording of a sentence on doctest flags used in pytest.

View File

@ -1 +0,0 @@
Fix progress percentage reported when tests fail during teardown.

View File

@ -1 +0,0 @@
Prefer ``https://*.readthedocs.io`` over ``http://*.rtfd.org`` for links in the documentation.

View File

@ -1,3 +0,0 @@
New `pytest_runtest_logfinish <https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_logfinish>`_
hook which is called when a test item has finished executing, analogous to
`pytest_runtest_logstart <https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_start>`_.

View File

@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2
release-3.4.0
release-3.3.2
release-3.3.1
release-3.3.0

View File

@ -0,0 +1,52 @@
pytest-3.4.0
=======================================
The pytest team is proud to announce the 3.4.0 release!
pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.
This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:
http://doc.pytest.org/en/latest/changelog.html
For complete documentation, please visit:
http://docs.pytest.org
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
* Aaron
* Alan Velasco
* Anders Hovmöller
* Andrew Toolan
* Anthony Sottile
* Aron Coyle
* Brian Maissy
* Bruno Oliveira
* Cyrus Maden
* Florian Bruhin
* Henk-Jaap Wagenaar
* Ian Lesperance
* Jon Dufresne
* Jurko Gospodnetić
* Kate
* Kimberly
* Per A. Brodtkorb
* Pierre-Alexandre Fonta
* Raphael Castaneda
* Ronny Pfannschmidt
* ST John
* Segev Finer
* Thomas Hisch
* Tzu-ping Chung
* feuillemorte
Happy testing,
The Pytest Development Team

View File

@ -15,91 +15,6 @@ We will only remove deprecated functionality in major releases (e.g. if we depre
Deprecation Roadmap
-------------------
This page lists deprecated features and when we plan to remove them. It is important to list the feature, the version in which it was deprecated, and the version in which we plan to remove it.
We track deprecation and removal of features using milestones and the `deprecation <https://github.com/pytest-dev/pytest/issues?q=label%3A%22type%3A+deprecation%22>`_ and `removal <https://github.com/pytest-dev/pytest/labels/type%3A%20removal>`_ labels on GitHub.
Following our deprecation policy, we should aim to keep features for *at least* two minor versions after they were considered deprecated.
Future Releases
~~~~~~~~~~~~~~~
3.4
^^^
**Old style classes**
Issue: `#2147 <https://github.com/pytest-dev/pytest/issues/2147>`_.
Deprecated in ``3.2``.
4.0
^^^
**Yield tests**
Deprecated in ``3.0``.
**pytest-namespace hook**
deprecated in ``3.2``.
**Marks in parameter sets**
Deprecated in ``3.2``.
**--result-log**
Deprecated in ``3.0``.
See `#830 <https://github.com/pytest-dev/pytest/issues/830>`_ for more information. Suggested alternative: `pytest-tap <https://pypi.python.org/pypi/pytest-tap>`_.
**metafunc.addcall**
Issue: `#2876 <https://github.com/pytest-dev/pytest/issues/2876>`_.
Deprecated in ``3.3``.
**pytest_plugins in non-toplevel conftests**
There is a deep conceptual confusion as ``conftest.py`` files themselves are activated/deactivated based on path, but the plugins they depend on aren't.
Issue: `#2639 <https://github.com/pytest-dev/pytest/issues/2639>`_.
Not yet officially deprecated.
**passing a single string to pytest.main()**
Pass a list of strings to ``pytest.main()`` instead.
Deprecated in ``3.1``.
**[pytest] section in setup.cfg**
Use ``[tool:pytest]`` instead for compatibility with other tools.
Deprecated in ``3.0``.
Past Releases
~~~~~~~~~~~~~
3.0
^^^
* The following deprecated commandline options were removed:
* ``--genscript``: no longer supported;
* ``--no-assert``: use ``--assert=plain`` instead;
* ``--nomagic``: use ``--assert=plain`` instead;
* ``--report``: use ``-r`` instead;
* Removed all ``py.test-X*`` entry points. The versioned, suffixed entry points
were never documented and a leftover from a pre-virtualenv era. These entry
points also created broken entry points in wheels, so removing them also
removes a source of confusion for users.
3.3
^^^
* Dropped support for EOL Python 2.6 and 3.3.
Following our deprecation policy, after we start issuing deprecation warnings we keep features for *at least* two minor versions before considering removal.

View File

@ -116,6 +116,10 @@ You can ask for available builtin or project-custom
Add extra xml properties to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
record_xml_attribute
Add extra xml attributes to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
caplog
Access and control log capturing.

View File

@ -225,7 +225,7 @@ You can always peek at the content of the cache using the
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
cachedir: $REGENDOC_TMPDIR/.cache
cachedir: $REGENDOC_TMPDIR/.pytest_cache
------------------------------- cache values -------------------------------
cache/lastfailed contains:
{'test_caching.py::test_function': True}

View File

@ -152,11 +152,25 @@ above will show verbose output because ``-v`` overwrites ``-q``.
Builtin configuration file options
----------------------------------------------
Here is a list of builtin configuration options that may be written in a ``pytest.ini``, ``tox.ini`` or ``setup.cfg``
file, usually located at the root of your repository. All options must be under a ``[pytest]`` section
(``[tool:pytest]`` for ``setup.cfg`` files).
Configuration file options may be overwritten in the command-line by using ``-o/--override``, which can also be
passed multiple times. The expected format is ``name=value``. For example::
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
.. confval:: minversion
Specifies a minimal pytest version required for running tests.
minversion = 2.1 # will fail if we run with pytest-2.0
.. code-block:: ini
# content of pytest.ini
[pytest]
minversion = 3.0 # will fail if we run with pytest-2.8
.. confval:: addopts
@ -165,6 +179,7 @@ Builtin configuration file options
.. code-block:: ini
# content of pytest.ini
[pytest]
addopts = --maxfail=2 -rf # exit after 2 failures, report fail info
@ -332,6 +347,31 @@ Builtin configuration file options
[pytest]
console_output_style = classic
.. confval:: empty_parameter_set_mark
.. versionadded:: 3.4
Allows picking the action for empty parametersets in parameterization
* ``skip`` skips tests with an empty parameterset (default)
* ``xfail`` marks tests with an empty parameterset as xfail(run=False)
.. code-block:: ini
# content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail
.. note::
The default value of this option is planned to change to ``xfail`` in future releases
as this is considered less error prone, see `#3155`_ for more details.
.. _`#3155`: https://github.com/pytest-dev/pytest/issues/3155
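A hedged illustration (function and test names invented): with ``empty_parameter_set_mark = xfail`` set as above, a parametrization that receives no values is collected as an xfail instead of a skip:

.. code-block:: python

    import pytest

    def load_cases():
        return []  # e.g. nothing applicable on this platform

    @pytest.mark.parametrize("case", load_cases())
    def test_case(case):
        assert case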
.. confval:: rootdir
Sets a :ref:`rootdir <rootdir>` directory. Directory may be relative or absolute path.

View File

@ -157,12 +157,14 @@ class TestRaises(object):
# thanks to Matthew Scott for this test
def test_dynamic_compile_shows_nicely():
import imp
import sys
src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123'
module = py.std.imp.new_module(name)
module = imp.new_module(name)
code = _pytest._code.compile(src, name, 'exec')
py.builtin.exec_(code, module.__dict__)
py.std.sys.modules[name] = module
sys.modules[name] = module
module.foo()

View File

@ -32,7 +32,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ pytest -v -m webtest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -46,7 +46,7 @@ Or the inverse, running all tests except the webtest ones::
$ pytest -v -m "not webtest"
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -67,7 +67,7 @@ tests based on their module, class, method, or function name::
$ pytest -v test_server.py::TestClass::test_method
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
@ -80,7 +80,7 @@ You can also select on the class::
$ pytest -v test_server.py::TestClass
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
@ -93,7 +93,7 @@ Or select multiple nodes::
$ pytest -v test_server.py::TestClass test_server.py::test_send_http
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
@ -131,7 +131,7 @@ select tests based on their names::
$ pytest -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -145,7 +145,7 @@ And you can also run all tests except the ones that match the keyword::
$ pytest -k "not send_http" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -161,7 +161,7 @@ Or to select "http" and "quick" tests::
$ pytest -k "http or quick" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -432,7 +432,7 @@ The output is as follows::
$ pytest -q -s
Marker info name=my_marker args=(<function hello_world at 0xdeadbeef>,) kwars={}
. [100%]
.
1 passed in 0.12 seconds
We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
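As a short sketch of that difference (``my_marker`` and the test name are
illustrative, and ``hello_world`` stands in for the function used above):

.. code-block:: python

    import pytest

    def hello_world():
        pass

    # called with a single callable, the marker acts as a decorator and is
    # applied to ``hello_world`` itself
    decorated = pytest.mark.my_marker(hello_world)

    # ``with_args`` instead stores ``hello_world`` as a marker argument
    marker = pytest.mark.my_marker.with_args(hello_world)

    @marker
    def test_with_args_marker():
        pass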
@ -477,7 +477,7 @@ Let's run this without capturing output and see what we get::
glob args=('function',) kwargs={'x': 3}
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
. [100%]
.
1 passed in 0.12 seconds
marking platform specific tests with pytest

View File

@ -60,7 +60,7 @@ consulted when reporting in ``verbose`` mode::
nonpython $ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items

View File

@ -411,22 +411,24 @@ get on the terminal - we are working on that)::
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
import imp
import sys
src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123'
module = py.std.imp.new_module(name)
module = imp.new_module(name)
code = _pytest._code.compile(src, name, 'exec')
py.builtin.exec_(code, module.__dict__)
py.std.sys.modules[name] = module
sys.modules[name] = module
> module.foo()
failure_demo.py:166:
failure_demo.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E AssertionError
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:163>:2: AssertionError
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:165>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -438,7 +440,7 @@ get on the terminal - we are working on that)::
return 43
> somefunc(f(), g())
failure_demo.py:176:
failure_demo.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:9: in somefunc
otherfunc(x,y)
@ -460,7 +462,7 @@ get on the terminal - we are working on that)::
> a,b = l
E ValueError: not enough values to unpack (expected 2, got 0)
failure_demo.py:180: ValueError
failure_demo.py:182: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -470,7 +472,7 @@ get on the terminal - we are working on that)::
> a,b = l
E TypeError: 'int' object is not iterable
failure_demo.py:184: TypeError
failure_demo.py:186: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -483,7 +485,7 @@ get on the terminal - we are working on that)::
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
failure_demo.py:189: AssertionError
failure_demo.py:191: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -500,7 +502,7 @@ get on the terminal - we are working on that)::
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
failure_demo.py:196: AssertionError
failure_demo.py:198: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -511,7 +513,7 @@ get on the terminal - we are working on that)::
E + where False = isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:199: AssertionError
failure_demo.py:201: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -522,7 +524,7 @@ get on the terminal - we are working on that)::
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
failure_demo.py:203: AssertionError
failure_demo.py:205: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -532,7 +534,7 @@ get on the terminal - we are working on that)::
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:206: AssertionError
failure_demo.py:208: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -543,7 +545,7 @@ get on the terminal - we are working on that)::
> assert x == 0
E assert 1 == 0
failure_demo.py:211: AssertionError
failure_demo.py:213: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -557,7 +559,7 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
failure_demo.py:222: AssertionError
failure_demo.py:224: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -574,7 +576,7 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
failure_demo.py:228: AssertionError
failure_demo.py:230: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -594,7 +596,7 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:238: AssertionError
failure_demo.py:240: AssertionError
============================= warnings summary =============================
None
Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.

View File

@ -332,7 +332,7 @@ which will add info only when run with "--v"::
$ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
info1: did you know that ...
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
@ -385,9 +385,9 @@ Now we can profile which test functions execute the slowest::
test_some_are_slow.py ... [100%]
========================= slowest 3 test durations =========================
0.31s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
0.17s call test_some_are_slow.py::test_funcfast
0.58s call test_some_are_slow.py::test_funcslow2
0.41s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.12 seconds =========================
incremental testing - test steps
@ -537,7 +537,7 @@ We can run this::
file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out
E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
$REGENDOC_TMPDIR/b/test_error.py:1
@ -731,7 +731,7 @@ and run it::
test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F [100%]
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@ -826,15 +826,20 @@ Instead of freezing the pytest runner as a separate executable, you can make
your frozen program work as the pytest runner by some clever
argument handling during program startup. This allows you to
have a single executable, which is usually more convenient.
Please note that the mechanism for plugin discovery used by pytest
(setuptools entry points) doesn't work with frozen executables, so pytest
can't find any third-party plugins automatically. To include third-party plugins
like ``pytest-timeout`` they must be imported explicitly and passed on to ``pytest.main``.
.. code-block:: python
# contents of app_main.py
import sys
import pytest_timeout # Third party plugin
if len(sys.argv) > 1 and sys.argv[1] == '--pytest':
import pytest
sys.exit(pytest.main(sys.argv[2:]))
sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
else:
# normal application execution: at this point argv can be parsed
# by your argument-parsing library of choice as usual
@ -845,3 +850,4 @@ This allows you to execute tests using the frozen
application with standard ``pytest`` command-line options::
./app_main --pytest --verbose --tb=long --junitxml=results.xml test-suite/

View File

@ -68,5 +68,5 @@ If you run this without output capturing::
.test_method1 called
.test other
.test_unit1 method called
. [100%]
.
4 passed in 0.12 seconds

View File

@ -286,7 +286,7 @@ tests.
Let's execute it::
$ pytest -s -q --tb=no
FF [100%]teardown smtp
FFteardown smtp
2 failed in 0.12 seconds
@ -391,7 +391,7 @@ We use the ``request.module`` attribute to optionally obtain an
again, nothing much has changed::
$ pytest -s -q --tb=no
FF [100%]finalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
2 failed in 0.12 seconds
@ -612,7 +612,7 @@ Here we declare an ``app`` fixture which receives the previously defined
$ pytest -v test_appsetup.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
@ -681,40 +681,40 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ pytest -v -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items
test_module.py::test_0[1] SETUP otherarg 1
RUN test0 with otherarg 1
PASSED [ 12%] TEARDOWN otherarg 1
PASSED TEARDOWN otherarg 1
test_module.py::test_0[2] SETUP otherarg 2
RUN test0 with otherarg 2
PASSED [ 25%] TEARDOWN otherarg 2
PASSED TEARDOWN otherarg 2
test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED [ 37%]
PASSED
test_module.py::test_2[1-mod1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED [ 50%] TEARDOWN otherarg 1
PASSED TEARDOWN otherarg 1
test_module.py::test_2[2-mod1] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED [ 62%] TEARDOWN otherarg 2
PASSED TEARDOWN otherarg 2
test_module.py::test_1[mod2] TEARDOWN modarg mod1
SETUP modarg mod2
RUN test1 with modarg mod2
PASSED [ 75%]
PASSED
test_module.py::test_2[1-mod2] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED [ 87%] TEARDOWN otherarg 1
PASSED TEARDOWN otherarg 1
test_module.py::test_2[2-mod2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED [100%] TEARDOWN otherarg 2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2

View File

@ -7,32 +7,34 @@ Installation and Getting Started
**PyPI package name**: `pytest <http://pypi.python.org/pypi/pytest>`_
**dependencies**: `py <http://pypi.python.org/pypi/py>`_,
**Dependencies**: `py <http://pypi.python.org/pypi/py>`_,
`colorama (Windows) <http://pypi.python.org/pypi/colorama>`_,
**documentation as PDF**: `download latest <https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_
**Documentation as PDF**: `download latest <https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_
``pytest`` is a framework that makes building simple and scalable tests easy. Tests are expressive and readable—no boilerplate code required. Get started in minutes with a small unit test or complex functional test for your application or library.
.. _`getstarted`:
.. _installation:
.. _`installation`:
Installation
Install ``pytest``
----------------------------------------
Installation::
1. Run the following command in your command line::
pip install -U pytest
To check your installation has installed the correct version::
2. Check that you installed the correct version::
$ pytest --version
This is pytest version 3.x.y, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py
.. _`simpletest`:
Our first test run
Create your first test
----------------------------------------------------------
Let's create a first test file with a simple test function::
Create a simple test function with just four lines of code::
# content of test_sample.py
def func(x):
@ -41,7 +43,7 @@ Let's create a first test file with a simple test function::
def test_answer():
assert func(3) == 5
That's it. You can execute the test function now::
That's it. You can now execute the test function::
$ pytest
=========================== test session starts ============================
@ -62,30 +64,22 @@ That's it. You can execute the test function now::
test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
We got a failure report because our little ``func(3)`` call did not return ``5``.
This test returns a failure report because ``func(3)`` does not return ``5``.
.. note::
You can simply use the ``assert`` statement for asserting test
expectations. pytest's :ref:`assert introspection` will intelligently
report intermediate values of the assert expression freeing
you from the need to learn the many names of `JUnit legacy methods`_.
You can use the ``assert`` statement to verify test expectations. pytest's `advanced assertion introspection <http://docs.python.org/reference/simple_stmts.html#the-assert-statement>`_ will intelligently report intermediate values of the assert expression, freeing you from learning the many names of `JUnit legacy methods <http://docs.python.org/library/unittest.html#test-cases>`_.
.. _`JUnit legacy methods`: http://docs.python.org/library/unittest.html#test-cases
.. _`assert statement`: http://docs.python.org/reference/simple_stmts.html#the-assert-statement
Running multiple tests
Run multiple tests
----------------------------------------------------------
``pytest`` will run all files in the current directory and its subdirectories of the form test_*.py or \*_test.py. More generally, it follows :ref:`standard test discovery rules <test discovery>`.
``pytest`` will run all files of the form test_*.py or \*_test.py in the current directory and its subdirectories. More generally, it follows :ref:`standard test discovery rules <test discovery>`.
Asserting that a certain exception is raised
Assert that a certain exception is raised
--------------------------------------------------------------
If you want to assert that some code raises an exception you can
use the ``raises`` helper::
Use the ``raises`` helper to assert that some code raises an exception::
# content of test_sysexit.py
import pytest
@ -96,18 +90,16 @@ use the ``raises`` helper::
with pytest.raises(SystemExit):
f()
Running it with, this time in "quiet" reporting mode::
Execute the test function with “quiet” reporting mode::
$ pytest -q test_sysexit.py
. [100%]
1 passed in 0.12 seconds
Grouping multiple tests in a class
Group multiple tests in a class
--------------------------------------------------------------
Once you start to have more than a few tests it often makes sense
to group tests logically, in classes and modules. Let's write a class
containing two tests::
Once you develop multiple tests, you may want to group them into a class. pytest makes it easy to create a class containing more than one test::
# content of test_class.py
class TestClass(object):
@ -119,9 +111,7 @@ containing two tests::
x = "hello"
assert hasattr(x, 'check')
The two tests are found because of the standard :ref:`test discovery`.
There is no need to subclass anything. We can simply
run the module by passing its filename::
``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename::
$ pytest -q test_class.py
.F [100%]
@ -139,26 +129,19 @@ run the module by passing its filename::
test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds
The first test passed, the second failed. Again we can easily see
the intermediate values used in the assertion, helping us to
understand the reason for the failure.
The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you understand the reason for the failure.
Going functional: requesting a unique temporary directory
Request a unique temporary directory for functional tests
--------------------------------------------------------------
For functional tests one often needs to create some files
and pass them to application objects. pytest provides
:ref:`builtinfixtures` which allow to request arbitrary
resources, for example a unique temporary directory::
``pytest`` provides `Builtin fixtures/function arguments <https://docs.pytest.org/en/latest/builtin.html#builtinfixtures>`_ to request arbitrary resources, like a unique temporary directory::
# content of test_tmpdir.py
def test_needsfiles(tmpdir):
print (tmpdir)
assert 0
We list the name ``tmpdir`` in the test function signature and
``pytest`` will lookup and call a fixture factory to create the resource
before performing the test function call. Let's just run it::
List the name ``tmpdir`` in the test function signature and ``pytest`` will look up and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory::
$ pytest -q test_tmpdir.py
F [100%]
@ -177,22 +160,21 @@ before performing the test function call. Let's just run it::
PYTEST_TMPDIR/test_needsfiles0
1 failed in 0.12 seconds
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.
More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.
You can find out what kind of builtin :ref:`fixtures` exist by typing::
Find out what kind of builtin :ref:`pytest fixtures <fixtures>` exist with the command::
pytest --fixtures # shows builtin and custom fixtures
Where to go next
Continue reading
-------------------------------------
Here are a few suggestions where to go next:
Check out additional pytest resources to help you customize tests for your unique workflow:
* :ref:`cmdline` for command line invocation examples
* :ref:`good practices <goodpractices>` for virtualenv, test layout
* :ref:`existingtestsuite` for working with pre-existing tests
* :ref:`fixtures` for providing a functional baseline to your tests
* :ref:`plugins` managing and writing plugins
* ":ref:`cmdline`" for command line invocation examples
* ":ref:`goodpractices`" for virtualenv and test layouts
* ":ref:`existingtestsuite`" for working with pre-existing tests
* ":ref:`fixtures`" for providing a functional baseline to your tests
* ":ref:`plugins`" for managing and writing plugins
.. include:: links.inc

View File

@ -3,24 +3,11 @@
Logging
-------
.. versionadded 3.3.0
.. versionadded:: 3.3
.. versionchanged:: 3.4
.. note::
This feature is a drop-in replacement for the `pytest-catchlog
<https://pypi.org/project/pytest-catchlog/>`_ plugin and they will conflict
with each other. The backward compatibility API with ``pytest-capturelog``
has been dropped when this feature was introduced, so if for that reason you
still need ``pytest-catchlog`` you can disable the internal feature by
adding to your ``pytest.ini``:
.. code-block:: ini
[pytest]
addopts=-p no:logging
Log messages are captured by default and for each failed test will be shown in
the same manner as captured stdout and stderr.
pytest captures log messages of level ``WARNING`` or above automatically and displays them in their own section
for each failed test in the same manner as captured stdout and stderr.
Running without options::
@ -29,7 +16,7 @@ Running without options::
Shows failed tests like so::
----------------------- Captured stdlog call ----------------------
test_reporting.py 26 INFO text going to logger
test_reporting.py 26 WARNING text going to logger
----------------------- Captured stdout call ----------------------
text going to stdout
----------------------- Captured stderr call ----------------------
@ -37,11 +24,10 @@ Shows failed tests like so::
==================== 2 failed in 0.02 seconds =====================
By default each captured log message shows the module, line number, log level
and message. Showing the exact module and line number is useful for testing and
debugging. If desired the log format and date format can be specified to
anything that the logging module supports.
and message.
Running pytest specifying formatting options::
If desired, the log and date format can be set to anything that the
logging module supports by passing specific formatting options::
pytest --log-format="%(asctime)s %(levelname)s %(message)s" \
--log-date-format="%Y-%m-%d %H:%M:%S"
@ -49,14 +35,14 @@ Running pytest specifying formatting options::
Shows failed tests like so::
----------------------- Captured stdlog call ----------------------
2010-04-10 14:48:44 INFO text going to logger
2010-04-10 14:48:44 WARNING text going to logger
----------------------- Captured stdout call ----------------------
text going to stdout
----------------------- Captured stderr call ----------------------
text going to stderr
==================== 2 failed in 0.02 seconds =====================
These options can also be customized through a configuration file:
These options can also be customized through the ``pytest.ini`` file:
.. code-block:: ini
@ -69,7 +55,7 @@ with::
pytest --no-print-logs
Or in you ``pytest.ini``:
Or in the ``pytest.ini`` file:
.. code-block:: ini
@ -85,6 +71,10 @@ Shows failed tests in the normal manner as no logs were captured::
text going to stderr
==================== 2 failed in 0.02 seconds =====================
caplog fixture
^^^^^^^^^^^^^^
Inside tests it is possible to change the log level for the captured log
messages. This is supported by the ``caplog`` fixture::
@ -92,7 +82,7 @@ messages. This is supported by the ``caplog`` fixture::
caplog.set_level(logging.INFO)
pass
By default the level is set on the handler used to catch the log messages,
By default the level is set on the root logger,
however as a convenience it is also possible to set the log level of any
logger::
@ -100,14 +90,16 @@ logger::
caplog.set_level(logging.CRITICAL, logger='root.baz')
pass
The log levels set are restored automatically at the end of the test.
It is also possible to use a context manager to temporarily change the log
level::
level inside a ``with`` block::
def test_bar(caplog):
with caplog.at_level(logging.INFO):
pass
Again, by default the level of the handler is affected but the level of any
Again, by default the level of the root logger is affected but the level of any
logger can be changed instead with::
def test_bar(caplog):
@ -115,7 +107,7 @@ logger can be changed instead with::
pass
Lastly all the logs sent to the logger during the test run are made available on
the fixture in the form of both the LogRecord instances and the final log text.
the fixture in the form of both the ``logging.LogRecord`` instances and the final log text.
This is useful for when you want to assert on the contents of a message::
def test_baz(caplog):
@ -146,12 +138,41 @@ You can call ``caplog.clear()`` to reset the captured log records in a test::
your_test_method()
assert ['Foo'] == [rec.message for rec in caplog.records]
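A small self-contained sketch of asserting on captured records (the logger
name and message are illustrative):

.. code-block:: python

    import logging

    def test_warning_is_captured(caplog):
        logging.getLogger("app").warning("cannot connect to server")
        # ``caplog.records`` holds the logging.LogRecord instances and
        # ``caplog.text`` the final formatted log text
        assert "cannot connect to server" in caplog.text
        assert any(rec.levelno == logging.WARNING for rec in caplog.records)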
The ``caplog.records`` attribute contains records from the current stage only, so
inside the ``setup`` phase it contains only setup logs, and likewise for the
``call`` and ``teardown`` phases.
To access logs from other stages, use the ``caplog.get_records(when)`` method. As an example,
if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect
the records for the ``setup`` and ``call`` stages during teardown like so:
.. code-block:: python
@pytest.fixture
def window(caplog):
window = create_window()
yield window
for when in ('setup', 'call'):
messages = [x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING]
if messages:
pytest.fail('warning messages encountered during testing: {}'.format(messages))
caplog fixture API
~~~~~~~~~~~~~~~~~~
.. autoclass:: _pytest.logging.LogCaptureFixture
:members:
.. _live_logs:
Live Logs
^^^^^^^^^
By default, pytest will output any logging records with a level higher or
equal to WARNING. In order to actually see these logs in the console you have to
disable pytest output capture by passing ``-s``.
By setting the :confval:`log_cli` configuration option to ``true``, pytest will output
logging records as they are emitted directly into the console.
You can specify the logging level for which log records with equal or higher
level are printed to the console by passing ``--log-cli-level``. This setting
@ -190,3 +211,49 @@ option names are:
* ``log_file_level``
* ``log_file_format``
* ``log_file_date_format``
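Putting the console and file options together, a sketch of a ``pytest.ini``
using the options listed above (the path, levels and formats are illustrative):

.. code-block:: ini

    [pytest]
    log_cli = true
    log_cli_level = INFO
    log_file = pytest.log
    log_file_level = DEBUG
    log_file_format = %(asctime)s %(levelname)s %(message)s
    log_file_date_format = %Y-%m-%d %H:%M:%S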
.. _log_release_notes:
Release notes
^^^^^^^^^^^^^
This feature was introduced as a drop-in replacement for the `pytest-catchlog
<https://pypi.org/project/pytest-catchlog/>`_ plugin and they conflict
with each other. The backward compatibility API with ``pytest-capturelog``
has been dropped when this feature was introduced, so if for that reason you
still need ``pytest-catchlog`` you can disable the internal feature by
adding to your ``pytest.ini``:
.. code-block:: ini
[pytest]
addopts=-p no:logging
.. _log_changes_3_4:
Incompatible changes in pytest 3.4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This feature was introduced in ``3.3`` and some **incompatible changes** have been
made in ``3.4`` after community feedback:
* Log levels are no longer changed unless explicitly requested by the :confval:`log_level` configuration
or ``--log-level`` command-line options. This allows users to configure logger objects themselves.
* :ref:`Live Logs <live_logs>` is now disabled by default and can be enabled setting the
:confval:`log_cli` configuration option to ``true``. When enabled, the verbosity is increased so logging for each
test is visible.
* :ref:`Live Logs <live_logs>` are now sent to ``sys.stdout`` and no longer require the ``-s`` command-line option
to work.
If you want to partially restore the logging behavior of version ``3.3``, you can add these options to your ``ini``
file:
.. code-block:: ini
[pytest]
log_cli=true
log_level=NOTSET
More details about the discussion that led to these changes can be found in
issue `#3013 <https://github.com/pytest-dev/pytest/issues/3013>`_.

View File

@ -123,7 +123,7 @@ To get all combinations of multiple parametrized arguments you can stack
def test_foo(x, y):
pass
This will run the test with the arguments set to ``x=0/y=2``,``x=1/y=2``,
This will run the test with the arguments set to ``x=0/y=2``, ``x=1/y=2``,
``x=0/y=3``, and ``x=1/y=3`` exhausting parameters in the order of the decorators.
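Spelled out in full, the stacked decorators from the truncated snippet above
look like this:

.. code-block:: python

    import pytest

    @pytest.mark.parametrize("x", [0, 1])
    @pytest.mark.parametrize("y", [2, 3])
    def test_foo(x, y):
        # runs four times: x=0/y=2, x=1/y=2, x=0/y=3, x=1/y=3
        pass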
.. _`pytest_generate_tests`:

View File

@ -256,6 +256,66 @@ This will add an extra property ``example_key="1"`` to the generated
Also please note that using this feature will break any schema verification.
This might be a problem when used with some CI servers.
record_xml_attribute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. versionadded:: 3.4
To add an additional xml attribute to a testcase element, you can use the
``record_xml_attribute`` fixture. This can also be used to override existing values:
.. code-block:: python
def test_function(record_xml_attribute):
record_xml_attribute("assertions", "REQ-1234")
record_xml_attribute("classname", "custom_classname")
print('hello world')
assert True
Unlike ``record_xml_property``, this will not add a new child element.
Instead, this will add an attribute ``assertions="REQ-1234"`` to the generated
``testcase`` tag and override the default ``classname`` with ``classname="custom_classname"``:
.. code-block:: xml
<testcase classname="custom_classname" file="test_function.py" line="0" name="test_function" time="0.003" assertions="REQ-1234">
<system-out>
hello world
</system-out>
</testcase>
.. warning::
``record_xml_attribute`` is an experimental feature, and its interface might be replaced
by something more powerful and general in future versions. The
functionality per-se will be kept, however.
Using this over ``record_xml_property`` can help when using CI tools to parse the xml report.
However, some parsers are quite strict about the elements and attributes that are allowed.
Many tools use an xsd schema (like the example below) to validate incoming xml.
Make sure you are using attribute names that are allowed by your parser.
Below is the schema used by Jenkins to validate the XML report:
.. code-block:: xml
<xs:element name="testcase">
<xs:complexType>
<xs:sequence>
<xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
<xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="name" type="xs:string" use="required"/>
<xs:attribute name="assertions" type="xs:string" use="optional"/>
<xs:attribute name="time" type="xs:string" use="optional"/>
<xs:attribute name="classname" type="xs:string" use="optional"/>
<xs:attribute name="status" type="xs:string" use="optional"/>
</xs:complexType>
</xs:element>
LogXML: add_global_property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -389,4 +449,14 @@ hook was invoked::
*** test run reporting finishing
.. note::
Calling ``pytest.main()`` will result in importing your tests and any modules
that they import. Due to the caching mechanism of python's import system,
making subsequent calls to ``pytest.main()`` from the same process will not
reflect changes to those files between the calls. For this reason, making
multiple calls to ``pytest.main()`` from the same process (in order to re-run
tests, for example) is not recommended.
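If tests need to be re-run from a long-running process, one workaround (a
sketch; the ``tests/`` path is illustrative) is to launch each run in a fresh
interpreter:

.. code-block:: python

    import subprocess
    import sys

    # a fresh interpreter re-imports the test modules, avoiding the
    # import-caching problem described in the note above
    subprocess.call([sys.executable, "-m", "pytest", "tests/"])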
.. include:: links.inc

View File

@ -112,6 +112,12 @@ decorator or to all tests in a module by setting the ``pytestmark`` variable:
pytestmark = pytest.mark.filterwarnings('error')
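For instance, a minimal sketch of the decorator form: with the ``'error'``
filter, a warning issued inside the test becomes an exception that can be
asserted on:

.. code-block:: python

    import warnings

    import pytest

    @pytest.mark.filterwarnings("error")
    def test_warning_becomes_error():
        # the filter turns this warning into a raised exception
        with pytest.raises(UserWarning):
            warnings.warn("deprecated usage", UserWarning)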
.. note::
Except for these features, pytest does not change the python warning filter; it only captures
and displays the warnings which are issued with respect to the currently configured filter,
including changes to the filter made by test functions or by the system under test.
.. note::
``DeprecationWarning`` and ``PendingDeprecationWarning`` are hidden by the standard library

View File

@ -583,11 +583,22 @@ pytest hook reference
Initialization, command line and configuration hooks
----------------------------------------------------
Bootstrapping hooks
~~~~~~~~~~~~~~~~~~~
Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).
.. autofunction:: pytest_load_initial_conftests
.. autofunction:: pytest_cmdline_preparse
.. autofunction:: pytest_cmdline_parse
.. autofunction:: pytest_addoption
.. autofunction:: pytest_cmdline_main
Initialization hooks
~~~~~~~~~~~~~~~~~~~~
Initialization hooks called for plugins and ``conftest.py`` files.
.. autofunction:: pytest_addoption
.. autofunction:: pytest_configure
.. autofunction:: pytest_unconfigure

View File

@ -1,6 +1,5 @@
import json
import py
import textwrap
issues_url = "https://api.github.com/repos/pytest-dev/pytest/issues"

View File

@ -1,5 +1,6 @@
invoke
tox
devpi-client
gitpython
invoke
towncrier
tox
wheel

View File

@ -2,6 +2,7 @@
from __future__ import absolute_import, division, print_function
import os
import sys
import types
import six
@ -398,7 +399,7 @@ class TestGeneralUsage(object):
p = tmpdir.join('test_test_plugins_given_as_strings.py')
p.write('def test_foo(): pass')
mod = py.std.types.ModuleType("myplugin")
mod = types.ModuleType("myplugin")
monkeypatch.setitem(sys.modules, 'myplugin', mod)
assert pytest.main(args=[str(tmpdir)], plugins=['myplugin']) == 0
@ -492,17 +493,17 @@ class TestInvocationVariants(object):
def test_python_minus_m_invocation_ok(self, testdir):
p1 = testdir.makepyfile("def test_hello(): pass")
res = testdir.run(py.std.sys.executable, "-m", "pytest", str(p1))
res = testdir.run(sys.executable, "-m", "pytest", str(p1))
assert res.ret == 0
def test_python_minus_m_invocation_fail(self, testdir):
p1 = testdir.makepyfile("def test_fail(): 0/0")
res = testdir.run(py.std.sys.executable, "-m", "pytest", str(p1))
res = testdir.run(sys.executable, "-m", "pytest", str(p1))
assert res.ret == 1
def test_python_pytest_package(self, testdir):
p1 = testdir.makepyfile("def test_pass(): pass")
res = testdir.run(py.std.sys.executable, "-m", "pytest", str(p1))
res = testdir.run(sys.executable, "-m", "pytest", str(p1))
assert res.ret == 0
res.stdout.fnmatch_lines(["*1 passed*"])
@ -560,7 +561,7 @@ class TestInvocationVariants(object):
])
def join_pythonpath(what):
cur = py.std.os.environ.get('PYTHONPATH')
cur = os.environ.get('PYTHONPATH')
if cur:
return str(what) + os.pathsep + cur
return what
@ -618,7 +619,7 @@ class TestInvocationVariants(object):
# └── test_world.py
def join_pythonpath(*dirs):
cur = py.std.os.environ.get('PYTHONPATH')
cur = os.environ.get('PYTHONPATH')
if cur:
dirs += (cur,)
return os.pathsep.join(str(p) for p in dirs)
@ -901,7 +902,7 @@ def test_deferred_hook_checking(testdir):
testdir.syspathinsert()
testdir.makepyfile(**{
'plugin.py': """
class Hooks:
class Hooks(object):
def pytest_my_hook(self, config):
pass

View File

@ -2,6 +2,8 @@
from __future__ import absolute_import, division, print_function
import operator
import os
import sys
import _pytest
import py
import pytest
@ -472,7 +474,7 @@ class TestFormattedExcinfo(object):
excinfo = _pytest._code.ExceptionInfo()
repr = pr.repr_excinfo(excinfo)
assert repr.reprtraceback.reprentries[1].lines[0] == "> ???"
if py.std.sys.version_info[0] >= 3:
if sys.version_info[0] >= 3:
assert repr.chain[0][0].reprentries[1].lines[0] == "> ???"
def test_repr_many_line_source_not_existing(self):
@ -487,7 +489,7 @@ raise ValueError()
excinfo = _pytest._code.ExceptionInfo()
repr = pr.repr_excinfo(excinfo)
assert repr.reprtraceback.reprentries[1].lines[0] == "> ???"
if py.std.sys.version_info[0] >= 3:
if sys.version_info[0] >= 3:
assert repr.chain[0][0].reprentries[1].lines[0] == "> ???"
def test_repr_source_failing_fullsource(self):
@ -545,13 +547,13 @@ raise ValueError()
fail = IOError()
repr = pr.repr_excinfo(excinfo)
assert repr.reprtraceback.reprentries[0].lines[0] == "> ???"
if py.std.sys.version_info[0] >= 3:
if sys.version_info[0] >= 3:
assert repr.chain[0][0].reprentries[0].lines[0] == "> ???"
fail = py.error.ENOENT # noqa
repr = pr.repr_excinfo(excinfo)
assert repr.reprtraceback.reprentries[0].lines[0] == "> ???"
if py.std.sys.version_info[0] >= 3:
if sys.version_info[0] >= 3:
assert repr.chain[0][0].reprentries[0].lines[0] == "> ???"
def test_repr_local(self):
@ -738,7 +740,7 @@ raise ValueError()
repr = p.repr_excinfo(excinfo)
assert repr.reprtraceback
assert len(repr.reprtraceback.reprentries) == len(reprtb.reprentries)
if py.std.sys.version_info[0] >= 3:
if sys.version_info[0] >= 3:
assert repr.chain[0][0]
assert len(repr.chain[0][0].reprentries) == len(reprtb.reprentries)
assert repr.reprcrash.path.endswith("mod.py")
@ -758,7 +760,7 @@ raise ValueError()
def raiseos():
raise OSError(2)
monkeypatch.setattr(py.std.os, 'getcwd', raiseos)
monkeypatch.setattr(os, 'getcwd', raiseos)
assert p._makepath(__file__) == __file__
p.repr_traceback(excinfo)
@ -816,10 +818,10 @@ raise ValueError()
for style in ("short", "long", "no"):
for showlocals in (True, False):
repr = excinfo.getrepr(style=style, showlocals=showlocals)
if py.std.sys.version_info[0] < 3:
if sys.version_info[0] < 3:
assert isinstance(repr, ReprExceptionInfo)
assert repr.reprtraceback.style == style
if py.std.sys.version_info[0] >= 3:
if sys.version_info[0] >= 3:
assert isinstance(repr, ExceptionChainRepr)
for repr in repr.chain:
assert repr[0].style == style

View File

@ -2,6 +2,7 @@
# disable flake check on this file because some constructs are strange
# or redundant on purpose and can't be disable on a line-by-line basis
from __future__ import absolute_import, division, print_function
import inspect
import sys
import _pytest._code
@ -187,9 +188,9 @@ class TestSourceParsingAndCompiling(object):
def f():
raise ValueError()
""")
source1 = py.std.inspect.getsource(co1)
source1 = inspect.getsource(co1)
assert 'KeyError' in source1
source2 = py.std.inspect.getsource(co2)
source2 = inspect.getsource(co2)
assert 'ValueError' in source2
def test_getstatement(self):
@ -373,7 +374,6 @@ def test_deindent():
c = '''while True:
pass
'''
import inspect
lines = deindent(inspect.getsource(f).splitlines())
assert lines == ["def f():", " c = '''while True:", " pass", "'''"]
@ -461,7 +461,7 @@ def test_getfslineno():
fspath, lineno = getfslineno(A)
_, A_lineno = py.std.inspect.findsource(A)
_, A_lineno = inspect.findsource(A)
assert fspath.basename == "test_source.py"
assert lineno == A_lineno

View File

@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import logging
import pytest
logger = logging.getLogger(__name__)
sublogger = logging.getLogger(__name__ + '.baz')
@ -26,6 +27,30 @@ def test_change_level(caplog):
assert 'CRITICAL' in caplog.text
def test_change_level_undo(testdir):
"""Ensure that 'set_level' is undone after the end of the test"""
testdir.makepyfile('''
import logging
def test1(caplog):
caplog.set_level(logging.INFO)
# using + operator here so fnmatch_lines doesn't match the code in the traceback
logging.info('log from ' + 'test1')
assert 0
def test2(caplog):
# using + operator here so fnmatch_lines doesn't match the code in the traceback
logging.info('log from ' + 'test2')
assert 0
''')
result = testdir.runpytest_subprocess()
result.stdout.fnmatch_lines([
'*log from test1*',
'*2 failed in *',
])
assert 'log from test2' not in result.stdout.str()
def test_with_statement(caplog):
with caplog.at_level(logging.INFO):
logger.debug('handler DEBUG level')
@ -42,6 +67,7 @@ def test_with_statement(caplog):
def test_log_access(caplog):
caplog.set_level(logging.INFO)
logger.info('boo %s', 'arg')
assert caplog.records[0].levelname == 'INFO'
assert caplog.records[0].msg == 'boo %s'
@ -49,6 +75,7 @@ def test_log_access(caplog):
def test_record_tuples(caplog):
caplog.set_level(logging.INFO)
logger.info('boo %s', 'arg')
assert caplog.record_tuples == [
@ -57,6 +84,7 @@ def test_record_tuples(caplog):
def test_unicode(caplog):
caplog.set_level(logging.INFO)
logger.info(u'')
assert caplog.records[0].levelname == 'INFO'
assert caplog.records[0].msg == u''
@ -64,7 +92,29 @@ def test_unicode(caplog):
def test_clear(caplog):
caplog.set_level(logging.INFO)
logger.info(u'')
assert len(caplog.records)
caplog.clear()
assert not len(caplog.records)
@pytest.fixture
def logging_during_setup_and_teardown(caplog):
caplog.set_level('INFO')
logger.info('a_setup_log')
yield
logger.info('a_teardown_log')
assert [x.message for x in caplog.get_records('teardown')] == ['a_teardown_log']
def test_caplog_captures_for_all_stages(caplog, logging_during_setup_and_teardown):
assert not caplog.records
assert not caplog.get_records('call')
logger.info('a_call_log')
assert [x.message for x in caplog.get_records('call')] == ['a_call_log']
assert [x.message for x in caplog.get_records('setup')] == ['a_setup_log']
# This reaches into private API, don't use this type of thing in real tests!
assert set(caplog._item.catch_log_handlers.keys()) == {'setup', 'call'}

View File

@ -0,0 +1,29 @@
import logging
import py.io
from _pytest.logging import ColoredLevelFormatter
def test_coloredlogformatter():
logfmt = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
record = logging.LogRecord(
name='dummy', level=logging.INFO, pathname='dummypath', lineno=10,
msg='Test Message', args=(), exc_info=False)
class ColorConfig(object):
class option(object):
pass
tw = py.io.TerminalWriter()
tw.hasmarkup = True
formatter = ColoredLevelFormatter(tw, logfmt)
output = formatter.format(record)
assert output == ('dummypath 10 '
'\x1b[32mINFO \x1b[0m Test Message')
tw.hasmarkup = False
formatter = ColoredLevelFormatter(tw, logfmt)
output = formatter.format(record)
assert output == ('dummypath 10 '
'INFO Test Message')

View File

@ -1,5 +1,8 @@
# -*- coding: utf-8 -*-
import os
import six
import pytest
@ -35,7 +38,7 @@ def test_messages_logged(testdir):
logger.info('text going to logger')
assert False
''')
result = testdir.runpytest()
result = testdir.runpytest('--log-level=INFO')
assert result.ret == 1
result.stdout.fnmatch_lines(['*- Captured *log call -*',
'*text going to logger*'])
@ -58,7 +61,7 @@ def test_setup_logging(testdir):
logger.info('text going to logger from call')
assert False
''')
result = testdir.runpytest()
result = testdir.runpytest('--log-level=INFO')
assert result.ret == 1
result.stdout.fnmatch_lines(['*- Captured *log setup -*',
'*text going to logger from setup*',
@ -79,7 +82,7 @@ def test_teardown_logging(testdir):
logger.info('text going to logger from teardown')
assert False
''')
result = testdir.runpytest()
result = testdir.runpytest('--log-level=INFO')
assert result.ret == 1
result.stdout.fnmatch_lines(['*- Captured *log call -*',
'*text going to logger from call*',
@ -141,6 +144,30 @@ def test_disable_log_capturing_ini(testdir):
result.stdout.fnmatch_lines(['*- Captured *log call -*'])
@pytest.mark.parametrize('enabled', [True, False])
def test_log_cli_enabled_disabled(testdir, enabled):
msg = 'critical message logged by test'
testdir.makepyfile('''
import logging
def test_log_cli():
logging.critical("{}")
'''.format(msg))
if enabled:
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest()
if enabled:
result.stdout.fnmatch_lines([
'test_log_cli_enabled_disabled.py::test_log_cli ',
'test_log_cli_enabled_disabled.py* CRITICAL critical message logged by test',
'PASSED*',
])
else:
assert msg not in result.stdout.str()
def test_log_cli_default_level(testdir):
# Default log file level
testdir.makepyfile('''
@ -148,32 +175,103 @@ def test_log_cli_default_level(testdir):
import logging
def test_log_cli(request):
plugin = request.config.pluginmanager.getplugin('logging-plugin')
assert plugin.log_cli_handler.level == logging.WARNING
logging.getLogger('catchlog').info("This log message won't be shown")
logging.getLogger('catchlog').warning("This log message will be shown")
print('PASSED')
assert plugin.log_cli_handler.level == logging.NOTSET
logging.getLogger('catchlog').info("INFO message won't be shown")
logging.getLogger('catchlog').warning("WARNING message will be shown")
''')
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest('-s')
result = testdir.runpytest()
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
'test_log_cli_default_level.py PASSED',
'test_log_cli_default_level.py::test_log_cli ',
'test_log_cli_default_level.py*WARNING message will be shown*',
])
result.stderr.fnmatch_lines([
"* This log message will be shown"
])
for line in result.errlines:
try:
assert "This log message won't be shown" in line
pytest.fail("A log message was shown and it shouldn't have been")
except AssertionError:
continue
assert "INFO message won't be shown" not in result.stdout.str()
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
def test_log_cli_default_level_multiple_tests(testdir, request):
"""Ensure we reset the first newline added by the live logger between tests"""
filename = request.node.name + '.py'
testdir.makepyfile('''
import logging
def test_log_1():
logging.warning("log message from test_log_1")
def test_log_2():
logging.warning("log message from test_log_2")
''')
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'{}::test_log_1 '.format(filename),
'*WARNING*log message from test_log_1*',
'PASSED *50%*',
'{}::test_log_2 '.format(filename),
'*WARNING*log message from test_log_2*',
'PASSED *100%*',
'=* 2 passed in *=',
])
def test_log_cli_default_level_sections(testdir, request):
"""Check that with live logging enable we are printing the correct headers during setup/call/teardown."""
filename = request.node.name + '.py'
testdir.makepyfile('''
import pytest
import logging
@pytest.fixture
def fix(request):
logging.warning("log message from setup of {}".format(request.node.name))
yield
logging.warning("log message from teardown of {}".format(request.node.name))
def test_log_1(fix):
logging.warning("log message from test_log_1")
def test_log_2(fix):
logging.warning("log message from test_log_2")
''')
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'{}::test_log_1 '.format(filename),
'*-- live log setup --*',
'*WARNING*log message from setup of test_log_1*',
'*-- live log call --*',
'*WARNING*log message from test_log_1*',
'PASSED *50%*',
'*-- live log teardown --*',
'*WARNING*log message from teardown of test_log_1*',
'{}::test_log_2 '.format(filename),
'*-- live log setup --*',
'*WARNING*log message from setup of test_log_2*',
'*-- live log call --*',
'*WARNING*log message from test_log_2*',
'PASSED *100%*',
'*-- live log teardown --*',
'*WARNING*log message from teardown of test_log_2*',
'=* 2 passed in *=',
])
def test_log_cli_level(testdir):
# Default log file level
testdir.makepyfile('''
@ -186,22 +284,19 @@ def test_log_cli_level(testdir):
logging.getLogger('catchlog').info("This log message will be shown")
print('PASSED')
''')
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest('-s', '--log-cli-level=INFO')
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
'test_log_cli_level.py PASSED',
'test_log_cli_level.py*This log message will be shown',
'PASSED', # 'PASSED' on its own line because the log message prints a new line
])
result.stderr.fnmatch_lines([
"* This log message will be shown"
])
for line in result.errlines:
try:
assert "This log message won't be shown" in line
pytest.fail("A log message was shown and it shouldn't have been")
except AssertionError:
continue
assert "This log message won't be shown" not in result.stdout.str()
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
@ -210,17 +305,10 @@ def test_log_cli_level(testdir):
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
'test_log_cli_level.py PASSED',
'test_log_cli_level.py* This log message will be shown',
'PASSED', # 'PASSED' on its own line because the log message prints a new line
])
result.stderr.fnmatch_lines([
"* This log message will be shown"
])
for line in result.errlines:
try:
assert "This log message won't be shown" in line
pytest.fail("A log message was shown and it shouldn't have been")
except AssertionError:
continue
assert "This log message won't be shown" not in result.stdout.str()
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
@ -230,6 +318,7 @@ def test_log_cli_ini_level(testdir):
testdir.makeini(
"""
[pytest]
log_cli=true
log_cli_level = INFO
""")
testdir.makepyfile('''
@ -247,17 +336,10 @@ def test_log_cli_ini_level(testdir):
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
'test_log_cli_ini_level.py PASSED',
'test_log_cli_ini_level.py* This log message will be shown',
'PASSED', # 'PASSED' on its own line because the log message prints a new line
])
result.stderr.fnmatch_lines([
"* This log message will be shown"
])
for line in result.errlines:
try:
assert "This log message won't be shown" in line
pytest.fail("A log message was shown and it shouldn't have been")
except AssertionError:
continue
assert "This log message won't be shown" not in result.stdout.str()
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
@ -278,7 +360,7 @@ def test_log_file_cli(testdir):
log_file = testdir.tmpdir.join('pytest.log').strpath
result = testdir.runpytest('-s', '--log-file={0}'.format(log_file))
result = testdir.runpytest('-s', '--log-file={0}'.format(log_file), '--log-file-level=WARNING')
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
@ -327,6 +409,16 @@ def test_log_file_cli_level(testdir):
assert "This log message won't be shown" not in contents
def test_log_level_not_changed_by_default(testdir):
testdir.makepyfile('''
import logging
def test_log_file():
assert logging.getLogger().level == logging.WARNING
''')
result = testdir.runpytest('-s')
result.stdout.fnmatch_lines('* 1 passed in *')
def test_log_file_ini(testdir):
log_file = testdir.tmpdir.join('pytest.log').strpath
@ -334,6 +426,7 @@ def test_log_file_ini(testdir):
"""
[pytest]
log_file={0}
log_file_level=WARNING
""".format(log_file))
testdir.makepyfile('''
import pytest
@ -396,3 +489,53 @@ def test_log_file_ini_level(testdir):
contents = rfh.read()
assert "This log message will be shown" in contents
assert "This log message won't be shown" not in contents
@pytest.mark.parametrize('has_capture_manager', [True, False])
def test_live_logging_suspends_capture(has_capture_manager, request):
"""Test that capture manager is suspended when we emitting messages for live logging.
This tests the implementation calls instead of behavior because it is difficult/impossible to do it using
``testdir`` facilities because they do their own capturing.
We parametrize the test to also make sure _LiveLoggingStreamHandler works correctly if no capture manager plugin
is installed.
"""
import logging
from functools import partial
from _pytest.capture import CaptureManager
from _pytest.logging import _LiveLoggingStreamHandler
class MockCaptureManager:
calls = []
def suspend_global_capture(self):
self.calls.append('suspend_global_capture')
def resume_global_capture(self):
self.calls.append('resume_global_capture')
# sanity check
assert CaptureManager.suspend_capture_item
assert CaptureManager.resume_global_capture
class DummyTerminal(six.StringIO):
def section(self, *args, **kwargs):
pass
out_file = DummyTerminal()
capture_manager = MockCaptureManager() if has_capture_manager else None
handler = _LiveLoggingStreamHandler(out_file, capture_manager)
handler.set_when('call')
logger = logging.getLogger(__name__ + '.test_live_logging_suspends_capture')
logger.addHandler(handler)
request.addfinalizer(partial(logger.removeHandler, handler))
logger.critical('some message')
if has_capture_manager:
assert MockCaptureManager.calls == ['suspend_global_capture', 'resume_global_capture']
else:
assert MockCaptureManager.calls == []
assert out_file.getvalue() == '\nsome message\n'

View File

@ -4,7 +4,6 @@ import sys
from textwrap import dedent
import _pytest._code
import py
import pytest
from _pytest.main import EXIT_NOTESTSCOLLECTED
from _pytest.nodes import Collector
@ -22,7 +21,7 @@ class TestModule(object):
b = testdir.mkdir("b")
p = a.ensure("test_whatever.py")
p.pyimport()
del py.std.sys.modules['test_whatever']
del sys.modules['test_whatever']
b.ensure("test_whatever.py")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
@ -751,7 +750,7 @@ class TestSorting(object):
assert fn1 == fn2
assert fn1 != modcol
if py.std.sys.version_info < (3, 0):
if sys.version_info < (3, 0):
assert cmp(fn1, fn2) == 0
assert hash(fn1) == hash(fn2)
@ -880,10 +879,10 @@ class TestConftestCustomization(object):
import sys, os, imp
from _pytest.python import Module
class Loader:
class Loader(object):
def load_module(self, name):
return imp.load_source(name, name + ".narf")
class Finder:
class Finder(object):
def find_module(self, name, path=None):
if os.path.exists(name + ".narf"):
return Loader()

View File

@ -2828,7 +2828,7 @@ class TestShowFixtures(object):
def test_show_fixtures_indented_in_class(self, testdir):
p = testdir.makepyfile(dedent('''
import pytest
class TestClass:
class TestClass(object):
@pytest.fixture
def fixture1(self):
"""line1

View File

@ -14,7 +14,7 @@ PY3 = sys.version_info >= (3, 0)
class TestMetafunc(object):
def Metafunc(self, func):
def Metafunc(self, func, config=None):
# the unit tests of this class check if things work correctly
# on the funcarg level, so we don't need a full blown
# initialization
@ -26,7 +26,7 @@ class TestMetafunc(object):
names = fixtures.getfuncargnames(func)
fixtureinfo = FixtureInfo(names)
return python.Metafunc(func, fixtureinfo, None)
return python.Metafunc(func, fixtureinfo, config)
def test_no_funcargs(self, testdir):
def function():
@ -156,7 +156,19 @@ class TestMetafunc(object):
def test_parametrize_empty_list(self):
def func(y):
pass
metafunc = self.Metafunc(func)
class MockConfig(object):
def getini(self, name):
return ''
@property
def hook(self):
return self
def pytest_make_parametrize_id(self, **kw):
pass
metafunc = self.Metafunc(func, MockConfig())
metafunc.parametrize("y", [])
assert 'skip' == metafunc._calls[0].marks[0].name
@ -241,7 +253,7 @@ class TestMetafunc(object):
"""
from _pytest.python import _idval
class TestClass:
class TestClass(object):
pass
def test_function():
@ -749,7 +761,7 @@ class TestMetafuncFunctional(object):
def test_attributes(self, testdir):
p = testdir.makepyfile("""
# assumes that generate/provide runs in the same process
import py, pytest
import sys, pytest
def pytest_generate_tests(metafunc):
metafunc.addcall(param=metafunc)
@ -768,7 +780,7 @@ class TestMetafuncFunctional(object):
def test_method(self, metafunc, pytestconfig):
assert metafunc.config == pytestconfig
assert metafunc.module.__name__ == __name__
if py.std.sys.version_info > (3, 0):
if sys.version_info > (3, 0):
unbound = TestClass.test_method
else:
unbound = TestClass.test_method.im_func


@ -1,5 +1,6 @@
from __future__ import absolute_import, division, print_function
import py
import subprocess
import sys
import pytest
# test for _argcomplete but not specific for any application
@ -23,21 +24,21 @@ def equal_with_bash(prefix, ffc, fc, out=None):
def _wrapcall(*args, **kargs):
try:
if py.std.sys.version_info > (2, 7):
return py.std.subprocess.check_output(*args, **kargs).decode().splitlines()
if sys.version_info > (2, 7):
return subprocess.check_output(*args, **kargs).decode().splitlines()
if 'stdout' in kargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
process = py.std.subprocess.Popen(
stdout=py.std.subprocess.PIPE, *args, **kargs)
process = subprocess.Popen(
stdout=subprocess.PIPE, *args, **kargs)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kargs.get("args")
if cmd is None:
cmd = args[0]
raise py.std.subprocess.CalledProcessError(retcode, cmd)
raise subprocess.CalledProcessError(retcode, cmd)
return output.decode().splitlines()
except py.std.subprocess.CalledProcessError:
except subprocess.CalledProcessError:
return []
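The hunk above is one instance of a pattern repeated throughout this commit: the
deprecated ``py.std`` indirection is dropped in favor of importing the stdlib
module directly. A before/after sketch of the idiom (command illustrative):

    # before: the standard library reached through the lazy py.std proxy
    import py
    out = py.std.subprocess.check_output(['echo', 'hi'])

    # after: a plain import, same behavior, one less layer
    import subprocess
    out = subprocess.check_output(['echo', 'hi'])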
@ -83,7 +84,7 @@ class TestArgComplete(object):
ffc = FastFilesCompleter()
fc = FilesCompleter()
for x in ['/', '/d', '/data', 'qqq', '']:
assert equal_with_bash(x, ffc, fc, out=py.std.sys.stdout)
assert equal_with_bash(x, ffc, fc, out=sys.stdout)
@pytest.mark.skipif("sys.platform in ('win32', 'darwin')")
def test_remove_dir_prefix(self):
@ -94,4 +95,4 @@ class TestArgComplete(object):
ffc = FastFilesCompleter()
fc = FilesCompleter()
for x in '/usr/'.split():
assert not equal_with_bash(x, ffc, fc, out=py.std.sys.stdout)
assert not equal_with_bash(x, ffc, fc, out=sys.stdout)


@ -5,6 +5,7 @@ import os
import py_compile
import stat
import sys
import textwrap
import zipfile
import py
import pytest
@ -911,7 +912,7 @@ class TestAssertionRewriteHookDetails(object):
def test_reload_is_same(self, testdir):
# A file that will be picked up during collecting.
testdir.tmpdir.join("file.py").ensure()
testdir.tmpdir.join("pytest.ini").write(py.std.textwrap.dedent("""
testdir.tmpdir.join("pytest.ini").write(textwrap.dedent("""
[pytest]
python_files = *.py
"""))
@ -997,7 +998,7 @@ class TestIssue2121():
def test_simple_failure():
assert 1 + 1 == 3
""")
testdir.tmpdir.join("pytest.ini").write(py.std.textwrap.dedent("""
testdir.tmpdir.join("pytest.ini").write(textwrap.dedent("""
[pytest]
python_files = tests/**.py
"""))

testing/test_cache.py → testing/test_cacheprovider.py (Executable file → Normal file)

@ -31,7 +31,7 @@ class TestNewAPI(object):
def test_cache_writefail_cachfile_silent(self, testdir):
testdir.makeini("[pytest]")
testdir.tmpdir.join('.cache').write('gone wrong')
testdir.tmpdir.join('.pytest_cache').write('gone wrong')
config = testdir.parseconfigure()
cache = config.cache
cache.set('test/broken', [])
@ -39,14 +39,14 @@ class TestNewAPI(object):
@pytest.mark.skipif(sys.platform.startswith('win'), reason='no chmod on windows')
def test_cache_writefail_permissions(self, testdir):
testdir.makeini("[pytest]")
testdir.tmpdir.ensure_dir('.cache').chmod(0)
testdir.tmpdir.ensure_dir('.pytest_cache').chmod(0)
config = testdir.parseconfigure()
cache = config.cache
cache.set('test/broken', [])
@pytest.mark.skipif(sys.platform.startswith('win'), reason='no chmod on windows')
def test_cache_failure_warns(self, testdir):
testdir.tmpdir.ensure_dir('.cache').chmod(0)
testdir.tmpdir.ensure_dir('.pytest_cache').chmod(0)
testdir.makepyfile("""
def test_error():
raise Exception
@ -127,7 +127,7 @@ def test_cache_reportheader(testdir):
""")
result = testdir.runpytest("-v")
result.stdout.fnmatch_lines([
"cachedir: .cache"
"cachedir: .pytest_cache"
])
@ -201,8 +201,8 @@ class TestLastFailed(object):
])
# Run this again to make sure clear-cache is robust
if os.path.isdir('.cache'):
shutil.rmtree('.cache')
if os.path.isdir('.pytest_cache'):
shutil.rmtree('.pytest_cache')
result = testdir.runpytest("--lf", "--cache-clear")
result.stdout.fnmatch_lines([
"*1 failed*2 passed*",
@ -410,8 +410,8 @@ class TestLastFailed(object):
def test_lastfailed_collectfailure(self, testdir, monkeypatch):
testdir.makepyfile(test_maybe="""
import py
env = py.std.os.environ
import os
env = os.environ
if '1' == env['FAILIMPORT']:
raise ImportError('fail')
def test_hello():
@ -439,8 +439,8 @@ class TestLastFailed(object):
def test_lastfailed_failure_subset(self, testdir, monkeypatch):
testdir.makepyfile(test_maybe="""
import py
env = py.std.os.environ
import os
env = os.environ
if '1' == env['FAILIMPORT']:
raise ImportError('fail')
def test_hello():
@ -448,8 +448,8 @@ class TestLastFailed(object):
""")
testdir.makepyfile(test_maybe2="""
import py
env = py.std.os.environ
import os
env = os.environ
if '1' == env['FAILIMPORT']:
raise ImportError('fail')
def test_hello():
@ -495,15 +495,15 @@ class TestLastFailed(object):
# Issue #1342
testdir.makepyfile(test_empty='')
testdir.runpytest('-q', '--lf')
assert not os.path.exists('.cache')
assert not os.path.exists('.pytest_cache')
testdir.makepyfile(test_successful='def test_success():\n assert True')
testdir.runpytest('-q', '--lf')
assert not os.path.exists('.cache')
assert not os.path.exists('.pytest_cache')
testdir.makepyfile(test_errored='def test_error():\n assert False')
testdir.runpytest('-q', '--lf')
assert os.path.exists('.cache')
assert os.path.exists('.pytest_cache')
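All of these substitutions follow from the cache directory rename; the ``Cache``
API itself is untouched. A hedged round-trip sketch (test and key names
hypothetical):

    def test_cache_roundtrip(testdir):
        testdir.makeini("[pytest]")
        config = testdir.parseconfigure()
        # the public API is unchanged; only the on-disk location moved
        config.cache.set("example/key", [1, 2, 3])
        assert config.cache.get("example/key", None) == [1, 2, 3]
        # data now lands under .pytest_cache instead of .cache
        assert testdir.tmpdir.join(".pytest_cache").check(dir=1)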
def test_xfail_not_considered_failure(self, testdir):
testdir.makepyfile('''


@ -1245,7 +1245,7 @@ def test_py36_windowsconsoleio_workaround_non_standard_streams():
"""
from _pytest.capture import _py36_windowsconsoleio_workaround
class DummyStream:
class DummyStream(object):
def write(self, s):
pass
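This one-line change is another instance of the commit-wide switch to new-style
classes, which only matters under Python 2. A quick illustration of the
difference being removed:

    class Old:            # Python 2: an old-style (classic) class
        pass

    class New(object):    # a new-style class on both Python 2 and 3
        pass

    # under Python 2:
    #   type(Old())  ->  <type 'instance'>
    #   type(New())  ->  <class '__main__.New'>
    # new-style classes also get working super(), descriptors and __mro__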


@ -1,6 +1,7 @@
from __future__ import absolute_import, division, print_function
import pprint
import sys
import pytest
import py
import _pytest._code
from _pytest.main import Session, EXIT_NOTESTSCOLLECTED, _in_venv
@ -36,7 +37,7 @@ class TestCollector(object):
assert fn1 == fn2
assert fn1 != modcol
if py.std.sys.version_info < (3, 0):
if sys.version_info < (3, 0):
assert cmp(fn1, fn2) == 0
assert hash(fn1) == hash(fn2)
@ -128,7 +129,7 @@ class TestCollectFS(object):
("activate", "activate.csh", "activate.fish",
"Activate", "Activate.bat", "Activate.ps1"))
def test_ignored_virtualenvs(self, testdir, fname):
bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
bindir = "Scripts" if sys.platform.startswith("win") else "bin"
testdir.tmpdir.ensure("virtual", bindir, fname)
testfile = testdir.tmpdir.ensure("virtual", "test_invenv.py")
testfile.write("def test_hello(): pass")
@ -147,7 +148,7 @@ class TestCollectFS(object):
("activate", "activate.csh", "activate.fish",
"Activate", "Activate.bat", "Activate.ps1"))
def test_ignored_virtualenvs_norecursedirs_precedence(self, testdir, fname):
bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
bindir = "Scripts" if sys.platform.startswith("win") else "bin"
# norecursedirs takes priority
testdir.tmpdir.ensure(".virtual", bindir, fname)
testfile = testdir.tmpdir.ensure(".virtual", "test_invenv.py")
@ -163,7 +164,7 @@ class TestCollectFS(object):
"Activate", "Activate.bat", "Activate.ps1"))
def test__in_venv(self, testdir, fname):
"""Directly test the virtual env detection function"""
bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
bindir = "Scripts" if sys.platform.startswith("win") else "bin"
# no bin/activate, not a virtualenv
base_path = testdir.tmpdir.mkdir('venv')
assert _in_venv(base_path) is False
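The parametrized file names above encode the detection heuristic. A hedged
sketch of equivalent logic (``looks_like_venv`` is a hypothetical name, not the
actual ``_in_venv`` implementation):

    import os
    import sys

    ACTIVATE_SCRIPTS = ('activate', 'activate.csh', 'activate.fish',
                        'Activate', 'Activate.bat', 'Activate.ps1')

    def looks_like_venv(path):
        # a directory counts as a virtualenv when its bin/ (Scripts/ on
        # Windows) subdirectory contains any known activation script
        bindir = 'Scripts' if sys.platform.startswith('win') else 'bin'
        return any(os.path.exists(os.path.join(str(path), bindir, name))
                   for name in ACTIVATE_SCRIPTS)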
@ -436,7 +437,7 @@ class TestSession(object):
assert item.name == "test_func"
newid = item.nodeid
assert newid == id
py.std.pprint.pprint(hookrec.calls)
pprint.pprint(hookrec.calls)
topdir = testdir.tmpdir # noqa
hookrec.assert_contains([
("pytest_collectstart", "collector.fspath == topdir"),
@ -486,7 +487,7 @@ class TestSession(object):
id = p.basename
items, hookrec = testdir.inline_genitems(id)
py.std.pprint.pprint(hookrec.calls)
pprint.pprint(hookrec.calls)
assert len(items) == 2
hookrec.assert_contains([
("pytest_collectstart",
@ -508,7 +509,7 @@ class TestSession(object):
items, hookrec = testdir.inline_genitems()
assert len(items) == 1
py.std.pprint.pprint(hookrec.calls)
pprint.pprint(hookrec.calls)
hookrec.assert_contains([
("pytest_collectstart", "collector.fspath == test_aaa"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
@ -529,7 +530,7 @@ class TestSession(object):
items, hookrec = testdir.inline_genitems(id)
assert len(items) == 2
py.std.pprint.pprint(hookrec.calls)
pprint.pprint(hookrec.calls)
hookrec.assert_contains([
("pytest_collectstart", "collector.fspath == test_aaa"),
("pytest_pycollect_makeitem", "name == 'test_func'"),


@ -1,6 +1,6 @@
from __future__ import absolute_import, division, print_function
import sys
import py
import textwrap
import pytest
import _pytest._code
@ -57,7 +57,7 @@ class TestParseIni(object):
('pytest', 'pytest.ini')],
)
def test_ini_names(self, testdir, name, section):
testdir.tmpdir.join(name).write(py.std.textwrap.dedent("""
testdir.tmpdir.join(name).write(textwrap.dedent("""
[{section}]
minversion = 1.0
""".format(section=section)))
@ -66,11 +66,11 @@ class TestParseIni(object):
def test_toxini_before_lower_pytestini(self, testdir):
sub = testdir.tmpdir.mkdir("sub")
sub.join("tox.ini").write(py.std.textwrap.dedent("""
sub.join("tox.ini").write(textwrap.dedent("""
[pytest]
minversion = 2.0
"""))
testdir.tmpdir.join("pytest.ini").write(py.std.textwrap.dedent("""
testdir.tmpdir.join("pytest.ini").write(textwrap.dedent("""
[pytest]
minversion = 1.5
"""))
@ -731,7 +731,7 @@ class TestRootdir(object):
class TestOverrideIniArgs(object):
@pytest.mark.parametrize("name", "setup.cfg tox.ini pytest.ini".split())
def test_override_ini_names(self, testdir, name):
testdir.tmpdir.join(name).write(py.std.textwrap.dedent("""
testdir.tmpdir.join(name).write(textwrap.dedent("""
[pytest]
custom = 1.0"""))
testdir.makeconftest("""
@ -781,16 +781,18 @@ class TestOverrideIniArgs(object):
testdir.makeini("""
[pytest]
custom_option_1=custom_option_1
custom_option_2=custom_option_2""")
custom_option_2=custom_option_2
""")
testdir.makepyfile("""
def test_multiple_options(pytestconfig):
prefix = "custom_option"
for x in range(1, 5):
ini_value=pytestconfig.getini("%s_%d" % (prefix, x))
print('\\nini%d:%s' % (x, ini_value))""")
print('\\nini%d:%s' % (x, ini_value))
""")
result = testdir.runpytest(
"--override-ini", 'custom_option_1=fulldir=/tmp/user1',
'custom_option_2=url=/tmp/user2?a=b&d=e',
'-o', 'custom_option_2=url=/tmp/user2?a=b&d=e',
"-o", 'custom_option_3=True',
"-o", 'custom_option_4=no', "-s")
result.stdout.fnmatch_lines(["ini1:fulldir=/tmp/user1",
@ -853,10 +855,42 @@ class TestOverrideIniArgs(object):
assert rootdir == tmpdir
assert inifile is None
def test_addopts_before_initini(self, testdir, tmpdir, monkeypatch):
def test_addopts_before_initini(self, monkeypatch):
cache_dir = '.custom_cache'
monkeypatch.setenv('PYTEST_ADDOPTS', '-o cache_dir=%s' % cache_dir)
from _pytest.config import get_config
config = get_config()
config._preparse([], addopts=True)
assert config._override_ini == [['cache_dir=%s' % cache_dir]]
assert config._override_ini == ['cache_dir=%s' % cache_dir]
def test_override_ini_does_not_contain_paths(self):
"""Check that -o no longer swallows all options after it (#3103)"""
from _pytest.config import get_config
config = get_config()
config._preparse(['-o', 'cache_dir=/cache', '/some/test/path'])
assert config._override_ini == ['cache_dir=/cache']
def test_multiple_override_ini_options(self, testdir, request):
"""Ensure a file path following a '-o' option does not generate an error (#3103)"""
testdir.makepyfile(**{
"conftest.py": """
def pytest_addoption(parser):
parser.addini('foo', default=None, help='some option')
parser.addini('bar', default=None, help='some option')
""",
"test_foo.py": """
def test(pytestconfig):
assert pytestconfig.getini('foo') == '1'
assert pytestconfig.getini('bar') == '0'
""",
"test_bar.py": """
def test():
assert False
""",
})
result = testdir.runpytest('-o', 'foo=1', '-o', 'bar=0', 'test_foo.py')
assert 'ERROR:' not in result.stderr.str()
result.stdout.fnmatch_lines([
'collected 1 item',
'*= 1 passed in *=',
])
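Together these tests pin down the corrected ``-o`` semantics: each occurrence
consumes exactly one KEY=VALUE pair and appends it to a flat
``config._override_ini`` list, so positional arguments that follow are still
treated as test paths. A condensed sketch of that contract (path hypothetical):

    from _pytest.config import get_config

    config = get_config()
    config._preparse(['-o', 'foo=1', '-o', 'bar=0', 'tests/test_foo.py'])
    # both pairs are recorded; the trailing path is not swallowed
    assert config._override_ini == ['foo=1', 'bar=0']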


@ -232,7 +232,7 @@ def test_fixture_dependency(testdir, monkeypatch):
ct1.write("")
sub = testdir.mkdir("sub")
sub.join("__init__.py").write("")
sub.join("conftest.py").write(py.std.textwrap.dedent("""
sub.join("conftest.py").write(dedent("""
import pytest
@pytest.fixture
@ -249,7 +249,7 @@ def test_fixture_dependency(testdir, monkeypatch):
"""))
subsub = sub.mkdir("subsub")
subsub.join("__init__.py").write("")
subsub.join("test_bar.py").write(py.std.textwrap.dedent("""
subsub.join("test_bar.py").write(dedent("""
import pytest
@pytest.fixture
@ -265,7 +265,7 @@ def test_fixture_dependency(testdir, monkeypatch):
def test_conftest_found_with_double_dash(testdir):
sub = testdir.mkdir("sub")
sub.join("conftest.py").write(py.std.textwrap.dedent("""
sub.join("conftest.py").write(dedent("""
def pytest_addoption(parser):
parser.addoption("--hello-world", action="store_true")
"""))


@ -879,6 +879,27 @@ def test_record_property_same_name(testdir):
pnodes[1].assert_attr(name="foo", value="baz")
def test_record_attribute(testdir):
testdir.makepyfile("""
import pytest
@pytest.fixture
def other(record_xml_attribute):
record_xml_attribute("bar", 1)
def test_record(record_xml_attribute, other):
record_xml_attribute("foo", "<1");
""")
result, dom = runandparse(testdir, '-rw')
node = dom.find_first_by_tag("testsuite")
tnode = node.find_first_by_tag("testcase")
tnode.assert_attr(bar="1")
tnode.assert_attr(foo="<1")
result.stdout.fnmatch_lines([
'test_record_attribute.py::test_record',
'*record_xml_attribute*experimental*',
])
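For reference, the ``<testcase>`` node produced by the test above would carry
the injected attributes roughly as follows (other attributes elided, note the
escaping of ``<``):

    <testcase classname="test_record_attribute" name="test_record"
              bar="1" foo="&lt;1" />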
def test_random_report_log_xdist(testdir):
"""xdist calls pytest_runtest_logreport as they are executed by the slaves,
with nodes from several nodes overlapping, so junitxml must cope with that


@ -3,7 +3,10 @@ import os
import sys
import pytest
from _pytest.mark import MarkGenerator as Mark, ParameterSet, transfer_markers
from _pytest.mark import (
MarkGenerator as Mark, ParameterSet, transfer_markers,
EMPTY_PARAMETERSET_OPTION,
)
class TestMark(object):
@ -344,6 +347,21 @@ def test_keyword_option_parametrize(spec, testdir):
assert list(passed) == list(passed_result)
@pytest.mark.parametrize("spec", [
("foo or import", "ERROR: Python keyword 'import' not accepted in expressions passed to '-k'"),
("foo or", "ERROR: Wrong expression passed to '-k': foo or")
])
def test_keyword_option_wrong_arguments(spec, testdir, capsys):
testdir.makepyfile("""
def test_func(arg):
pass
""")
opt, expected_result = spec
testdir.inline_run("-k", opt)
out = capsys.readouterr().err
assert expected_result in out
def test_parametrized_collected_from_command_line(testdir):
"""Parametrized test not collected if test named specified
in command line issue#649.
@ -876,3 +894,27 @@ class TestMarkDecorator(object):
])
def test__eq__(self, lhs, rhs, expected):
assert (lhs == rhs) == expected
@pytest.mark.parametrize('mark', [None, '', 'skip', 'xfail'])
def test_parameterset_for_parametrize_marks(testdir, mark):
if mark is not None:
testdir.makeini(
"[pytest]\n{}={}".format(EMPTY_PARAMETERSET_OPTION, mark))
config = testdir.parseconfig()
from _pytest.mark import pytest_configure, get_empty_parameterset_mark
pytest_configure(config)
result_mark = get_empty_parameterset_mark(config, ['a'], all)
if mark in (None, ''):
# normalize to the requested name
mark = 'skip'
assert result_mark.name == mark
assert result_mark.kwargs['reason'].startswith("got empty parameter set ")
if mark == 'xfail':
assert result_mark.kwargs.get('run') is False
def test_parameterset_for_parametrize_bad_markname(testdir):
with pytest.raises(pytest.UsageError):
test_parameterset_for_parametrize_marks(testdir, 'bad')
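In user terms, the option exercised here selects what happens to a parametrized
test whose parameter list is empty. A minimal sketch, assuming
``empty_parameter_set_mark = xfail`` has been set in the ini file:

    import pytest

    @pytest.mark.parametrize('x', [])
    def test_nothing(x):
        # never runs: with the ini setting above, the empty parameter set is
        # reported as xfailed (with run=False) instead of skipped
        pass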


@ -1,4 +1,5 @@
from __future__ import absolute_import, division, print_function
import argparse
import sys
import os
import py
@ -189,7 +190,7 @@ class TestParser(object):
assert option.no is False
def test_drop_short_helper(self):
parser = py.std.argparse.ArgumentParser(formatter_class=parseopt.DropShorterLongHelpFormatter)
parser = argparse.ArgumentParser(formatter_class=parseopt.DropShorterLongHelpFormatter)
parser.add_argument('-t', '--twoword', '--duo', '--two-word', '--two',
help='foo').map_long_option = {'two': 'two-word'}
# throws error on --deux only!


@ -402,5 +402,4 @@ class TestPDB(object):
child = testdir.spawn_pytest("--pdbcls=custom_pdb:CustomPdb %s" % str(p1))
child.expect('custom set_trace>')
if child.isalive():
child.wait()
self.flush(child)


@ -1,8 +1,10 @@
# encoding: UTF-8
from __future__ import absolute_import, division, print_function
import pytest
import py
import os
import re
import sys
import types
from _pytest.config import get_config, PytestPluginManager
from _pytest.main import EXIT_NOTESTSCOLLECTED, Session
@ -208,14 +210,14 @@ def test_importplugin_error_message(testdir, pytestpm):
expected_message = '.*Error importing plugin "qwe": Not possible to import: .'
expected_traceback = ".*in test_traceback"
assert py.std.re.match(expected_message, str(excinfo.value))
assert py.std.re.match(expected_traceback, str(excinfo.traceback[-1]))
assert re.match(expected_message, str(excinfo.value))
assert re.match(expected_traceback, str(excinfo.traceback[-1]))
class TestPytestPluginManager(object):
def test_register_imported_modules(self):
pm = PytestPluginManager()
mod = py.std.types.ModuleType("x.y.pytest_hello")
mod = types.ModuleType("x.y.pytest_hello")
pm.register(mod)
assert pm.is_registered(mod)
values = pm.get_plugins()
@ -226,8 +228,8 @@ class TestPytestPluginManager(object):
assert pm.get_plugins() == values
def test_canonical_import(self, monkeypatch):
mod = py.std.types.ModuleType("pytest_xyz")
monkeypatch.setitem(py.std.sys.modules, 'pytest_xyz', mod)
mod = types.ModuleType("pytest_xyz")
monkeypatch.setitem(sys.modules, 'pytest_xyz', mod)
pm = PytestPluginManager()
pm.import_plugin('pytest_xyz')
assert pm.get_plugin('pytest_xyz') == mod
@ -237,7 +239,7 @@ class TestPytestPluginManager(object):
testdir.syspathinsert()
testdir.makepyfile(pytest_p1="#")
testdir.makepyfile(pytest_p2="#")
mod = py.std.types.ModuleType("temp")
mod = types.ModuleType("temp")
mod.pytest_plugins = ["pytest_p1", "pytest_p2"]
pytestpm.consider_module(mod)
assert pytestpm.get_plugin("pytest_p1").__name__ == "pytest_p1"
@ -245,12 +247,12 @@ class TestPytestPluginManager(object):
def test_consider_module_import_module(self, testdir):
pytestpm = get_config().pluginmanager
mod = py.std.types.ModuleType("x")
mod = types.ModuleType("x")
mod.pytest_plugins = "pytest_a"
aplugin = testdir.makepyfile(pytest_a="#")
reprec = testdir.make_hook_recorder(pytestpm)
# syspath.prepend(aplugin.dirpath())
py.std.sys.path.insert(0, str(aplugin.dirpath()))
sys.path.insert(0, str(aplugin.dirpath()))
pytestpm.consider_module(mod)
call = reprec.getcall(pytestpm.hook.pytest_plugin_registered.name)
assert call.plugin.__name__ == "pytest_a"


@ -135,7 +135,7 @@ def test_makepyfile_utf8(testdir):
assert u"mixed_encoding = u'São Paulo'".encode('utf-8') in p.read('rb')
class TestInlineRunModulesCleanup:
class TestInlineRunModulesCleanup(object):
def test_inline_run_test_module_not_cleaned_up(self, testdir):
test_mod = testdir.makepyfile("def test_foo(): assert True")
result = testdir.inline_run(str(test_mod))
@ -146,7 +146,7 @@ class TestInlineRunModulesCleanup:
assert result2.ret == EXIT_TESTSFAILED
def spy_factory(self):
class SysModulesSnapshotSpy:
class SysModulesSnapshotSpy(object):
instances = []
def __init__(self, preserve=None):
@ -223,7 +223,7 @@ def test_inline_run_clean_sys_paths(testdir):
assert sys.meta_path == original_meta_path
def spy_factory(self):
class SysPathsSnapshotSpy:
class SysPathsSnapshotSpy(object):
instances = []
def __init__(self):
@ -266,7 +266,7 @@ def test_cwd_snapshot(tmpdir):
assert py.path.local() == foo
class TestSysModulesSnapshot:
class TestSysModulesSnapshot(object):
key = 'my-test-module'
def test_remove_added(self):
@ -329,7 +329,7 @@ class TestSysModulesSnapshot:
@pytest.mark.parametrize('path_type', ('path', 'meta_path'))
class TestSysPathsSnapshot:
class TestSysPathsSnapshot(object):
other_path = {
'path': 'meta_path',
'meta_path': 'path'}


@ -1,7 +1,6 @@
from __future__ import absolute_import, division, print_function
import warnings
import re
import py
import pytest
from _pytest.recwarn import WarningsRecorder
@ -24,9 +23,9 @@ class TestWarningsRecorderChecker(object):
rec = WarningsRecorder()
with rec:
assert not rec.list
py.std.warnings.warn_explicit("hello", UserWarning, "xyz", 13)
warnings.warn_explicit("hello", UserWarning, "xyz", 13)
assert len(rec.list) == 1
py.std.warnings.warn(DeprecationWarning("hello"))
warnings.warn(DeprecationWarning("hello"))
assert len(rec.list) == 2
warn = rec.pop()
assert str(warn.message) == "hello"
@ -64,14 +63,14 @@ class TestDeprecatedCall(object):
def dep(self, i, j=None):
if i == 0:
py.std.warnings.warn("is deprecated", DeprecationWarning,
stacklevel=1)
warnings.warn("is deprecated", DeprecationWarning,
stacklevel=1)
return 42
def dep_explicit(self, i):
if i == 0:
py.std.warnings.warn_explicit("dep_explicit", category=DeprecationWarning,
filename="hello", lineno=3)
warnings.warn_explicit("dep_explicit", category=DeprecationWarning,
filename="hello", lineno=3)
def test_deprecated_call_raises(self):
with pytest.raises(AssertionError) as excinfo:
@ -86,16 +85,16 @@ class TestDeprecatedCall(object):
assert ret == 42
def test_deprecated_call_preserves(self):
onceregistry = py.std.warnings.onceregistry.copy()
filters = py.std.warnings.filters[:]
warn = py.std.warnings.warn
warn_explicit = py.std.warnings.warn_explicit
onceregistry = warnings.onceregistry.copy()
filters = warnings.filters[:]
warn = warnings.warn
warn_explicit = warnings.warn_explicit
self.test_deprecated_call_raises()
self.test_deprecated_call()
assert onceregistry == py.std.warnings.onceregistry
assert filters == py.std.warnings.filters
assert warn is py.std.warnings.warn
assert warn_explicit is py.std.warnings.warn_explicit
assert onceregistry == warnings.onceregistry
assert filters == warnings.filters
assert warn is warnings.warn
assert warn_explicit is warnings.warn_explicit
def test_deprecated_explicit_call_raises(self):
with pytest.raises(AssertionError):


@ -2,10 +2,12 @@
from __future__ import absolute_import, division, print_function
import _pytest._code
import inspect
import os
import py
import pytest
import sys
import types
from _pytest import runner, main, outcomes
@ -404,10 +406,10 @@ reporttypes = [
@pytest.mark.parametrize('reporttype', reporttypes, ids=[x.__name__ for x in reporttypes])
def test_report_extra_parameters(reporttype):
if hasattr(py.std.inspect, 'signature'):
args = list(py.std.inspect.signature(reporttype.__init__).parameters.keys())[1:]
if hasattr(inspect, 'signature'):
args = list(inspect.signature(reporttype.__init__).parameters.keys())[1:]
else:
args = py.std.inspect.getargspec(reporttype.__init__)[0][1:]
args = inspect.getargspec(reporttype.__init__)[0][1:]
basekw = dict.fromkeys(args, [])
report = reporttype(newthing=1, **basekw)
assert report.newthing == 1
@ -576,10 +578,10 @@ def test_importorskip(monkeypatch):
importorskip("asdlkj")
try:
sys = importorskip("sys")
assert sys == py.std.sys
sysmod = importorskip("sys")
assert sysmod is sys
# path = pytest.importorskip("os.path")
# assert path == py.std.os.path
# assert path == os.path
excinfo = pytest.raises(pytest.skip.Exception, f)
path = py.path.local(excinfo.getrepr().reprcrash.path)
# check that importorskip reports the actual call
@ -587,7 +589,7 @@ def test_importorskip(monkeypatch):
assert path.purebasename == "test_runner"
pytest.raises(SyntaxError, "pytest.importorskip('x y z')")
pytest.raises(SyntaxError, "pytest.importorskip('x=y')")
mod = py.std.types.ModuleType("hello123")
mod = types.ModuleType("hello123")
mod.__version__ = "1.3"
monkeypatch.setitem(sys.modules, "hello123", mod)
pytest.raises(pytest.skip.Exception, """
@ -607,7 +609,7 @@ def test_importorskip_imports_last_module_part():
def test_importorskip_dev_module(monkeypatch):
try:
mod = py.std.types.ModuleType("mockmodule")
mod = types.ModuleType("mockmodule")
mod.__version__ = '0.13.0.dev-43290'
monkeypatch.setitem(sys.modules, 'mockmodule', mod)
mod2 = pytest.importorskip('mockmodule', minversion='0.12.0')
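The ``sys`` → ``sysmod`` rename above stops ``importorskip`` from shadowing the
module-level ``sys`` import: it returns the imported module (or skips the
calling test when the module is unavailable), so the result should be bound to
a non-conflicting name. Typical usage:

    import pytest

    # skips the calling test unless lxml is importable at the given version
    lxml = pytest.importorskip('lxml', minversion='3.0')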


@ -326,7 +326,7 @@ def test_repr_python_version(monkeypatch):
try:
monkeypatch.setattr(sys, 'version_info', (2, 5, 1, 'final', 0))
assert repr_pythonversion() == "2.5.1-final-0"
py.std.sys.version_info = x = (2, 3)
sys.version_info = x = (2, 3)
assert repr_pythonversion() == str(x)
finally:
monkeypatch.undo() # do this early as pytest can get confused
@ -475,11 +475,11 @@ class TestTerminalFunctional(object):
pass
""")
result = testdir.runpytest()
verinfo = ".".join(map(str, py.std.sys.version_info[:3]))
verinfo = ".".join(map(str, sys.version_info[:3]))
result.stdout.fnmatch_lines([
"*===== test session starts ====*",
"platform %s -- Python %s*pytest-%s*py-%s*pluggy-%s" % (
py.std.sys.platform, verinfo,
sys.platform, verinfo,
pytest.__version__, py.__version__, pluggy.__version__),
"*test_header_trailer_info.py .*",
"=* 1 passed*in *.[0-9][0-9] seconds *=",
@ -966,7 +966,7 @@ def test_no_trailing_whitespace_after_inifile_word(testdir):
assert 'inifile: tox.ini\n' in result.stdout.str()
class TestProgress:
class TestProgress(object):
@pytest.fixture
def many_tests_files(self, testdir):
@ -1047,7 +1047,7 @@ class TestProgress:
])
class TestProgressWithTeardown:
class TestProgressWithTeardown(object):
"""Ensure we show the correct percentages for tests that fail during teardown (#3088)"""
@pytest.fixture