Merge branch 'release-3.2.0'
commit 0a15edd573
@@ -17,9 +17,11 @@ env:
     - TOXENV=py27-pexpect
     - TOXENV=py27-xdist
     - TOXENV=py27-trial
+    - TOXENV=py27-numpy
     - TOXENV=py35-pexpect
     - TOXENV=py35-xdist
     - TOXENV=py35-trial
+    - TOXENV=py35-numpy
     - TOXENV=py27-nobyte
     - TOXENV=doctesting
     - TOXENV=freeze

AUTHORS | 4
@@ -84,6 +84,7 @@ John Towler
 Jon Sonesen
 Jonas Obrist
 Jordan Guymon
 Jordan Moldow
 Joshua Bronson
 Jurko Gospodnetić
 Justyna Janczyszyn
@@ -91,6 +92,7 @@ Kale Kundert
 Katarzyna Jachim
 Kevin Cox
 Kodi B. Arfer
 Lawrence Mitchell
 Lee Kamentsky
 Lev Maximov
 Llandy Riveron Del Risco
@@ -99,6 +101,7 @@ Lukas Bednar
 Luke Murphy
 Maciek Fijalkowski
 Maho
 Maik Figura
 Mandeep Bhutani
 Manuel Krebber
 Marc Schlaich
@@ -122,6 +125,7 @@ Michael Seifert
 Michal Wajszczuk
 Mihai Capotă
 Mike Lundy
 Nathaniel Waisbrot
 Ned Batchelder
 Neven Mundar
 Nicolas Delaby

CHANGELOG.rst | 166
@@ -8,6 +8,172 @@

.. towncrier release notes start

Pytest 3.2.0 (2017-07-30)
=========================

Deprecations and Removals
-------------------------

- ``pytest.approx`` no longer supports ``>``, ``>=``, ``<`` and ``<=``
  operators to avoid surprising/inconsistent behavior. See `the docs
  <https://docs.pytest.org/en/latest/builtin.html#pytest.approx>`_ for more
  information. (`#2003 <https://github.com/pytest-dev/pytest/issues/2003>`_)

- All behavior specific to old-style classes in pytest's API is considered
  deprecated at this point and will be removed in a future release. This
  affects Python 2 users only, and only in rare situations. (`#2147
  <https://github.com/pytest-dev/pytest/issues/2147>`_)

- A deprecation warning is now raised when using marks for parameters
  in ``pytest.mark.parametrize``. Use ``pytest.param`` to apply marks to
  parameters instead. (`#2427 <https://github.com/pytest-dev/pytest/issues/2427>`_)
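
  For illustration, a hypothetical parametrized test migrated to the new
  style (``pytest.param`` is the replacement the warning points to)::

      import pytest

      @pytest.mark.parametrize('n', [
          1,
          # old style was: pytest.mark.xfail(0)
          pytest.param(0, marks=pytest.mark.xfail),
      ])
      def test_positive(n):
          assert n > 0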

Features
--------

- Add support for numpy arrays (and dicts) to approx. (`#1994
  <https://github.com/pytest-dev/pytest/issues/1994>`_)
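
  A minimal sketch of the new behaviour (assuming ``numpy`` is installed)::

      import numpy as np
      import pytest

      # dicts are compared per key, numpy arrays element-wise
      assert {'pi': 3.14159265} == pytest.approx({'pi': 3.141592653589793}, rel=1e-6)
      assert np.array([0.1 + 0.2]) == pytest.approx(np.array([0.3]))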

- Now test function objects have a ``pytestmark`` attribute containing a list
  of marks applied directly to the test function, as opposed to marks inherited
  from parent classes or modules. (`#2516 <https://github.com/pytest-dev/pytest/issues/2516>`_)
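
  A minimal sketch; only the directly-applied mark shows up on the function
  (mark names here are hypothetical)::

      import pytest

      pytestmark = pytest.mark.slow  # module-level mark, not stored on functions

      @pytest.mark.flaky
      def test_remote():
          pass

      # test_remote.pytestmark contains only the 'flaky' mark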

- Collection ignores local virtualenvs by default; ``--collect-in-virtualenv``
  overrides this behavior. (`#2518 <https://github.com/pytest-dev/pytest/issues/2518>`_)

- Allow class methods decorated as ``@staticmethod`` to be candidates for
  collection as a test function. (Only for Python 2.7 and above. Python 2.6
  will still ignore static methods.) (`#2528 <https://github.com/pytest-dev/pytest/issues/2528>`_)
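
  A minimal sketch of a test that is now collected::

      class TestArithmetic(object):
          @staticmethod
          def test_add():  # collected on Python 2.7+ even as a staticmethod
              assert 1 + 1 == 2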

- Introduce ``mark.with_args`` in order to allow passing functions/classes as
  the sole argument to marks. (`#2540 <https://github.com/pytest-dev/pytest/issues/2540>`_)
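
  A bare call such as ``pytest.mark.handler(func)`` would treat a sole callable
  argument as the decoration target; ``with_args`` avoids that ambiguity
  (the mark name below is hypothetical)::

      import pytest

      def slow_handler():
          pass

      # slow_handler is stored as a mark *argument*, not decorated
      my_mark = pytest.mark.handler.with_args(slow_handler)

      @my_mark
      def test_dispatch():
          pass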

- New ``cache_dir`` ini option: sets the directory where the contents of the
  cache plugin are stored. The directory may be a relative or absolute path:
  a relative path is created relative to ``rootdir``, an absolute path is used
  as-is. Additionally, the path may contain environment variables, which are
  expanded at runtime. (`#2543 <https://github.com/pytest-dev/pytest/issues/2543>`_)
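
  For example, in a hypothetical ``pytest.ini`` (``$TMPDIR`` is expanded at
  runtime)::

      [pytest]
      cache_dir = $TMPDIR/pytest_cache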

- Introduce the ``PYTEST_CURRENT_TEST`` environment variable, which is set to
  the ``nodeid`` and stage (``setup``, ``call`` or ``teardown``) of the test
  currently being executed. See the `documentation
  <https://docs.pytest.org/en/latest/example/simple.html#pytest-current-test-environment-variable>`_
  for more info. (`#2583 <https://github.com/pytest-dev/pytest/issues/2583>`_)
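
  A minimal sketch of reading it from an autouse fixture (the value has the
  form ``"<nodeid> (<stage>)"``)::

      import os
      import pytest

      @pytest.fixture(autouse=True)
      def log_current_test():
          # e.g. "test_module.py::test_foo (setup)"
          print(os.environ.get("PYTEST_CURRENT_TEST"))
          yield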

- Introduced the ``@pytest.mark.filterwarnings`` mark, which allows overriding
  the warnings filter on a per-test, per-class or per-module level. See the `docs
  <https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings>`_
  for more information. (`#2598 <https://github.com/pytest-dev/pytest/issues/2598>`_)
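
  For example, to silence deprecation warnings for a single test::

      import warnings
      import pytest

      @pytest.mark.filterwarnings("ignore::DeprecationWarning")
      def test_legacy_api():
          warnings.warn("old API", DeprecationWarning)  # not reported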

- ``--last-failed`` now remembers forever when a test has failed and only
  forgets it if it passes again. This makes it easy to fix a test suite by
  selectively running files and fixing tests incrementally. (`#2621
  <https://github.com/pytest-dev/pytest/issues/2621>`_)

- New ``pytest_report_collectionfinish`` hook which allows plugins to add
  messages to the terminal reporting after collection has finished
  successfully. (`#2622 <https://github.com/pytest-dev/pytest/issues/2622>`_)
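
  A minimal sketch of the hook in a ``conftest.py``::

      def pytest_report_collectionfinish(config, startdir, items):
          return "collected from %s: %d item(s)" % (startdir, len(items))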

- Added support for `PEP 415's <https://www.python.org/dev/peps/pep-0415/>`_
  ``Exception.__suppress_context__``. Now, if a ``raise exception from None``
  is caught by pytest, pytest will no longer chain the context in the test
  report. The behavior now matches Python's traceback behavior. (`#2631
  <https://github.com/pytest-dev/pytest/issues/2631>`_)
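
  A quick sketch of the Python 3 syntax this honors::

      def test_convert():
          try:
              int("not a number")
          except ValueError:
              # __suppress_context__ is set, so the failure report now
              # shows only the TypeError, without the chained ValueError
              raise TypeError("bad input") from None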

- Exceptions raised by ``pytest.fail``, ``pytest.skip`` and ``pytest.xfail``
  now subclass ``BaseException``, making them harder to catch unintentionally
  in normal code. (`#580 <https://github.com/pytest-dev/pytest/issues/580>`_)
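
  A minimal sketch of why this matters::

      import pytest

      def test_guarded():
          try:
              pytest.skip("unsupported platform")
          except Exception:
              # never reached: Skipped now derives from BaseException and
              # is no longer swallowed by a broad ``except Exception``
              pass
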
Bug Fixes
---------

- Set ``stdin`` to a closed ``PIPE`` in ``pytester.py.Testdir.popen()`` to
  avoid unwanted interactive ``pdb``. (`#2023 <https://github.com/pytest-dev/pytest/issues/2023>`_)

- Add missing ``encoding`` attribute to ``sys.std*`` streams when using
  ``capsys`` capture mode. (`#2375 <https://github.com/pytest-dev/pytest/issues/2375>`_)

- Fix terminal color changing to black on Windows if ``colorama`` is imported
  in a ``conftest.py`` file. (`#2510 <https://github.com/pytest-dev/pytest/issues/2510>`_)

- Fix line number when reporting summary of skipped tests. (`#2548
  <https://github.com/pytest-dev/pytest/issues/2548>`_)

- capture: ensure that ``EncodedFile.name`` is a string. (`#2555
  <https://github.com/pytest-dev/pytest/issues/2555>`_)

- The options ``--fixtures`` and ``--fixtures-per-test`` will now keep
  indentation within docstrings. (`#2574 <https://github.com/pytest-dev/pytest/issues/2574>`_)

- Doctest line numbers are now reported correctly, fixing `pytest-sugar#122
  <https://github.com/Frozenball/pytest-sugar/issues/122>`_. (`#2610
  <https://github.com/pytest-dev/pytest/issues/2610>`_)

- Fix non-determinism in order of fixture collection. Adds a new dependency
  (``ordereddict``) for Python 2.6. (`#920 <https://github.com/pytest-dev/pytest/issues/920>`_)


Improved Documentation
----------------------

- Clarify ``pytest_configure`` hook call order. (`#2539
  <https://github.com/pytest-dev/pytest/issues/2539>`_)

- Extend documentation for testing plugin code with the ``pytester`` plugin.
  (`#971 <https://github.com/pytest-dev/pytest/issues/971>`_)


Trivial/Internal Changes
------------------------

- Update help message for ``--strict`` to make it clear it only deals with
  unregistered markers, not warnings. (`#2444 <https://github.com/pytest-dev/pytest/issues/2444>`_)

- Internal code move: moved the code for ``pytest.approx``/``pytest.raises`` to
  their own files in order to cut down the size of ``python.py``. (`#2489
  <https://github.com/pytest-dev/pytest/issues/2489>`_)

- Renamed the utility function ``_pytest.compat._escape_strings`` to
  ``_ascii_escaped`` to better communicate the function's purpose. (`#2533
  <https://github.com/pytest-dev/pytest/issues/2533>`_)

- Improve error message for CollectError with skip/skipif. (`#2546
  <https://github.com/pytest-dev/pytest/issues/2546>`_)

- Emit the warning about ``yield`` tests being deprecated only once per
  generator. (`#2562 <https://github.com/pytest-dev/pytest/issues/2562>`_)

- Ensure the final collected line doesn't include artifacts of the previous
  write. (`#2571 <https://github.com/pytest-dev/pytest/issues/2571>`_)

- Fixed all flake8 errors and warnings. (`#2581 <https://github.com/pytest-dev/pytest/issues/2581>`_)

- Added a ``fix-lint`` tox environment to run automatic pep8 fixes on the code.
  (`#2582 <https://github.com/pytest-dev/pytest/issues/2582>`_)

- Turn warnings into errors in pytest's own test suite in order to catch
  regressions due to deprecations more promptly. (`#2588
  <https://github.com/pytest-dev/pytest/issues/2588>`_)

- Show multiple issue links in CHANGELOG entries. (`#2620
  <https://github.com/pytest-dev/pytest/issues/2620>`_)


Pytest 3.1.3 (2017-07-03)
=========================

@@ -679,7 +679,7 @@ class FormattedExcinfo(object):
                 e = e.__cause__
                 excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
                 descr = 'The above exception was the direct cause of the following exception:'
-            elif e.__context__ is not None:
+            elif (e.__context__ is not None and not e.__suppress_context__):
                 e = e.__context__
                 excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
                 descr = 'During handling of the above exception, another exception occurred:'

@@ -8,13 +8,14 @@ from __future__ import absolute_import, division, print_function
 import py
 import pytest
 import json
+import os
 from os.path import sep as _sep, altsep as _altsep


 class Cache(object):
     def __init__(self, config):
         self.config = config
-        self._cachedir = config.rootdir.join(".cache")
+        self._cachedir = Cache.cache_dir_from_config(config)
         self.trace = config.trace.root.get("cache")
         if config.getvalue("cacheclear"):
             self.trace("clearing cachedir")
@@ -22,6 +23,16 @@ class Cache(object):
             self._cachedir.remove()
         self._cachedir.mkdir()

+    @staticmethod
+    def cache_dir_from_config(config):
+        cache_dir = config.getini("cache_dir")
+        cache_dir = os.path.expanduser(cache_dir)
+        cache_dir = os.path.expandvars(cache_dir)
+        if os.path.isabs(cache_dir):
+            return py.path.local(cache_dir)
+        else:
+            return config.rootdir.join(cache_dir)
+
     def makedir(self, name):
         """ return a directory path object with the given name. If the
         directory does not yet exist, it will be created. You can use it
@@ -94,27 +105,26 @@ class LFPlugin:
         self.config = config
         active_keys = 'lf', 'failedfirst'
         self.active = any(config.getvalue(key) for key in active_keys)
-        if self.active:
-            self.lastfailed = config.cache.get("cache/lastfailed", {})
-        else:
-            self.lastfailed = {}
+        self.lastfailed = config.cache.get("cache/lastfailed", {})
+        self._previously_failed_count = None

-    def pytest_report_header(self):
+    def pytest_report_collectionfinish(self):
         if self.active:
-            if not self.lastfailed:
+            if not self._previously_failed_count:
                 mode = "run all (no recorded failures)"
             else:
-                mode = "rerun last %d failures%s" % (
-                    len(self.lastfailed),
-                    " first" if self.config.getvalue("failedfirst") else "")
+                noun = 'failure' if self._previously_failed_count == 1 else 'failures'
+                suffix = " first" if self.config.getvalue("failedfirst") else ""
+                mode = "rerun previous {count} {noun}{suffix}".format(
+                    count=self._previously_failed_count, suffix=suffix, noun=noun
+                )
             return "run-last-failure: %s" % mode

     def pytest_runtest_logreport(self, report):
-        if report.failed and "xfail" not in report.keywords:
+        if (report.when == 'call' and report.passed) or report.skipped:
+            self.lastfailed.pop(report.nodeid, None)
+        elif report.failed:
             self.lastfailed[report.nodeid] = True
-        elif not report.failed:
-            if report.when == "call":
-                self.lastfailed.pop(report.nodeid, None)

     def pytest_collectreport(self, report):
         passed = report.outcome in ('passed', 'skipped')
@@ -136,11 +146,12 @@ class LFPlugin:
                     previously_failed.append(item)
                 else:
                     previously_passed.append(item)
-            if not previously_failed and previously_passed:
+            self._previously_failed_count = len(previously_failed)
+            if not previously_failed:
                 # running a subset of all tests with recorded failures outside
                 # of the set of tests currently executing
-                pass
-            elif self.config.getvalue("lf"):
+                return
+            if self.config.getvalue("lf"):
                 items[:] = previously_failed
                 config.hook.pytest_deselected(items=previously_passed)
             else:
@@ -150,8 +161,9 @@ class LFPlugin:
         config = self.config
         if config.getvalue("cacheshow") or hasattr(config, "slaveinput"):
             return
-        prev_failed = config.cache.get("cache/lastfailed", None) is not None
-        if (session.testscollected and prev_failed) or self.lastfailed:
+
+        saved_lastfailed = config.cache.get("cache/lastfailed", {})
+        if saved_lastfailed != self.lastfailed:
             config.cache.set("cache/lastfailed", self.lastfailed)


@@ -172,6 +184,9 @@ def pytest_addoption(parser):
     group.addoption(
         '--cache-clear', action='store_true', dest="cacheclear",
         help="remove all cache contents at start of test run.")
+    parser.addini(
+        "cache_dir", default='.cache',
+        help="cache directory path.")


 def pytest_cmdline_main(config):

@@ -123,6 +123,7 @@ if sys.version_info[:2] == (2, 6):
 if _PY3:
     import codecs
     imap = map
+    izip = zip
     STRING_TYPES = bytes, str
     UNICODE_TYPES = str,

@@ -158,7 +159,7 @@ else:
     STRING_TYPES = bytes, str, unicode
     UNICODE_TYPES = unicode,

-    from itertools import imap  # NOQA
+    from itertools import imap, izip  # NOQA


 def _ascii_escaped(val):
     """In py2 bytes and str are the same type, so return if it's a bytes

@@ -7,6 +7,11 @@ be removed when the time comes.
 """
 from __future__ import absolute_import, division, print_function


+class RemovedInPytest4Warning(DeprecationWarning):
+    """warning class for features removed in pytest 4.0"""
+
+
 MAIN_STR_ARGS = 'passing a string to pytest.main() is deprecated, ' \
     'pass a list of arguments instead.'

@@ -22,3 +27,13 @@ SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead'
 GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"

 RESULT_LOG = '--result-log is deprecated and scheduled for removal in pytest 4.0'
+
+MARK_INFO_ATTRIBUTE = RemovedInPytest4Warning(
+    "MarkInfo objects are deprecated as they contain the merged marks"
+)
+
+MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning(
+    "Applying marks directly to parameters is deprecated,"
+    " please use pytest.param(..., marks=...) instead.\n"
+    "For more details, see: https://docs.pytest.org/en/latest/parametrize.html"
+)

@@ -16,9 +16,14 @@ from _pytest.compat import (
     getlocation, getfuncargnames,
     safe_getattr,
 )
-from _pytest.runner import fail
+from _pytest.outcomes import fail, TEST_OUTCOME
 from _pytest.compat import FuncargnamesCompatAttr

+if sys.version_info[:2] == (2, 6):
+    from ordereddict import OrderedDict
+else:
+    from collections import OrderedDict
+

 def pytest_sessionstart(session):
     import _pytest.python
@@ -121,7 +126,7 @@ def getfixturemarker(obj):
     exceptions."""
     try:
         return getattr(obj, "_pytestfixturefunction", None)
-    except Exception:
+    except TEST_OUTCOME:
         # some objects raise errors like request (from flask import request)
         # we don't expect them to be fixture functions
         return None
@@ -136,10 +141,10 @@ def get_parametrized_fixture_keys(item, scopenum):
     except AttributeError:
         pass
     else:
-        # cs.indictes.items() is random order of argnames but
-        # then again different functions (items) can change order of
-        # arguments so it doesn't matter much probably
-        for argname, param_index in cs.indices.items():
+        # cs.indices.items() is random order of argnames. Need to
+        # sort this so that different calls to
+        # get_parametrized_fixture_keys will be deterministic.
+        for argname, param_index in sorted(cs.indices.items()):
             if cs._arg2scopenum[argname] != scopenum:
                 continue
             if scopenum == 0:  # session
@@ -161,7 +166,7 @@ def reorder_items(items):
     for scopenum in range(0, scopenum_function):
         argkeys_cache[scopenum] = d = {}
         for item in items:
-            keys = set(get_parametrized_fixture_keys(item, scopenum))
+            keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
             if keys:
                 d[item] = keys
     return reorder_items_atscope(items, set(), argkeys_cache, 0)
@@ -196,9 +201,9 @@ def slice_items(items, ignore, scoped_argkeys_cache):
         for i, item in enumerate(it):
             argkeys = scoped_argkeys_cache.get(item)
             if argkeys is not None:
-                argkeys = argkeys.difference(ignore)
-                if argkeys:  # found a slicing key
-                    slicing_argkey = argkeys.pop()
+                newargkeys = OrderedDict.fromkeys(k for k in argkeys if k not in ignore)
+                if newargkeys:  # found a slicing key
+                    slicing_argkey, _ = newargkeys.popitem()
                     items_before = items[:i]
                     items_same = [item]
                     items_other = []
@@ -811,7 +816,7 @@ def pytest_fixture_setup(fixturedef, request):
     my_cache_key = request.param_index
     try:
         result = call_fixture_func(fixturefunc, request, kwargs)
-    except Exception:
+    except TEST_OUTCOME:
         fixturedef.cached_result = (None, my_cache_key, sys.exc_info())
         raise
     fixturedef.cached_result = (result, my_cache_key, None)

@@ -337,7 +337,10 @@ def pytest_assertrepr_compare(config, op, left, right):


 def pytest_report_header(config, startdir):
-    """ return a string to be displayed as header info for terminal reporting.
+    """ return a string or list of strings to be displayed as header info for terminal reporting.
+
+    :param config: the pytest config object.
+    :param startdir: py.path object with the starting dir

     .. note::

@@ -347,6 +350,20 @@ def pytest_report_header(config, startdir):
     """


+def pytest_report_collectionfinish(config, startdir, items):
+    """
+    .. versionadded:: 3.2
+
+    return a string or list of strings to be displayed after collection has finished successfully.
+
+    These strings will be displayed after the standard "collected X items" message.
+
+    :param config: the pytest config object.
+    :param startdir: py.path object with the starting dir
+    :param items: list of pytest items that are going to be executed; this list should not be modified.
+    """
+
+
 @hookspec(firstresult=True)
 def pytest_report_teststatus(report):
     """ return result-category, shortletter and verbose word for reporting.

@@ -14,7 +14,8 @@ except ImportError:
     from UserDict import DictMixin as MappingMixin

 from _pytest.config import directory_arg, UsageError, hookimpl
-from _pytest.runner import collect_one_node, exit
+from _pytest.runner import collect_one_node
+from _pytest.outcomes import exit

 tracebackcutdir = py.path.local(_pytest.__file__).dirpath()

@@ -72,6 +73,9 @@ def pytest_addoption(parser):
     group.addoption('--keepduplicates', '--keep-duplicates', action="store_true",
                     dest="keepduplicates", default=False,
                     help="Keep duplicate tests.")
+    group.addoption('--collect-in-virtualenv', action='store_true',
+                    dest='collect_in_virtualenv', default=False,
+                    help="Don't ignore tests in a local virtualenv directory")

     group = parser.getgroup("debugconfig",
                             "test session debugging and configuration")
@@ -168,6 +172,17 @@ def pytest_runtestloop(session):
     return True


+def _in_venv(path):
+    """Attempts to detect if ``path`` is the root of a Virtual Environment by
+    checking for the existence of the appropriate activate script"""
+    bindir = path.join('Scripts' if sys.platform.startswith('win') else 'bin')
+    if not bindir.exists():
+        return False
+    activates = ('activate', 'activate.csh', 'activate.fish',
+                 'Activate', 'Activate.bat', 'Activate.ps1')
+    return any([fname.basename in activates for fname in bindir.listdir()])
+
+
 def pytest_ignore_collect(path, config):
     ignore_paths = config._getconftest_pathlist("collect_ignore", path=path.dirpath())
     ignore_paths = ignore_paths or []
@@ -178,6 +193,10 @@ def pytest_ignore_collect(path, config):
     if py.path.local(path) in ignore_paths:
         return True

+    allow_in_venv = config.getoption("collect_in_virtualenv")
+    if _in_venv(path) and not allow_in_venv:
+        return True
+
     # Skip duplicate paths.
     keepduplicates = config.getoption("keepduplicates")
     duplicate_paths = config.pluginmanager._duplicatepaths

_pytest/mark.py | 111
@@ -2,13 +2,21 @@
 from __future__ import absolute_import, division, print_function

 import inspect
+import warnings
 from collections import namedtuple
 from operator import attrgetter
+from .compat import imap
+from .deprecated import MARK_PARAMETERSET_UNPACKING


-def alias(name):
-    return property(attrgetter(name), doc='alias for ' + name)
+def alias(name, warning=None):
+    getter = attrgetter(name)
+
+    def warned(self):
+        warnings.warn(warning, stacklevel=2)
+        return getter(self)
+
+    return property(getter if warning is None else warned, doc='alias for ' + name)


 class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
@@ -54,6 +62,9 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
         if legacy_force_tuple:
             argval = argval,

+        if newmarks:
+            warnings.warn(MARK_PARAMETERSET_UNPACKING)
+
         return cls(argval, marks=newmarks, id=None)

     @property
@@ -324,6 +335,17 @@ class MarkDecorator:
     def __repr__(self):
         return "<MarkDecorator %r>" % (self.mark,)

+    def with_args(self, *args, **kwargs):
+        """ return a MarkDecorator with extra arguments added
+
+        unlike call this can be used even if the sole argument is a callable/class
+
+        :return: MarkDecorator
+        """
+
+        mark = Mark(self.name, args, kwargs)
+        return self.__class__(self.mark.combined_with(mark))
+
     def __call__(self, *args, **kwargs):
         """ if passed a single callable argument: decorate it with mark info.
             otherwise add *args/**kwargs in-place to mark information. """
@@ -332,27 +354,49 @@ class MarkDecorator:
         is_class = inspect.isclass(func)
         if len(args) == 1 and (istestfunc(func) or is_class):
             if is_class:
-                if hasattr(func, 'pytestmark'):
-                    mark_list = func.pytestmark
-                    if not isinstance(mark_list, list):
-                        mark_list = [mark_list]
-                    # always work on a copy to avoid updating pytestmark
-                    # from a superclass by accident
-                    mark_list = mark_list + [self]
-                    func.pytestmark = mark_list
-                else:
-                    func.pytestmark = [self]
+                store_mark(func, self.mark)
             else:
-                holder = getattr(func, self.name, None)
-                if holder is None:
-                    holder = MarkInfo(self.mark)
-                    setattr(func, self.name, holder)
-                else:
-                    holder.add_mark(self.mark)
+                store_legacy_markinfo(func, self.mark)
+                store_mark(func, self.mark)
             return func
-        mark = Mark(self.name, args, kwargs)
-        return self.__class__(self.mark.combined_with(mark))
+        return self.with_args(*args, **kwargs)
+
+
+def get_unpacked_marks(obj):
+    """
+    obtain the unpacked marks that are stored on an object
+    """
+    mark_list = getattr(obj, 'pytestmark', [])
+
+    if not isinstance(mark_list, list):
+        mark_list = [mark_list]
+    return [
+        getattr(mark, 'mark', mark)  # unpack MarkDecorator
+        for mark in mark_list
+    ]
+
+
+def store_mark(obj, mark):
+    """store a Mark on an object
+    this is used to implement the Mark declarations/decorators correctly
+    """
+    assert isinstance(mark, Mark), mark
+    # always reassign name to avoid updating pytestmark
+    # in a reference that was only borrowed
+    obj.pytestmark = get_unpacked_marks(obj) + [mark]
+
+
+def store_legacy_markinfo(func, mark):
+    """create the legacy MarkInfo objects and put them onto the function
+    """
+    if not isinstance(mark, Mark):
+        raise TypeError("got {mark!r} instead of a Mark".format(mark=mark))
+    holder = getattr(func, mark.name, None)
+    if holder is None:
+        holder = MarkInfo(mark)
+        setattr(func, mark.name, holder)
+    else:
+        holder.add_mark(mark)


 class Mark(namedtuple('Mark', 'name, args, kwargs')):
@@ -390,3 +434,30 @@ class MarkInfo(object):


 MARK_GEN = MarkGenerator()
+
+
+def _marked(func, mark):
+    """ Returns True if :func: is already marked with :mark:, False otherwise.
+    This can happen if marker is applied to class and the test file is
+    invoked more than once.
+    """
+    try:
+        func_mark = getattr(func, mark.name)
+    except AttributeError:
+        return False
+    return mark.args == func_mark.args and mark.kwargs == func_mark.kwargs
+
+
+def transfer_markers(funcobj, cls, mod):
+    """
+    this function transfers class level markers and module level markers
+    into function level markinfo objects
+
+    this is the main reason why marks are so broken
+    the resolution will involve phasing out function level MarkInfo objects
+
+    """
+    for obj in (cls, mod):
+        for mark in get_unpacked_marks(obj):
+            if not _marked(funcobj, mark):
+                store_legacy_markinfo(funcobj, mark)

@@ -0,0 +1,140 @@
"""
exception classes and constants handling test outcomes
as well as functions creating them
"""
from __future__ import absolute_import, division, print_function
import py
import sys


class OutcomeException(BaseException):
    """ OutcomeException and its subclass instances indicate and
    contain info about test and collection outcomes.
    """
    def __init__(self, msg=None, pytrace=True):
        BaseException.__init__(self, msg)
        self.msg = msg
        self.pytrace = pytrace

    def __repr__(self):
        if self.msg:
            val = self.msg
            if isinstance(val, bytes):
                val = py._builtin._totext(val, errors='replace')
            return val
        return "<%s instance>" % (self.__class__.__name__,)
    __str__ = __repr__


TEST_OUTCOME = (OutcomeException, Exception)


class Skipped(OutcomeException):
    # XXX hackish: on 3k we fake to live in the builtins
    # in order to have Skipped exception printing shorter/nicer
    __module__ = 'builtins'

    def __init__(self, msg=None, pytrace=True, allow_module_level=False):
        OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
        self.allow_module_level = allow_module_level


class Failed(OutcomeException):
    """ raised from an explicit call to pytest.fail() """
    __module__ = 'builtins'


class Exit(KeyboardInterrupt):
    """ raised for immediate program exits (no tracebacks/summaries)"""
    def __init__(self, msg="unknown reason"):
        self.msg = msg
        KeyboardInterrupt.__init__(self, msg)


# exposed helper methods


def exit(msg):
    """ exit testing process as if KeyboardInterrupt was triggered. """
    __tracebackhide__ = True
    raise Exit(msg)


exit.Exception = Exit


def skip(msg=""):
    """ skip an executing test with the given message. Note: it's usually
    better to use the pytest.mark.skipif marker to declare a test to be
    skipped under certain conditions like mismatching platforms or
    dependencies. See the pytest_skipping plugin for details.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg)


skip.Exception = Skipped


def fail(msg="", pytrace=True):
    """ explicitly fail a currently-executing test with the given message.

    :arg pytrace: if false the msg represents the full failure information
        and no python traceback will be reported.
    """
    __tracebackhide__ = True
    raise Failed(msg=msg, pytrace=pytrace)


fail.Exception = Failed


class XFailed(fail.Exception):
    """ raised from an explicit call to pytest.xfail() """


def xfail(reason=""):
    """ xfail an executing test or setup functions with the given reason."""
    __tracebackhide__ = True
    raise XFailed(reason)


xfail.Exception = XFailed


def importorskip(modname, minversion=None):
    """ return the imported module if it has at least "minversion" as its
    __version__ attribute. If no minversion is specified, a skip
    is only triggered if the module cannot be imported.
    """
    import warnings
    __tracebackhide__ = True
    compile(modname, '', 'eval')  # to catch syntax errors
    should_skip = False

    with warnings.catch_warnings():
        # make sure to ignore ImportWarnings that might happen because
        # of existing directories with the same name we're trying to
        # import but without a __init__.py file
        warnings.simplefilter('ignore')
        try:
            __import__(modname)
        except ImportError:
            # Do not raise chained exception here (#1485)
            should_skip = True
    if should_skip:
        raise Skipped("could not import %r" % (modname,), allow_module_level=True)
    mod = sys.modules[modname]
    if minversion is None:
        return mod
    verattr = getattr(mod, '__version__', None)
    if minversion is not None:
        try:
            from pkg_resources import parse_version as pv
        except ImportError:
            raise Skipped("we have a required version for %r but can not import "
                          "pkg_resources to parse version strings." % (modname,),
                          allow_module_level=True)
        if verattr is None or pv(verattr) < pv(minversion):
            raise Skipped("module %r has __version__ %r, required is: %r" % (
                          modname, verattr, minversion), allow_module_level=True)
    return mod

@@ -6,7 +6,6 @@ import inspect
 import sys
 import os
 import collections
-import math
 from textwrap import dedent
 from itertools import count

@@ -24,7 +23,8 @@ from _pytest.compat import (
     get_real_func, getfslineno, safe_getattr,
     safe_str, getlocation, enum,
 )
-from _pytest.runner import fail
+from _pytest.outcomes import fail
+from _pytest.mark import transfer_markers

 cutdir1 = py.path.local(pluggy.__file__.rstrip("oc"))
 cutdir2 = py.path.local(_pytest.__file__).dirpath()
@@ -275,10 +275,22 @@ class PyCollector(PyobjMixin, main.Collector):
         return self._matches_prefix_or_glob_option('python_classes', name)

     def istestfunction(self, obj, name):
-        return (
-            (self.funcnamefilter(name) or self.isnosetest(obj)) and
-            safe_getattr(obj, "__call__", False) and fixtures.getfixturemarker(obj) is None
-        )
+        if self.funcnamefilter(name) or self.isnosetest(obj):
+            if isinstance(obj, staticmethod):
+                # static methods need to be unwrapped
+                obj = safe_getattr(obj, '__func__', False)
+                if obj is False:
+                    # Python 2.6 wraps in a different way that we won't try to handle
+                    msg = "cannot collect static method %r because " \
+                          "it is not a function (always the case in Python 2.6)"
+                    self.warn(
+                        code="C2", message=msg % name)
+                    return False
+            return (
+                safe_getattr(obj, "__call__", False) and fixtures.getfixturemarker(obj) is None
+            )
+        else:
+            return False

     def istestclass(self, obj, name):
         return self.classnamefilter(name) or self.isnosetest(obj)
|
|||
)
|
||||
|
||||
|
||||
def _marked(func, mark):
|
||||
""" Returns True if :func: is already marked with :mark:, False otherwise.
|
||||
This can happen if marker is applied to class and the test file is
|
||||
invoked more than once.
|
||||
"""
|
||||
try:
|
||||
func_mark = getattr(func, mark.name)
|
||||
except AttributeError:
|
||||
return False
|
||||
return mark.args == func_mark.args and mark.kwargs == func_mark.kwargs
|
||||
|
||||
|
||||
def transfer_markers(funcobj, cls, mod):
|
||||
# XXX this should rather be code in the mark plugin or the mark
|
||||
# plugin should merge with the python plugin.
|
||||
for holder in (cls, mod):
|
||||
try:
|
||||
pytestmark = holder.pytestmark
|
||||
except AttributeError:
|
||||
continue
|
||||
if isinstance(pytestmark, list):
|
||||
for mark in pytestmark:
|
||||
if not _marked(funcobj, mark):
|
||||
mark(funcobj)
|
||||
else:
|
||||
if not _marked(funcobj, pytestmark):
|
||||
pytestmark(funcobj)
|
||||
|
||||
|
||||
class Module(main.File, PyCollector):
|
||||
""" Collector for test classes and functions. """
|
||||
|
||||
|
@@ -1113,438 +1096,6 @@ def write_docstring(tw, doc):
        tw.write(INDENT + line + "\n")


# builtin pytest.raises helper

def raises(expected_exception, *args, **kwargs):
    """
    Assert that a code block/function call raises ``expected_exception``
    and raise a failure exception otherwise.

    This helper produces a ``ExceptionInfo()`` object (see below).

    If using Python 2.5 or above, you may use this function as a
    context manager::

        >>> with raises(ZeroDivisionError):
        ...    1/0

    .. versionchanged:: 2.10

    In the context manager form you may use the keyword argument
    ``message`` to specify a custom failure message::

        >>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
        ...    pass
        Traceback (most recent call last):
          ...
        Failed: Expecting ZeroDivisionError

    .. note::

       When using ``pytest.raises`` as a context manager, it's worthwhile to
       note that normal context manager rules apply and that the exception
       raised *must* be the final line in the scope of the context manager.
       Lines of code after that, within the scope of the context manager will
       not be executed. For example::

           >>> value = 15
           >>> with raises(ValueError) as exc_info:
           ...     if value > 10:
           ...         raise ValueError("value must be <= 10")
           ...     assert exc_info.type == ValueError  # this will not execute

       Instead, the following approach must be taken (note the difference in
       scope)::

           >>> with raises(ValueError) as exc_info:
           ...     if value > 10:
           ...         raise ValueError("value must be <= 10")
           ...
           >>> assert exc_info.type == ValueError

    Or you can use the keyword argument ``match`` to assert that the
    exception matches a text or regex::

        >>> with raises(ValueError, match='must be 0 or None'):
        ...     raise ValueError("value must be 0 or None")

        >>> with raises(ValueError, match=r'must be \d+$'):
        ...     raise ValueError("value must be 42")

    Or you can specify a callable by passing a to-be-called lambda::

        >>> raises(ZeroDivisionError, lambda: 1/0)
        <ExceptionInfo ...>

    or you can specify an arbitrary callable with arguments::

        >>> def f(x): return 1/x
        ...
        >>> raises(ZeroDivisionError, f, 0)
        <ExceptionInfo ...>
        >>> raises(ZeroDivisionError, f, x=0)
        <ExceptionInfo ...>

    A third possibility is to use a string to be executed::

        >>> raises(ZeroDivisionError, "f(0)")
        <ExceptionInfo ...>

    .. autoclass:: _pytest._code.ExceptionInfo
        :members:

    .. note::
        Similar to caught exception objects in Python, explicitly clearing
        local references to returned ``ExceptionInfo`` objects can
        help the Python interpreter speed up its garbage collection.

        Clearing those references breaks a reference cycle
        (``ExceptionInfo`` --> caught exception --> frame stack raising
        the exception --> current frame stack --> local variables -->
        ``ExceptionInfo``) which makes Python keep all objects referenced
        from that cycle (including all local variables in the current
        frame) alive until the next cyclic garbage collection run. See the
        official Python ``try`` statement documentation for more detailed
        information.

    """
    __tracebackhide__ = True
    msg = ("exceptions must be old-style classes or"
           " derived from BaseException, not %s")
    if isinstance(expected_exception, tuple):
        for exc in expected_exception:
            if not isclass(exc):
                raise TypeError(msg % type(exc))
    elif not isclass(expected_exception):
        raise TypeError(msg % type(expected_exception))

    message = "DID NOT RAISE {0}".format(expected_exception)
    match_expr = None

    if not args:
        if "message" in kwargs:
            message = kwargs.pop("message")
        if "match" in kwargs:
            match_expr = kwargs.pop("match")
            message += " matching '{0}'".format(match_expr)
        return RaisesContext(expected_exception, message, match_expr)
    elif isinstance(args[0], str):
        code, = args
        assert isinstance(code, str)
        frame = sys._getframe(1)
        loc = frame.f_locals.copy()
        loc.update(kwargs)
        # print "raises frame scope: %r" % frame.f_locals
        try:
            code = _pytest._code.Source(code).compile()
            py.builtin.exec_(code, frame.f_globals, loc)
            # XXX didn't mean f_globals == f_locals something special?
            # this is destroyed here ...
        except expected_exception:
            return _pytest._code.ExceptionInfo()
    else:
        func = args[0]
        try:
            func(*args[1:], **kwargs)
        except expected_exception:
            return _pytest._code.ExceptionInfo()
    fail(message)


raises.Exception = fail.Exception


class RaisesContext(object):
    def __init__(self, expected_exception, message, match_expr):
        self.expected_exception = expected_exception
        self.message = message
        self.match_expr = match_expr
        self.excinfo = None

    def __enter__(self):
        self.excinfo = object.__new__(_pytest._code.ExceptionInfo)
        return self.excinfo

    def __exit__(self, *tp):
        __tracebackhide__ = True
        if tp[0] is None:
            fail(self.message)
        if sys.version_info < (2, 7):
            # py26: on __exit__() exc_value often does not contain the
            # exception value.
            # http://bugs.python.org/issue7853
            if not isinstance(tp[1], BaseException):
                exc_type, value, traceback = tp
                tp = exc_type, exc_type(value), traceback
        self.excinfo.__init__(tp)
        suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
        if sys.version_info[0] == 2 and suppress_exception:
            sys.exc_clear()
        if self.match_expr:
            self.excinfo.match(self.match_expr)
        return suppress_exception


# builtin pytest.approx helper

class approx(object):
    """
    Assert that two numbers (or two sets of numbers) are equal to each other
    within some tolerance.

    Due to the `intricacies of floating-point arithmetic`__, numbers that we
    would intuitively expect to be equal are not always so::

        >>> 0.1 + 0.2 == 0.3
        False

    __ https://docs.python.org/3/tutorial/floatingpoint.html

    This problem is commonly encountered when writing tests, e.g. when making
    sure that floating-point values are what you expect them to be. One way to
    deal with this problem is to assert that two floating-point numbers are
    equal to within some appropriate tolerance::

        >>> abs((0.1 + 0.2) - 0.3) < 1e-6
        True

    However, comparisons like this are tedious to write and difficult to
    understand. Furthermore, absolute comparisons like the one above are
    usually discouraged because there's no tolerance that works well for all
    situations. ``1e-6`` is good for numbers around ``1``, but too small for
    very big numbers and too big for very small ones. It's better to express
    the tolerance as a fraction of the expected value, but relative comparisons
    like that are even more difficult to write correctly and concisely.

    The ``approx`` class performs floating-point comparisons using a syntax
    that's as intuitive as possible::

        >>> from pytest import approx
        >>> 0.1 + 0.2 == approx(0.3)
        True

    The same syntax also works on sequences of numbers::

        >>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
        True

    By default, ``approx`` considers numbers within a relative tolerance of
    ``1e-6`` (i.e. one part in a million) of its expected value to be equal.
    This treatment would lead to surprising results if the expected value was
    ``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
    To handle this case less surprisingly, ``approx`` also considers numbers
    within an absolute tolerance of ``1e-12`` of its expected value to be
    equal. Infinite numbers are another special case. They are only
    considered equal to themselves, regardless of the relative tolerance. Both
    the relative and absolute tolerances can be changed by passing arguments to
    the ``approx`` constructor::

        >>> 1.0001 == approx(1)
        False
        >>> 1.0001 == approx(1, rel=1e-3)
        True
        >>> 1.0001 == approx(1, abs=1e-3)
        True

    If you specify ``abs`` but not ``rel``, the comparison will not consider
    the relative tolerance at all. In other words, two numbers that are within
    the default relative tolerance of ``1e-6`` will still be considered unequal
    if they exceed the specified absolute tolerance. If you specify both
    ``abs`` and ``rel``, the numbers will be considered equal if either
    tolerance is met::

        >>> 1 + 1e-8 == approx(1)
        True
        >>> 1 + 1e-8 == approx(1, abs=1e-12)
        False
        >>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
        True

    If you're thinking about using ``approx``, then you might want to know how
    it compares to other good ways of comparing floating-point numbers. All of
    these algorithms are based on relative and absolute tolerances and should
    agree for the most part, but they do have meaningful differences:

    - ``math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0)``: True if the relative
      tolerance is met w.r.t. either ``a`` or ``b`` or if the absolute
      tolerance is met. Because the relative tolerance is calculated w.r.t.
      both ``a`` and ``b``, this test is symmetric (i.e. neither ``a`` nor
      ``b`` is a "reference value"). You have to specify an absolute tolerance
      if you want to compare to ``0.0`` because there is no tolerance by
      default. Only available in python>=3.5. `More information...`__

      __ https://docs.python.org/3/library/math.html#math.isclose

    - ``numpy.isclose(a, b, rtol=1e-5, atol=1e-8)``: True if the difference
      between ``a`` and ``b`` is less that the sum of the relative tolerance
      w.r.t. ``b`` and the absolute tolerance. Because the relative tolerance
      is only calculated w.r.t. ``b``, this test is asymmetric and you can
      think of ``b`` as the reference value. Support for comparing sequences
      is provided by ``numpy.allclose``. `More information...`__

      __ http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.isclose.html

    - ``unittest.TestCase.assertAlmostEqual(a, b)``: True if ``a`` and ``b``
      are within an absolute tolerance of ``1e-7``. No relative tolerance is
      considered and the absolute tolerance cannot be changed, so this function
      is not appropriate for very large or very small numbers. Also, it's only
      available in subclasses of ``unittest.TestCase`` and it's ugly because it
      doesn't follow PEP8. `More information...`__

      __ https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual

    - ``a == pytest.approx(b, rel=1e-6, abs=1e-12)``: True if the relative
      tolerance is met w.r.t. ``b`` or if the absolute tolerance is met.
      Because the relative tolerance is only calculated w.r.t. ``b``, this test
      is asymmetric and you can think of ``b`` as the reference value. In the
      special case that you explicitly specify an absolute tolerance but not a
      relative tolerance, only the absolute tolerance is considered.
    """

    def __init__(self, expected, rel=None, abs=None):
        self.expected = expected
        self.abs = abs
        self.rel = rel

    def __repr__(self):
        return ', '.join(repr(x) for x in self.expected)

    def __eq__(self, actual):
        from collections import Iterable
        if not isinstance(actual, Iterable):
            actual = [actual]
        if len(actual) != len(self.expected):
            return False
        return all(a == x for a, x in zip(actual, self.expected))

    __hash__ = None

    def __ne__(self, actual):
        return not (actual == self)

    @property
    def expected(self):
        # Regardless of whether the user-specified expected value is a number
        # or a sequence of numbers, return a list of ApproxNotIterable objects
        # that can be compared against.
        from collections import Iterable

        def approx_non_iter(x):
            return ApproxNonIterable(x, self.rel, self.abs)

        if isinstance(self._expected, Iterable):
            return [approx_non_iter(x) for x in self._expected]
        else:
            return [approx_non_iter(self._expected)]

    @expected.setter
    def expected(self, expected):
        self._expected = expected


class ApproxNonIterable(object):
    """
    Perform approximate comparisons for single numbers only.

    In other words, the ``expected`` attribute for objects of this class must
    be some sort of number. This is in contrast to the ``approx`` class, where
    the ``expected`` attribute can either be a number or a sequence of numbers.
    This class is responsible for making comparisons, while ``approx`` is
    responsible for abstracting the difference between numbers and sequences of
    numbers. Although this class can stand on its own, it's only meant to be
    used within ``approx``.
    """

    def __init__(self, expected, rel=None, abs=None):
        self.expected = expected
        self.abs = abs
        self.rel = rel

    def __repr__(self):
        if isinstance(self.expected, complex):
            return str(self.expected)

        # Infinities aren't compared using tolerances, so don't show a
        # tolerance.
        if math.isinf(self.expected):
            return str(self.expected)

        # If a sensible tolerance can't be calculated, self.tolerance will
        # raise a ValueError. In this case, display '???'.
        try:
            vetted_tolerance = '{:.1e}'.format(self.tolerance)
        except ValueError:
            vetted_tolerance = '???'

        if sys.version_info[0] == 2:
            return '{0} +- {1}'.format(self.expected, vetted_tolerance)
        else:
            return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)

    def __eq__(self, actual):
        # Short-circuit exact equality.
        if actual == self.expected:
            return True

        # Infinity shouldn't be approximately equal to anything but itself, but
        # if there's a relative tolerance, it will be infinite and infinity
        # will seem approximately equal to everything. The equal-to-itself
        # case would have been short circuited above, so here we can just
        # return false if the expected value is infinite. The abs() call is
        # for compatibility with complex numbers.
        if math.isinf(abs(self.expected)):
            return False

        # Return true if the two numbers are within the tolerance.
        return abs(self.expected - actual) <= self.tolerance

    __hash__ = None

    def __ne__(self, actual):
        return not (actual == self)

    @property
    def tolerance(self):
        def set_default(x, default):
            return x if x is not None else default

        # Figure out what the absolute tolerance should be. ``self.abs`` is
        # either None or a value specified by the user.
        absolute_tolerance = set_default(self.abs, 1e-12)

        if absolute_tolerance < 0:
            raise ValueError("absolute tolerance can't be negative: {}".format(absolute_tolerance))
        if math.isnan(absolute_tolerance):
            raise ValueError("absolute tolerance can't be NaN.")

        # If the user specified an absolute tolerance but not a relative one,
        # just return the absolute tolerance.
        if self.rel is None:
            if self.abs is not None:
                return absolute_tolerance

        # Figure out what the relative tolerance should be. ``self.rel`` is
        # either None or a value specified by the user. This is done after
        # we've made sure the user didn't ask for an absolute tolerance only,
        # because we don't want to raise errors about the relative tolerance if
        # we aren't even going to use it.
        relative_tolerance = set_default(self.rel, 1e-6) * abs(self.expected)

        if relative_tolerance < 0:
            raise ValueError("relative tolerance can't be negative: {}".format(absolute_tolerance))
        if math.isnan(relative_tolerance):
            raise ValueError("relative tolerance can't be NaN.")

        # Return the larger of the relative and absolute tolerances.
        return max(relative_tolerance, absolute_tolerance)


#
# the basic pytest Function item
#

class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
    """ a Function Item is responsible for setting up and executing a
    Python test function.

@ -0,0 +1,616 @@
|
|||
import math
|
||||
import sys
|
||||
|
||||
import py
|
||||
|
||||
from _pytest.compat import isclass, izip
|
||||
from _pytest.outcomes import fail
|
||||
import _pytest._code
|
||||
|
||||
|
||||
def _cmp_raises_type_error(self, other):
|
||||
"""__cmp__ implementation which raises TypeError. Used
|
||||
by Approx base classes to implement only == and != and raise a
|
||||
TypeError for other comparisons.
|
||||
|
||||
Needed in Python 2 only, Python 3 all it takes is not implementing the
|
||||
other operators at all.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
raise TypeError('Comparison operators other than == and != not supported by approx objects')
|
||||
|
||||
|
||||
# builtin pytest.approx helper
|
||||
|
||||
|
||||
class ApproxBase(object):
|
||||
"""
|
||||
Provide shared utilities for making approximate comparisons between numbers
|
||||
or sequences of numbers.
|
||||
"""
|
||||
|
||||
def __init__(self, expected, rel=None, abs=None, nan_ok=False):
|
||||
self.expected = expected
|
||||
self.abs = abs
|
||||
self.rel = rel
|
||||
self.nan_ok = nan_ok
|
||||
|
||||
def __repr__(self):
|
||||
raise NotImplementedError
|
||||
|
||||
def __eq__(self, actual):
|
||||
return all(
|
||||
a == self._approx_scalar(x)
|
||||
for a, x in self._yield_comparisons(actual))
|
||||
|
||||
__hash__ = None
|
||||
|
||||
def __ne__(self, actual):
|
||||
return not (actual == self)
|
||||
|
||||
if sys.version_info[0] == 2:
|
||||
__cmp__ = _cmp_raises_type_error
|
||||
|
||||
def _approx_scalar(self, x):
|
||||
return ApproxScalar(x, rel=self.rel, abs=self.abs, nan_ok=self.nan_ok)
|
||||
|
||||
def _yield_comparisons(self, actual):
|
||||
"""
|
||||
Yield all the pairs of numbers to be compared. This is used to
|
||||
implement the `__eq__` method.
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
class ApproxNumpy(ApproxBase):
|
||||
"""
|
||||
Perform approximate comparisons for numpy arrays.
|
||||
"""
|
||||
|
||||
# Tell numpy to use our `__eq__` operator instead of its.
|
||||
__array_priority__ = 100
|
||||
|
||||
def __repr__(self):
|
||||
# It might be nice to rewrite this function to account for the
|
||||
# shape of the array...
|
||||
return "approx({0!r})".format(list(
|
||||
self._approx_scalar(x) for x in self.expected))
|
||||
|
||||
if sys.version_info[0] == 2:
|
||||
__cmp__ = _cmp_raises_type_error
|
||||
|
||||
def __eq__(self, actual):
|
||||
import numpy as np
|
||||
|
||||
try:
|
||||
actual = np.asarray(actual)
|
||||
except:
|
||||
raise TypeError("cannot compare '{0}' to numpy.ndarray".format(actual))
|
||||
|
||||
if actual.shape != self.expected.shape:
|
||||
return False
|
||||
|
||||
return ApproxBase.__eq__(self, actual)
|
||||
|
||||
def _yield_comparisons(self, actual):
|
||||
import numpy as np
|
||||
|
||||
# We can be sure that `actual` is a numpy array, because it's
|
||||
# casted in `__eq__` before being passed to `ApproxBase.__eq__`,
|
||||
# which is the only method that calls this one.
|
||||
for i in np.ndindex(self.expected.shape):
|
||||
yield actual[i], self.expected[i]
|
||||
|
||||
|
||||
class ApproxMapping(ApproxBase):
|
||||
"""
|
||||
Perform approximate comparisons for mappings where the values are numbers
|
||||
(the keys can be anything).
|
||||
"""
|
||||
|
||||
def __repr__(self):
|
||||
return "approx({0!r})".format(dict(
|
||||
(k, self._approx_scalar(v))
|
||||
for k, v in self.expected.items()))
|
||||
|
||||
def __eq__(self, actual):
|
||||
if set(actual.keys()) != set(self.expected.keys()):
|
||||
return False
|
||||
|
||||
return ApproxBase.__eq__(self, actual)
|
||||
|
||||
def _yield_comparisons(self, actual):
|
||||
for k in self.expected.keys():
|
||||
yield actual[k], self.expected[k]


class ApproxSequence(ApproxBase):
    """
    Perform approximate comparisons for sequences of numbers.
    """

    # Tell numpy to use our `__eq__` operator instead of its own.
    __array_priority__ = 100

    def __repr__(self):
        seq_type = type(self.expected)
        if seq_type not in (tuple, list, set):
            seq_type = list
        return "approx({0!r})".format(seq_type(
            self._approx_scalar(x) for x in self.expected))

    def __eq__(self, actual):
        if len(actual) != len(self.expected):
            return False
        return ApproxBase.__eq__(self, actual)

    def _yield_comparisons(self, actual):
        return izip(actual, self.expected)


class ApproxScalar(ApproxBase):
    """
    Perform approximate comparisons for single numbers only.
    """

    def __repr__(self):
        """
        Return a string communicating both the expected value and the tolerance
        for the comparison being made, e.g. '1.0 +- 1e-6'. Use the unicode
        plus/minus symbol if this is python3 (it's too hard to get right for
        python2).
        """
        if isinstance(self.expected, complex):
            return str(self.expected)

        # Infinities aren't compared using tolerances, so don't show a
        # tolerance.
        if math.isinf(self.expected):
            return str(self.expected)

        # If a sensible tolerance can't be calculated, self.tolerance will
        # raise a ValueError. In this case, display '???'.
        try:
            vetted_tolerance = '{:.1e}'.format(self.tolerance)
        except ValueError:
            vetted_tolerance = '???'

        if sys.version_info[0] == 2:
            return '{0} +- {1}'.format(self.expected, vetted_tolerance)
        else:
            return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)

    def __eq__(self, actual):
        """
        Return true if the given value is equal to the expected value within
        the pre-specified tolerance.
        """

        # Short-circuit exact equality.
        if actual == self.expected:
            return True

        # Allow the user to control whether NaNs are considered equal to each
        # other or not. The abs() calls are for compatibility with complex
        # numbers.
        if math.isnan(abs(self.expected)):
            return self.nan_ok and math.isnan(abs(actual))

        # Infinity shouldn't be approximately equal to anything but itself, but
        # if there's a relative tolerance, it will be infinite and infinity
        # will seem approximately equal to everything. The equal-to-itself
        # case would have been short-circuited above, so here we can just
        # return false if the expected value is infinite. The abs() call is
        # for compatibility with complex numbers.
        if math.isinf(abs(self.expected)):
            return False

        # Return true if the two numbers are within the tolerance.
        return abs(self.expected - actual) <= self.tolerance

    __hash__ = None

    @property
    def tolerance(self):
        """
        Return the tolerance for the comparison. This could be either an
        absolute tolerance or a relative tolerance, depending on what the user
        specified or which would be larger.
        """
        def set_default(x, default):
            return x if x is not None else default

        # Figure out what the absolute tolerance should be. ``self.abs`` is
        # either None or a value specified by the user.
        absolute_tolerance = set_default(self.abs, 1e-12)

        if absolute_tolerance < 0:
            raise ValueError("absolute tolerance can't be negative: {}".format(absolute_tolerance))
        if math.isnan(absolute_tolerance):
            raise ValueError("absolute tolerance can't be NaN.")

        # If the user specified an absolute tolerance but not a relative one,
        # just return the absolute tolerance.
        if self.rel is None:
            if self.abs is not None:
                return absolute_tolerance

        # Figure out what the relative tolerance should be. ``self.rel`` is
        # either None or a value specified by the user. This is done after
        # we've made sure the user didn't ask for an absolute tolerance only,
        # because we don't want to raise errors about the relative tolerance if
        # we aren't even going to use it.
        relative_tolerance = set_default(self.rel, 1e-6) * abs(self.expected)

        if relative_tolerance < 0:
            raise ValueError("relative tolerance can't be negative: {}".format(relative_tolerance))
        if math.isnan(relative_tolerance):
            raise ValueError("relative tolerance can't be NaN.")

        # Return the larger of the relative and absolute tolerances.
        return max(relative_tolerance, absolute_tolerance)
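
# A worked example (added for exposition, not part of the module): with the
# defaults, approx(1e6).tolerance == max(1e-6 * 1e6, 1e-12) == 1.0, while
# approx(0.0).tolerance falls back to the absolute default of 1e-12, which is
# why comparisons against zero can still succeed.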


def approx(expected, rel=None, abs=None, nan_ok=False):
    """
    Assert that two numbers (or two sets of numbers) are equal to each other
    within some tolerance.

    Due to the `intricacies of floating-point arithmetic`__, numbers that we
    would intuitively expect to be equal are not always so::

        >>> 0.1 + 0.2 == 0.3
        False

    __ https://docs.python.org/3/tutorial/floatingpoint.html

    This problem is commonly encountered when writing tests, e.g. when making
    sure that floating-point values are what you expect them to be. One way to
    deal with this problem is to assert that two floating-point numbers are
    equal to within some appropriate tolerance::

        >>> abs((0.1 + 0.2) - 0.3) < 1e-6
        True

    However, comparisons like this are tedious to write and difficult to
    understand. Furthermore, absolute comparisons like the one above are
    usually discouraged because there's no tolerance that works well for all
    situations. ``1e-6`` is good for numbers around ``1``, but too small for
    very big numbers and too big for very small ones. It's better to express
    the tolerance as a fraction of the expected value, but relative comparisons
    like that are even more difficult to write correctly and concisely.

    The ``approx`` class performs floating-point comparisons using a syntax
    that's as intuitive as possible::

        >>> from pytest import approx
        >>> 0.1 + 0.2 == approx(0.3)
        True

    The same syntax also works for sequences of numbers::

        >>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
        True

    Dictionary *values*::

        >>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
        True

    And ``numpy`` arrays::

        >>> import numpy as np  # doctest: +SKIP
        >>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))  # doctest: +SKIP
        True

    By default, ``approx`` considers numbers within a relative tolerance of
    ``1e-6`` (i.e. one part in a million) of its expected value to be equal.
    This treatment would lead to surprising results if the expected value was
    ``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
    To handle this case less surprisingly, ``approx`` also considers numbers
    within an absolute tolerance of ``1e-12`` of its expected value to be
    equal. Infinity and NaN are special cases. Infinity is only considered
    equal to itself, regardless of the relative tolerance. NaN is not
    considered equal to anything by default, but you can make it be equal to
    itself by setting the ``nan_ok`` argument to True. (This is meant to
    facilitate comparing arrays that use NaN to mean "no data".)

    Both the relative and absolute tolerances can be changed by passing
    arguments to the ``approx`` constructor::

        >>> 1.0001 == approx(1)
        False
        >>> 1.0001 == approx(1, rel=1e-3)
        True
        >>> 1.0001 == approx(1, abs=1e-3)
        True

    If you specify ``abs`` but not ``rel``, the comparison will not consider
    the relative tolerance at all. In other words, two numbers that are within
    the default relative tolerance of ``1e-6`` will still be considered unequal
    if they exceed the specified absolute tolerance. If you specify both
    ``abs`` and ``rel``, the numbers will be considered equal if either
    tolerance is met::

        >>> 1 + 1e-8 == approx(1)
        True
        >>> 1 + 1e-8 == approx(1, abs=1e-12)
        False
        >>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
        True

    If you're thinking about using ``approx``, then you might want to know how
    it compares to other good ways of comparing floating-point numbers. All of
    these algorithms are based on relative and absolute tolerances and should
    agree for the most part, but they do have meaningful differences:

    - ``math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0)``: True if the relative
      tolerance is met w.r.t. either ``a`` or ``b`` or if the absolute
      tolerance is met. Because the relative tolerance is calculated w.r.t.
      both ``a`` and ``b``, this test is symmetric (i.e. neither ``a`` nor
      ``b`` is a "reference value"). You have to specify an absolute tolerance
      if you want to compare to ``0.0`` because there is no tolerance by
      default. Only available in python>=3.5. `More information...`__

      __ https://docs.python.org/3/library/math.html#math.isclose

    - ``numpy.isclose(a, b, rtol=1e-5, atol=1e-8)``: True if the difference
      between ``a`` and ``b`` is less than the sum of the relative tolerance
      w.r.t. ``b`` and the absolute tolerance. Because the relative tolerance
      is only calculated w.r.t. ``b``, this test is asymmetric and you can
      think of ``b`` as the reference value. Support for comparing sequences
      is provided by ``numpy.allclose``. `More information...`__

      __ http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.isclose.html

    - ``unittest.TestCase.assertAlmostEqual(a, b)``: True if ``a`` and ``b``
      are within an absolute tolerance of ``1e-7``. No relative tolerance is
      considered and the absolute tolerance cannot be changed, so this function
      is not appropriate for very large or very small numbers. Also, it's only
      available in subclasses of ``unittest.TestCase`` and it's ugly because it
      doesn't follow PEP8. `More information...`__

      __ https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual

    - ``a == pytest.approx(b, rel=1e-6, abs=1e-12)``: True if the relative
      tolerance is met w.r.t. ``b`` or if the absolute tolerance is met.
      Because the relative tolerance is only calculated w.r.t. ``b``, this test
      is asymmetric and you can think of ``b`` as the reference value. In the
      special case that you explicitly specify an absolute tolerance but not a
      relative tolerance, only the absolute tolerance is considered.

    .. warning::

       .. versionchanged:: 3.2

       In order to avoid inconsistent behavior, ``TypeError`` is
       raised for ``>``, ``>=``, ``<`` and ``<=`` comparisons.
       The example below illustrates the problem::

           assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
           assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)

       In the second example one expects ``approx(0.1).__le__(0.1 + 1e-10)``
       to be called. But instead, ``approx(0.1).__lt__(0.1 + 1e-10)`` is used
       for the comparison. This is because the call hierarchy of rich
       comparisons follows a fixed behavior. `More information...`__

       __ https://docs.python.org/3/reference/datamodel.html#object.__ge__
    """

    from collections import Mapping, Sequence
    from _pytest.compat import STRING_TYPES as String

    # Delegate the comparison to a class that knows how to deal with the type
    # of the expected value (e.g. int, float, list, dict, numpy.array, etc).
    #
    # This architecture is really driven by the need to support numpy arrays.
    # The only way to override `==` for arrays without requiring that approx be
    # the left operand is to inherit the approx object from `numpy.ndarray`.
    # But that can't be a general solution, because it requires (1) numpy to be
    # installed and (2) the expected value to be a numpy array. So the general
    # solution is to delegate each type of expected value to a different class.
    #
    # This has the advantage that it made it easy to support mapping types
    # (i.e. dict). The old code accepted mapping types, but would only compare
    # their keys, which is probably not what most people would expect.

    if _is_numpy_array(expected):
        cls = ApproxNumpy
    elif isinstance(expected, Mapping):
        cls = ApproxMapping
    elif isinstance(expected, Sequence) and not isinstance(expected, String):
        cls = ApproxSequence
    else:
        cls = ApproxScalar

    return cls(expected, rel, abs, nan_ok)
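
# Dispatch at a glance (illustrative, not part of the module): approx(0.3)
# builds an ApproxScalar, approx([0.3, 0.6]) an ApproxSequence,
# approx({'a': 0.3}) an ApproxMapping, and approx(numpy.array([0.3]))
# an ApproxNumpy.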


def _is_numpy_array(obj):
    """
    Return true if the given object is a numpy array. Make a special effort to
    avoid importing numpy unless it's really necessary.
    """
    import inspect

    for cls in inspect.getmro(type(obj)):
        if cls.__module__ == 'numpy':
            try:
                import numpy as np
                return isinstance(obj, np.ndarray)
            except ImportError:
                pass

    return False
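
# Illustrative (not part of the module): _is_numpy_array(3.0) walks float's
# MRO, finds nothing from the 'numpy' module, and returns False without ever
# importing numpy; numpy scalars such as numpy.float64(3.0) also return False,
# since only ndarray instances count as arrays here.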


# builtin pytest.raises helper

def raises(expected_exception, *args, **kwargs):
    """
    Assert that a code block/function call raises ``expected_exception``
    and raise a failure exception otherwise.

    This helper produces an ``ExceptionInfo()`` object (see below).

    If using Python 2.5 or above, you may use this function as a
    context manager::

        >>> with raises(ZeroDivisionError):
        ...    1/0

    .. versionchanged:: 2.10

    In the context manager form you may use the keyword argument
    ``message`` to specify a custom failure message::

        >>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
        ...    pass
        Traceback (most recent call last):
          ...
        Failed: Expecting ZeroDivisionError

    .. note::

       When using ``pytest.raises`` as a context manager, it's worthwhile to
       note that normal context manager rules apply and that the exception
       raised *must* be the final line in the scope of the context manager.
       Lines of code after that, within the scope of the context manager, will
       not be executed. For example::

           >>> value = 15
           >>> with raises(ValueError) as exc_info:
           ...     if value > 10:
           ...         raise ValueError("value must be <= 10")
           ...     assert exc_info.type == ValueError  # this will not execute

       Instead, the following approach must be taken (note the difference in
       scope)::

           >>> with raises(ValueError) as exc_info:
           ...     if value > 10:
           ...         raise ValueError("value must be <= 10")
           ...
           >>> assert exc_info.type == ValueError

    Or you can use the keyword argument ``match`` to assert that the
    exception matches a text or regex::

        >>> with raises(ValueError, match='must be 0 or None'):
        ...     raise ValueError("value must be 0 or None")

        >>> with raises(ValueError, match=r'must be \d+$'):
        ...     raise ValueError("value must be 42")

    Or you can specify a callable by passing a to-be-called lambda::

        >>> raises(ZeroDivisionError, lambda: 1/0)
        <ExceptionInfo ...>

    or you can specify an arbitrary callable with arguments::

        >>> def f(x): return 1/x
        ...
        >>> raises(ZeroDivisionError, f, 0)
        <ExceptionInfo ...>
        >>> raises(ZeroDivisionError, f, x=0)
        <ExceptionInfo ...>

    A third possibility is to use a string to be executed::

        >>> raises(ZeroDivisionError, "f(0)")
        <ExceptionInfo ...>

    .. autoclass:: _pytest._code.ExceptionInfo
        :members:

    .. note::
        Similar to caught exception objects in Python, explicitly clearing
        local references to returned ``ExceptionInfo`` objects can
        help the Python interpreter speed up its garbage collection.

        Clearing those references breaks a reference cycle
        (``ExceptionInfo`` --> caught exception --> frame stack raising
        the exception --> current frame stack --> local variables -->
        ``ExceptionInfo``) which makes Python keep all objects referenced
        from that cycle (including all local variables in the current
        frame) alive until the next cyclic garbage collection run. See the
        official Python ``try`` statement documentation for more detailed
        information.

    """
    __tracebackhide__ = True
    msg = ("exceptions must be old-style classes or"
           " derived from BaseException, not %s")
    if isinstance(expected_exception, tuple):
        for exc in expected_exception:
            if not isclass(exc):
                raise TypeError(msg % type(exc))
    elif not isclass(expected_exception):
        raise TypeError(msg % type(expected_exception))

    message = "DID NOT RAISE {0}".format(expected_exception)
    match_expr = None

    if not args:
        if "message" in kwargs:
            message = kwargs.pop("message")
        if "match" in kwargs:
            match_expr = kwargs.pop("match")
            message += " matching '{0}'".format(match_expr)
        return RaisesContext(expected_exception, message, match_expr)
    elif isinstance(args[0], str):
        code, = args
        assert isinstance(code, str)
        frame = sys._getframe(1)
        loc = frame.f_locals.copy()
        loc.update(kwargs)
        # print "raises frame scope: %r" % frame.f_locals
        try:
            code = _pytest._code.Source(code).compile()
            py.builtin.exec_(code, frame.f_globals, loc)
            # XXX didn't mean f_globals == f_locals something special?
            # this is destroyed here ...
        except expected_exception:
            return _pytest._code.ExceptionInfo()
    else:
        func = args[0]
        try:
            func(*args[1:], **kwargs)
        except expected_exception:
            return _pytest._code.ExceptionInfo()
    fail(message)


raises.Exception = fail.Exception


class RaisesContext(object):
    def __init__(self, expected_exception, message, match_expr):
        self.expected_exception = expected_exception
        self.message = message
        self.match_expr = match_expr
        self.excinfo = None

    def __enter__(self):
        self.excinfo = object.__new__(_pytest._code.ExceptionInfo)
        return self.excinfo

    def __exit__(self, *tp):
        __tracebackhide__ = True
        if tp[0] is None:
            fail(self.message)
        if sys.version_info < (2, 7):
            # py26: on __exit__() exc_value often does not contain the
            # exception value.
            # http://bugs.python.org/issue7853
            if not isinstance(tp[1], BaseException):
                exc_type, value, traceback = tp
                tp = exc_type, exc_type(value), traceback
        self.excinfo.__init__(tp)
        suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
        if sys.version_info[0] == 2 and suppress_exception:
            sys.exc_clear()
        if self.match_expr:
            self.excinfo.match(self.match_expr)
        return suppress_exception

@ -7,7 +7,9 @@ import _pytest._code
import py
import sys
import warnings

from _pytest.fixtures import yield_fixture
from _pytest.outcomes import fail


@yield_fixture

@ -197,7 +199,6 @@ class WarningsChecker(WarningsRecorder):
        if not any(issubclass(r.category, self.expected_warning)
                   for r in self):
            __tracebackhide__ = True
            from _pytest.runner import fail
            fail("DID NOT WARN. No warnings of type {0} were emitted. "
                 "The list of emitted warnings is: {1}.".format(
                     self.expected_warning,

@ -2,16 +2,18 @@
from __future__ import absolute_import, division, print_function

import bdb
import os
import sys
from time import time

import py
from _pytest._code.code import TerminalRepr, ExceptionInfo

from _pytest.outcomes import skip, Skipped, TEST_OUTCOME

#
# pytest plugin hooks


def pytest_addoption(parser):
    group = parser.getgroup("terminal reporting", "reporting", after="general")
    group.addoption('--durations',

@ -99,10 +101,12 @@ def show_test_item(item):


def pytest_runtest_setup(item):
    _update_current_test_var(item, 'setup')
    item.session._setupstate.prepare(item)


def pytest_runtest_call(item):
    _update_current_test_var(item, 'call')
    try:
        item.runtest()
    except Exception:

@ -117,7 +121,22 @@ def pytest_runtest_call(item):


def pytest_runtest_teardown(item, nextitem):
    _update_current_test_var(item, 'teardown')
    item.session._setupstate.teardown_exact(item, nextitem)
    _update_current_test_var(item, None)


def _update_current_test_var(item, when):
    """
    Update PYTEST_CURRENT_TEST to reflect the current item and stage.

    If ``when`` is None, delete PYTEST_CURRENT_TEST from the environment.
    """
    var_name = 'PYTEST_CURRENT_TEST'
    if when:
        os.environ[var_name] = '{0} ({1})'.format(item.nodeid, when)
    else:
        os.environ.pop(var_name)


def pytest_report_teststatus(report):

@ -427,7 +446,7 @@ class SetupState(object):
        fin = finalizers.pop()
        try:
            fin()
        except Exception:
        except TEST_OUTCOME:
            # XXX Only first exception will be seen by user,
            # ideally all should be reported.
            if exc is None:

@ -474,7 +493,7 @@ class SetupState(object):
        self.stack.append(col)
        try:
            col.setup()
        except Exception:
        except TEST_OUTCOME:
            col._prepare_exc = sys.exc_info()
            raise

@ -487,126 +506,3 @@ def collect_one_node(collector):
    if call and check_interactive_exception(call, rep):
        ihook.pytest_exception_interact(node=collector, call=call, report=rep)
    return rep


# =============================================================
# Test OutcomeExceptions and helpers for creating them.


class OutcomeException(Exception):
    """ OutcomeException and its subclass instances indicate and
    contain info about test and collection outcomes.
    """

    def __init__(self, msg=None, pytrace=True):
        Exception.__init__(self, msg)
        self.msg = msg
        self.pytrace = pytrace

    def __repr__(self):
        if self.msg:
            val = self.msg
            if isinstance(val, bytes):
                val = py._builtin._totext(val, errors='replace')
            return val
        return "<%s instance>" % (self.__class__.__name__,)
    __str__ = __repr__


class Skipped(OutcomeException):
    # XXX hackish: on 3k we fake to live in the builtins
    # in order to have Skipped exception printing shorter/nicer
    __module__ = 'builtins'

    def __init__(self, msg=None, pytrace=True, allow_module_level=False):
        OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
        self.allow_module_level = allow_module_level


class Failed(OutcomeException):
    """ raised from an explicit call to pytest.fail() """
    __module__ = 'builtins'


class Exit(KeyboardInterrupt):
    """ raised for immediate program exits (no tracebacks/summaries)"""

    def __init__(self, msg="unknown reason"):
        self.msg = msg
        KeyboardInterrupt.__init__(self, msg)


# exposed helper methods


def exit(msg):
    """ exit the testing process as if KeyboardInterrupt was triggered. """
    __tracebackhide__ = True
    raise Exit(msg)


exit.Exception = Exit


def skip(msg=""):
    """ skip an executing test with the given message. Note: it's usually
    better to use the pytest.mark.skipif marker to declare a test to be
    skipped under certain conditions like mismatching platforms or
    dependencies. See the pytest_skipping plugin for details.
    """
    __tracebackhide__ = True
    raise Skipped(msg=msg)


skip.Exception = Skipped


def fail(msg="", pytrace=True):
    """ explicitly fail a currently-executing test with the given message.

    :arg pytrace: if false the msg represents the full failure information
        and no python traceback will be reported.
    """
    __tracebackhide__ = True
    raise Failed(msg=msg, pytrace=pytrace)


fail.Exception = Failed
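
# Usage sketch (added for exposition, not part of the module): inside a test
# or fixture these helpers can be called directly, e.g.
#     pytest.skip("requires network access")
#     pytest.fail("unexpected state", pytrace=False)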


def importorskip(modname, minversion=None):
    """ return imported module if it has at least "minversion" as its
    __version__ attribute. If no minversion is specified, a skip
    is only triggered if the module can not be imported.
    """
    import warnings
    __tracebackhide__ = True
    compile(modname, '', 'eval')  # to catch syntax errors
    should_skip = False

    with warnings.catch_warnings():
        # make sure to ignore ImportWarnings that might happen because
        # of existing directories with the same name we're trying to
        # import but without a __init__.py file
        warnings.simplefilter('ignore')
        try:
            __import__(modname)
        except ImportError:
            # Do not raise chained exception here (#1485)
            should_skip = True
    if should_skip:
        raise Skipped("could not import %r" % (modname,), allow_module_level=True)
    mod = sys.modules[modname]
    if minversion is None:
        return mod
    verattr = getattr(mod, '__version__', None)
    if minversion is not None:
        try:
            from pkg_resources import parse_version as pv
        except ImportError:
            raise Skipped("we have a required version for %r but can not import "
                          "pkg_resources to parse version strings." % (modname,),
                          allow_module_level=True)
        if verattr is None or pv(verattr) < pv(minversion):
            raise Skipped("module %r has __version__ %r, required is: %r" % (
                modname, verattr, minversion), allow_module_level=True)
    return mod
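
# Usage sketch (illustrative): skip the calling module unless a new enough
# docutils is importable.
#     docutils = pytest.importorskip("docutils", minversion="0.3")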

@ -8,7 +8,7 @@ import traceback
import py
from _pytest.config import hookimpl
from _pytest.mark import MarkInfo, MarkDecorator
from _pytest.runner import fail, skip
from _pytest.outcomes import fail, skip, xfail, TEST_OUTCOME


def pytest_addoption(parser):

@ -34,7 +34,7 @@ def pytest_configure(config):
    def nop(*args, **kwargs):
        pass

    nop.Exception = XFailed
    nop.Exception = xfail.Exception
    setattr(pytest, "xfail", nop)

    config.addinivalue_line("markers",

@ -60,19 +60,6 @@ def pytest_configure(config):
    )


class XFailed(fail.Exception):
    """ raised from an explicit call to pytest.xfail() """


def xfail(reason=""):
    """ xfail an executing test or setup functions with the given reason."""
    __tracebackhide__ = True
    raise XFailed(reason)


xfail.Exception = XFailed


class MarkEvaluator:
    def __init__(self, item, name):
        self.item = item

@ -98,7 +85,7 @@ class MarkEvaluator:
    def istrue(self):
        try:
            return self._istrue()
        except Exception:
        except TEST_OUTCOME:
            self.exc = sys.exc_info()
            if isinstance(self.exc[1], SyntaxError):
                msg = [" " * (self.exc[1].offset + 4) + "^", ]

@ -323,6 +323,9 @@ class TerminalReporter:
        self.write_line(msg)
        lines = self.config.hook.pytest_report_header(
            config=self.config, startdir=self.startdir)
        self._write_report_lines_from_hooks(lines)

    def _write_report_lines_from_hooks(self, lines):
        lines.reverse()
        for line in flatten(lines):
            self.write_line(line)

@ -349,10 +352,9 @@ class TerminalReporter:
            rep.toterminal(self._tw)
            return 1
        return 0
        if not self.showheader:
            return
        # for i, testarg in enumerate(self.config.args):
        #   self.write_line("test path %d: %s" %(i+1, testarg))
        lines = self.config.hook.pytest_report_collectionfinish(
            config=self.config, startdir=self.startdir, items=session.items)
        self._write_report_lines_from_hooks(lines)

    def _printcollecteditems(self, items):
        # to print out items and their parent collectors

@ -7,9 +7,9 @@ import traceback
# for transferring markers
import _pytest._code
from _pytest.config import hookimpl
from _pytest.runner import fail, skip
from _pytest.outcomes import fail, skip, xfail
from _pytest.python import transfer_markers, Class, Module, Function
from _pytest.skipping import MarkEvaluator, xfail
from _pytest.skipping import MarkEvaluator


def pytest_pycollect_makeitem(collector, name, obj):

@ -60,6 +60,11 @@ def catch_warnings_for_item(item):
        for arg in inifilters:
            _setoption(warnings, arg)

        mark = item.get_marker('filterwarnings')
        if mark:
            for arg in mark.args:
                warnings._setoption(arg)

        yield

        for warning in log:

@ -20,9 +20,11 @@ environment:
- TOXENV: "py27-pexpect"
- TOXENV: "py27-xdist"
- TOXENV: "py27-trial"
- TOXENV: "py27-numpy"
- TOXENV: "py35-pexpect"
- TOXENV: "py35-xdist"
- TOXENV: "py35-trial"
- TOXENV: "py35-numpy"
- TOXENV: "py27-nobyte"
- TOXENV: "doctesting"
- TOXENV: "freeze"
@ -1 +0,0 @@
Set ``stdin`` to a closed ``PIPE`` in ``pytester.py.Testdir.popen()`` to avoid unwanted interactive ``pdb``.

@ -1 +0,0 @@
Add missing ``encoding`` attribute to ``sys.std*`` streams when using ``capsys`` capture mode.

@ -1 +0,0 @@
Update help message for ``--strict`` to make it clear it only deals with unregistered markers, not warnings.

@ -1 +0,0 @@
Fix terminal color changing to black on Windows if ``colorama`` is imported in a ``conftest.py`` file.

@ -1 +0,0 @@
Renamed the utility function ``_pytest.compat._escape_strings`` to ``_ascii_escaped`` to better communicate the function's purpose.

@ -1 +0,0 @@
Clarify ``pytest_configure`` hook call order.

@ -1 +0,0 @@
Improve error message for CollectError with skip/skipif.

@ -1 +0,0 @@
Fix line number when reporting summary of skipped tests.

@ -1 +0,0 @@
capture: ensure that EncodedFile.name is a string.

@ -1 +0,0 @@
Emit warning about ``yield`` tests being deprecated only once per generator.

@ -1 +0,0 @@
Ensure final collected line doesn't include artifacts of previous write.

@ -1 +0,0 @@
The options ``--fixtures`` and ``--fixtures-per-test`` will now keep indentation within docstrings.

@ -1 +0,0 @@
Fixed all flake8 errors and warnings.

@ -1 +0,0 @@
Added ``fix-lint`` tox environment to run automatic pep8 fixes on the code.

@ -1 +0,0 @@
doctests line numbers are now reported correctly, fixing `pytest-sugar#122 <https://github.com/Frozenball/pytest-sugar/issues/122>`_.

@ -1 +0,0 @@
Show multiple issue links in CHANGELOG entries.

@ -1 +0,0 @@
Extend documentation for testing plugin code with the ``pytester`` plugin.

@ -6,6 +6,7 @@ Release announcements
   :maxdepth: 2


   release-3.2.0
   release-3.1.3
   release-3.1.2
   release-3.1.1

@ -0,0 +1,48 @@
pytest-3.2.0
=======================================

The pytest team is proud to announce the 3.2.0 release!

pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.

This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:

    http://doc.pytest.org/en/latest/changelog.html

For complete documentation, please visit:

    http://docs.pytest.org

As usual, you can upgrade from pypi via:

    pip install -U pytest

Thanks to all who contributed to this release, among them:

* Alex Hartoto
* Andras Tim
* Bruno Oliveira
* Daniel Hahler
* Florian Bruhin
* Floris Bruynooghe
* John Still
* Jordan Moldow
* Kale Kundert
* Lawrence Mitchell
* Llandy Riveron Del Risco
* Maik Figura
* Martin Altmayer
* Mihai Capotă
* Nathaniel Waisbrot
* Nguyễn Hồng Quân
* Pauli Virtanen
* Raphael Pierzina
* Ronny Pfannschmidt
* Segev Finer
* V.Kuznetsov


Happy testing,
The Pytest Development Team

@ -38,7 +38,7 @@ Examples at :ref:`assertraises`.
Comparing floating point numbers
--------------------------------

.. autoclass:: approx
.. autofunction:: approx

Raising a specific test outcome
--------------------------------------

@ -47,11 +47,11 @@ You can use the following functions in your test, fixture or setup
functions to force a certain test outcome. Note that most often
you can rather use declarative marks, see :ref:`skipping`.

.. autofunction:: _pytest.runner.fail
.. autofunction:: _pytest.runner.skip
.. autofunction:: _pytest.runner.importorskip
.. autofunction:: _pytest.skipping.xfail
.. autofunction:: _pytest.runner.exit
.. autofunction:: _pytest.outcomes.fail
.. autofunction:: _pytest.outcomes.skip
.. autofunction:: _pytest.outcomes.importorskip
.. autofunction:: _pytest.outcomes.xfail
.. autofunction:: _pytest.outcomes.exit

Fixtures and requests
-----------------------------------------------------

@ -108,14 +108,14 @@ You can ask for available builtin or project-custom
The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::

    monkeypatch.setattr(obj, name, value, raising=True)
    monkeypatch.delattr(obj, name, raising=True)
    monkeypatch.setitem(mapping, name, value)
    monkeypatch.delitem(obj, name, raising=True)
    monkeypatch.setenv(name, value, prepend=False)
    monkeypatch.delenv(name, value, raising=True)
    monkeypatch.syspath_prepend(path)
    monkeypatch.chdir(path)
    monkeypatch.setattr(obj, name, value, raising=True)
    monkeypatch.delattr(obj, name, raising=True)
    monkeypatch.setitem(mapping, name, value)
    monkeypatch.delitem(obj, name, raising=True)
    monkeypatch.setenv(name, value, prepend=False)
    monkeypatch.delenv(name, value, raising=True)
    monkeypatch.syspath_prepend(path)
    monkeypatch.chdir(path)

All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``

@ -1,5 +1,7 @@
.. _`cache_provider`:
.. _cache:


Cache: working with cross-testrun state
=======================================


@ -75,9 +77,9 @@ If you then run it with ``--lf``::
    $ pytest --lf
    ======= test session starts ========
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    run-last-failure: rerun last 2 failures
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 50 items
    run-last-failure: rerun previous 2 failures

    test_50.py FF


@ -117,9 +119,9 @@ of ``FF`` and dots)::
    $ pytest --ff
    ======= test session starts ========
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    run-last-failure: rerun last 2 failures first
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 50 items
    run-last-failure: rerun previous 2 failures first

    test_50.py FF................................................

@ -126,15 +126,27 @@ progress output, you can write it into a configuration file:
    # content of pytest.ini
    # (or tox.ini or setup.cfg)
    [pytest]
    addopts = -rsxX -q
    addopts = -ra -q

Alternatively, you can set a PYTEST_ADDOPTS environment variable to add command
Alternatively, you can set a ``PYTEST_ADDOPTS`` environment variable to add command
line options while the environment is in use::

    export PYTEST_ADDOPTS="-rsxX -q"
    export PYTEST_ADDOPTS="-v"

From now on, running ``pytest`` will add the specified options.
Here's how the command line is built in the presence of ``addopts`` or the environment variable::

    <pytest.ini:addopts> $PYTEST_ADDOPTS <extra command-line arguments>

So if the user executes in the command-line::

    pytest -m slow

The actual command line executed is::

    pytest -ra -q -v -m slow

Note that as usual for other command-line applications, in case of conflicting options the last one wins, so the example
above will show verbose output because ``-v`` overwrites ``-q``.


Builtin configuration file options

@ -185,7 +197,16 @@ Builtin configuration file options
    norecursedirs = .svn _build tmp*

This would tell ``pytest`` to not look into typical subversion or
sphinx-build directories or into any ``tmp`` prefixed directory.

Additionally, ``pytest`` will attempt to intelligently identify and ignore a
virtualenv by the presence of an activation script. Any directory deemed to
be the root of a virtual environment will not be considered during test
collection unless ``--collect-in-virtualenv`` is given. Note also that
``norecursedirs`` takes precedence over ``--collect-in-virtualenv``; e.g. if
you intend to run tests in a virtualenv with a base directory that matches
``'.*'`` you *must* override ``norecursedirs`` in addition to using the
``--collect-in-virtualenv`` flag.
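
For example (values illustrative only), a project that runs its tests from a
virtualenv created in ``.env`` could override the defaults like this::

    [pytest]
    norecursedirs = _build tmp*

and then invoke ``pytest --collect-in-virtualenv``.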

.. confval:: testpaths


@ -276,3 +297,14 @@ Builtin configuration file options

This tells pytest to ignore deprecation warnings and turn all other warnings
into errors. For more information please refer to :ref:`warnings`.

.. confval:: cache_dir

   .. versionadded:: 3.2

   Sets the directory where the cache plugin stores its content. The default
   directory is ``.cache``, created in the :ref:`rootdir <rootdir>`. The path
   may be relative or absolute; a relative path is created relative to the
   :ref:`rootdir <rootdir>`. The path may also contain environment variables,
   which will be expanded. For more information about the cache plugin
   please refer to :ref:`cache_provider`.
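
   A minimal example (the value is illustrative only)::

       [pytest]
       cache_dir = $HOME/.pytest-cache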

@ -494,7 +494,7 @@ then you will see two tests skipped and two executed tests as expected::

    test_plat.py s.s.
    ======= short test summary info ========
    SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
    SKIP [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux

    ======= 2 passed, 2 skipped in 0.12 seconds ========


@ -413,7 +413,7 @@ Running it results in some skips if we don't have all the python interpreters in
    . $ pytest -rs -q multipython.py
    sssssssssssssss.........sss.........sss.........
    ======= short test summary info ========
    SKIP [21] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found
    SKIP [21] $REGENDOC_TMPDIR/CWD/multipython.py:24: 'python2.6' not found
    27 passed, 21 skipped in 0.12 seconds

Indirect parametrization of optional implementations/imports

@ -467,7 +467,7 @@ If you run this with reporting for skips enabled::

    test_module.py .s
    ======= short test summary info ========
    SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2'
    SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'

    ======= 1 passed, 1 skipped in 0.12 seconds ========


@ -358,7 +358,7 @@ get on the terminal - we are working on that)::
    > int(s)
    E ValueError: invalid literal for int() with base 10: 'qwe'

    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1219>:1: ValueError
    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:570>:1: ValueError
    _______ TestRaises.test_raises_doesnt ________

    self = <failure_demo.TestRaises object at 0xdeadbeef>

@ -170,7 +170,7 @@ and when running it will see a skipped "slow" test::

    test_module.py .s
    ======= short test summary info ========
    SKIP [1] test_module.py:13: need --runslow option to run
    SKIP [1] test_module.py:14: need --runslow option to run

    ======= 1 passed, 1 skipped in 0.12 seconds ========


@ -761,6 +761,47 @@ and run it::
You'll see that the fixture finalizers could use the precise reporting
information.

``PYTEST_CURRENT_TEST`` environment variable
--------------------------------------------

.. versionadded:: 3.2

Sometimes a test session might get stuck and there might be no easy way to figure out
which test got stuck, for example if pytest was run in quiet mode (``-q``) or you don't have access to the console
output. This is particularly a problem if the problem happens only sporadically, the famous "flaky" kind of tests.

``pytest`` sets a ``PYTEST_CURRENT_TEST`` environment variable when running tests, which can be inspected
by process monitoring utilities or libraries like `psutil <https://pypi.python.org/pypi/psutil>`_ to discover which
test got stuck if necessary:

.. code-block:: python

    import psutil

    for pid in psutil.pids():
        environ = psutil.Process(pid).environ()
        if 'PYTEST_CURRENT_TEST' in environ:
            print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')

During the test session pytest will set ``PYTEST_CURRENT_TEST`` to the current test
:ref:`nodeid <nodeids>` and the current stage, which can be ``setup``, ``call``
and ``teardown``.

For example, when running a single test function named ``test_foo`` from ``foo_module.py``,
``PYTEST_CURRENT_TEST`` will be set to:

#. ``foo_module.py::test_foo (setup)``
#. ``foo_module.py::test_foo (call)``
#. ``foo_module.py::test_foo (teardown)``

In that order.

.. note::

    The contents of ``PYTEST_CURRENT_TEST`` is meant to be human readable and the actual format
    can be changed between releases (even bug fixes) so it shouldn't be relied on for scripting
    or automation.

Freezing pytest
---------------

@ -9,7 +9,8 @@ Installation and Getting Started

**dependencies**: `py <http://pypi.python.org/pypi/py>`_,
`colorama (Windows) <http://pypi.python.org/pypi/colorama>`_,
`argparse (py26) <http://pypi.python.org/pypi/argparse>`_.
`argparse (py26) <http://pypi.python.org/pypi/argparse>`_,
`ordereddict (py26) <http://pypi.python.org/pypi/ordereddict>`_.

**documentation as PDF**: `download latest <https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_


@ -195,7 +195,7 @@ list::
    $ pytest -q -rs test_strings.py
    s
    ======= short test summary info ========
    SKIP [1] test_strings.py:1: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
    SKIP [1] test_strings.py:2: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
    1 skipped in 0.12 seconds

For further examples, you might want to look at :ref:`more

@ -52,23 +52,64 @@ To stop the testing process after the first (N) failures::
Specifying tests / selecting tests
---------------------------------------------------

Several test run options::
Pytest supports several ways to run and select tests from the command-line.

    pytest test_mod.py   # run tests in module
    pytest somepath      # run all tests below somepath
    pytest -k stringexpr # only run tests with names that match the
                         # "string expression", e.g. "MyClass and not method"
                         # will select TestMyClass.test_something
                         # but not TestMyClass.test_method_simple
    pytest test_mod.py::test_func  # only run tests that match the "node ID",
                                   # e.g. "test_mod.py::test_func" will select
                                   # only test_func in test_mod.py
    pytest test_mod.py::TestClass::test_method  # run a single method in
                                                # a single class

**Run tests in a module**

Import 'pkg' and use its filesystem location to find and run tests::

::

    pytest --pyargs pkg  # run all tests found below directory of pkg
    pytest test_mod.py

**Run tests in a directory**

::

    pytest testing/

**Run tests by keyword expressions**

::

    pytest -k "MyClass and not method"

This will run tests which contain names that match the given *string expression*, which can
include Python operators that use filenames, class names and function names as variables.
The example above will run ``TestMyClass.test_something`` but not ``TestMyClass.test_method_simple``.

.. _nodeids:

**Run tests by node ids**

Each collected test is assigned a unique ``nodeid`` which consists of the module filename followed
by specifiers like class names, function names and parameters from parametrization, separated by ``::`` characters.

To run a specific test within a module::

    pytest test_mod.py::test_func


Another example specifying a test method in the command line::

    pytest test_mod.py::TestClass::test_method

**Run tests by marker expressions**

::

    pytest -m slow

Will run all tests which are decorated with the ``@pytest.mark.slow`` decorator.

For more information see :ref:`marks <mark>`.

**Run tests from packages**

::

    pytest --pyargs pkg.testing

This will import ``pkg.testing`` and use its filesystem location to find and run tests from.


Modifying Python traceback printing
----------------------------------------------

@ -78,6 +78,40 @@ Both ``-W`` command-line option and ``filterwarnings`` ini option are based on P
`-W option`_ and `warnings.simplefilter`_, so please refer to those sections in the Python
documentation for other examples and advanced usage.

``@pytest.mark.filterwarnings``
-------------------------------

.. versionadded:: 3.2

You can use the ``@pytest.mark.filterwarnings`` mark to add warning filters to specific test items,
allowing you to have finer control of which warnings should be captured at test, class or
even module level:

.. code-block:: python

    import warnings

    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1

    @pytest.mark.filterwarnings('ignore:api v1')
    def test_one():
        assert api_v1() == 1


Filters applied using a mark take precedence over filters passed on the command line or configured
by the ``filterwarnings`` ini option.

You may apply a filter to all tests of a class by using the ``filterwarnings`` mark as a class
decorator (see the sketch after the block below) or to all tests in a module by setting the
``pytestmark`` variable:

.. code-block:: python

    # turns all warnings into errors for this module
    pytestmark = pytest.mark.filterwarnings('error')
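
A class-level sketch of the same idea (class and test names illustrative only):

.. code-block:: python

    # ignore DeprecationWarning for every test in this class
    @pytest.mark.filterwarnings('ignore::DeprecationWarning')
    class TestApi(object):
        def test_v1(self):
            assert api_v1() == 1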


.. note::

    ``DeprecationWarning`` and ``PendingDeprecationWarning`` are hidden by the standard library

@ -644,6 +644,7 @@ Session related reporting hooks:
.. autofunction:: pytest_collectreport
.. autofunction:: pytest_deselected
.. autofunction:: pytest_report_header
.. autofunction:: pytest_report_collectionfinish
.. autofunction:: pytest_report_teststatus
.. autofunction:: pytest_terminal_summary
.. autofunction:: pytest_fixture_setup

@ -16,16 +16,16 @@ from _pytest.freeze_support import freeze_includes
from _pytest import __version__
from _pytest.debugging import pytestPDB as __pytestPDB
from _pytest.recwarn import warns, deprecated_call
from _pytest.runner import fail, skip, importorskip, exit
from _pytest.outcomes import fail, skip, importorskip, exit, xfail
from _pytest.mark import MARK_GEN as mark, param
from _pytest.skipping import xfail
from _pytest.main import Item, Collector, File, Session
from _pytest.fixtures import fillfixtures as _fillfuncargs
from _pytest.python import (
    raises, approx,
    Module, Class, Instance, Function, Generator,
)

from _pytest.python_api import approx, raises

set_trace = __pytestPDB.set_trace

__all__ = [
setup.py

@ -46,11 +46,12 @@ def main():
    install_requires = ['py>=1.4.33', 'setuptools']  # pluggy is vendored in _pytest.vendored_packages
    extras_require = {}
    if has_environment_marker_support():
        extras_require[':python_version=="2.6"'] = ['argparse']
        extras_require[':python_version=="2.6"'] = ['argparse', 'ordereddict']
        extras_require[':sys_platform=="win32"'] = ['colorama']
    else:
        if sys.version_info < (2, 7):
            install_requires.append('argparse')
            install_requires.append('ordereddict')
        if sys.platform == 'win32':
            install_requires.append('colorama')

@ -1095,6 +1095,36 @@ raise ValueError()
        assert line.endswith('mod.py')
        assert tw.lines[47] == ":15: AttributeError"

    @pytest.mark.skipif("sys.version_info[0] < 3")
    def test_exc_repr_with_raise_from_none_chain_suppression(self, importasmod):
        mod = importasmod("""
            def f():
                try:
                    g()
                except Exception:
                    raise AttributeError() from None
            def g():
                raise ValueError()
        """)
        excinfo = pytest.raises(AttributeError, mod.f)
        r = excinfo.getrepr(style="long")
        tw = TWMock()
        r.toterminal(tw)
        for line in tw.lines:
            print(line)
        assert tw.lines[0] == ""
        assert tw.lines[1] == " def f():"
        assert tw.lines[2] == " try:"
        assert tw.lines[3] == " g()"
        assert tw.lines[4] == " except Exception:"
        assert tw.lines[5] == "> raise AttributeError() from None"
        assert tw.lines[6] == "E AttributeError"
        assert tw.lines[7] == ""
        line = tw.get_write_msg(8)
        assert line.endswith('mod.py')
        assert tw.lines[9] == ":6: AttributeError"
        assert len(tw.lines) == 10

    @pytest.mark.skipif("sys.version_info[0] < 3")
    @pytest.mark.parametrize('reason, description', [
        ('cause', 'The above exception was the direct cause of the following exception:'),
@ -1,4 +1,5 @@
# encoding: utf-8
import operator
import sys
import pytest
import doctest

@ -29,13 +30,21 @@ class TestApprox(object):
        if sys.version_info[:2] == (2, 6):
            tol1, tol2, infr = '???', '???', '???'
        assert repr(approx(1.0)) == '1.0 {pm} {tol1}'.format(pm=plus_minus, tol1=tol1)
        assert repr(approx([1.0, 2.0])) == '1.0 {pm} {tol1}, 2.0 {pm} {tol2}'.format(
        assert repr(approx([1.0, 2.0])) == 'approx([1.0 {pm} {tol1}, 2.0 {pm} {tol2}])'.format(
            pm=plus_minus, tol1=tol1, tol2=tol2)
        assert repr(approx((1.0, 2.0))) == 'approx((1.0 {pm} {tol1}, 2.0 {pm} {tol2}))'.format(
            pm=plus_minus, tol1=tol1, tol2=tol2)
        assert repr(approx(inf)) == 'inf'
        assert repr(approx(1.0, rel=nan)) == '1.0 {pm} ???'.format(pm=plus_minus)
        assert repr(approx(1.0, rel=inf)) == '1.0 {pm} {infr}'.format(pm=plus_minus, infr=infr)
        assert repr(approx(1.0j, rel=inf)) == '1j'

        # Dictionaries aren't ordered, so we need to check both orders.
        assert repr(approx({'a': 1.0, 'b': 2.0})) in (
            "approx({{'a': 1.0 {pm} {tol1}, 'b': 2.0 {pm} {tol2}}})".format(pm=plus_minus, tol1=tol1, tol2=tol2),
            "approx({{'b': 2.0 {pm} {tol2}, 'a': 1.0 {pm} {tol1}}})".format(pm=plus_minus, tol1=tol1, tol2=tol2),
        )

    def test_operator_overloading(self):
        assert 1 == approx(1, rel=1e-6, abs=1e-12)
        assert not (1 != approx(1, rel=1e-6, abs=1e-12))

@ -213,34 +222,51 @@ class TestApprox(object):

    def test_expecting_nan(self):
        examples = [
            (nan, nan),
            (-nan, -nan),
            (nan, -nan),
            (0.0, nan),
            (inf, nan),
            (eq, nan, nan),
            (eq, -nan, -nan),
            (eq, nan, -nan),
            (ne, 0.0, nan),
            (ne, inf, nan),
        ]
        for a, x in examples:
            # If there is a relative tolerance and the expected value is NaN,
            # the actual tolerance is a NaN, which should be an error.
            with pytest.raises(ValueError):
                a != approx(x, rel=inf)
        for op, a, x in examples:
            # Nothing is equal to NaN by default.
            assert a != approx(x)

            # You can make comparisons against NaN by not specifying a relative
            # tolerance, so only an absolute tolerance is calculated.
            assert a != approx(x, abs=inf)
            # If ``nan_ok=True``, then NaN is equal to NaN.
            assert op(a, approx(x, nan_ok=True))

    def test_expecting_sequence(self):
        within_1e8 = [
            (1e8 + 1e0, 1e8),
            (1e0 + 1e-8, 1e0),
            (1e-8 + 1e-16, 1e-8),
    def test_int(self):
        within_1e6 = [
            (1000001, 1000000),
            (-1000001, -1000000),
        ]
        actual, expected = zip(*within_1e8)
        assert actual == approx(expected, rel=5e-8, abs=0.0)
        for a, x in within_1e6:
            assert a == approx(x, rel=5e-6, abs=0)
            assert a != approx(x, rel=5e-7, abs=0)
            assert approx(x, rel=5e-6, abs=0) == a
            assert approx(x, rel=5e-7, abs=0) != a

    def test_expecting_sequence_wrong_len(self):
        assert [1, 2] != approx([1])
        assert [1, 2] != approx([1, 2, 3])
    def test_decimal(self):
        within_1e6 = [
            (Decimal('1.000001'), Decimal('1.0')),
            (Decimal('-1.000001'), Decimal('-1.0')),
        ]
        for a, x in within_1e6:
            assert a == approx(x, rel=Decimal('5e-6'), abs=0)
            assert a != approx(x, rel=Decimal('5e-7'), abs=0)
            assert approx(x, rel=Decimal('5e-6'), abs=0) == a
            assert approx(x, rel=Decimal('5e-7'), abs=0) != a

    def test_fraction(self):
        within_1e6 = [
            (1 + Fraction(1, 1000000), Fraction(1)),
            (-1 - Fraction(-1, 1000000), Fraction(-1)),
        ]
        for a, x in within_1e6:
            assert a == approx(x, rel=5e-6, abs=0)
            assert a != approx(x, rel=5e-7, abs=0)
            assert approx(x, rel=5e-6, abs=0) == a
            assert approx(x, rel=5e-7, abs=0) != a

    def test_complex(self):
        within_1e6 = [

@ -252,33 +278,80 @@ class TestApprox(object):
        for a, x in within_1e6:
            assert a == approx(x, rel=5e-6, abs=0)
            assert a != approx(x, rel=5e-7, abs=0)
            assert approx(x, rel=5e-6, abs=0) == a
            assert approx(x, rel=5e-7, abs=0) != a

    def test_int(self):
        within_1e6 = [
            (1000001, 1000000),
            (-1000001, -1000000),
        ]
        for a, x in within_1e6:
            assert a == approx(x, rel=5e-6, abs=0)
            assert a != approx(x, rel=5e-7, abs=0)
    def test_list(self):
        actual = [1 + 1e-7, 2 + 1e-8]
        expected = [1, 2]

    def test_decimal(self):
        within_1e6 = [
            (Decimal('1.000001'), Decimal('1.0')),
            (Decimal('-1.000001'), Decimal('-1.0')),
        ]
        for a, x in within_1e6:
            assert a == approx(x, rel=Decimal('5e-6'), abs=0)
            assert a != approx(x, rel=Decimal('5e-7'), abs=0)
        # Return false if any element is outside the tolerance.
        assert actual == approx(expected, rel=5e-7, abs=0)
        assert actual != approx(expected, rel=5e-8, abs=0)
        assert approx(expected, rel=5e-7, abs=0) == actual
        assert approx(expected, rel=5e-8, abs=0) != actual

    def test_fraction(self):
        within_1e6 = [
            (1 + Fraction(1, 1000000), Fraction(1)),
            (-1 - Fraction(-1, 1000000), Fraction(-1)),
        ]
        for a, x in within_1e6:
            assert a == approx(x, rel=5e-6, abs=0)
            assert a != approx(x, rel=5e-7, abs=0)
    def test_list_wrong_len(self):
        assert [1, 2] != approx([1])
        assert [1, 2] != approx([1, 2, 3])

    def test_tuple(self):
        actual = (1 + 1e-7, 2 + 1e-8)
        expected = (1, 2)

        # Return false if any element is outside the tolerance.
        assert actual == approx(expected, rel=5e-7, abs=0)
        assert actual != approx(expected, rel=5e-8, abs=0)
        assert approx(expected, rel=5e-7, abs=0) == actual
        assert approx(expected, rel=5e-8, abs=0) != actual

    def test_tuple_wrong_len(self):
        assert (1, 2) != approx((1,))
        assert (1, 2) != approx((1, 2, 3))

    def test_dict(self):
        actual = {'a': 1 + 1e-7, 'b': 2 + 1e-8}
        # Dictionaries became ordered in python3.6, so switch up the order here
        # to make sure it doesn't matter.
        expected = {'b': 2, 'a': 1}

        # Return false if any element is outside the tolerance.
        assert actual == approx(expected, rel=5e-7, abs=0)
        assert actual != approx(expected, rel=5e-8, abs=0)
        assert approx(expected, rel=5e-7, abs=0) == actual
        assert approx(expected, rel=5e-8, abs=0) != actual

    def test_dict_wrong_len(self):
        assert {'a': 1, 'b': 2} != approx({'a': 1})
        assert {'a': 1, 'b': 2} != approx({'a': 1, 'c': 2})
        assert {'a': 1, 'b': 2} != approx({'a': 1, 'b': 2, 'c': 3})

    def test_numpy_array(self):
        np = pytest.importorskip('numpy')
actual = np.array([1 + 1e-7, 2 + 1e-8])
|
||||
expected = np.array([1, 2])
|
||||
|
||||
# Return false if any element is outside the tolerance.
|
||||
assert actual == approx(expected, rel=5e-7, abs=0)
|
||||
assert actual != approx(expected, rel=5e-8, abs=0)
|
||||
assert approx(expected, rel=5e-7, abs=0) == expected
|
||||
assert approx(expected, rel=5e-8, abs=0) != actual
|
||||
|
||||
# Should be able to compare lists with numpy arrays.
|
||||
assert list(actual) == approx(expected, rel=5e-7, abs=0)
|
||||
assert list(actual) != approx(expected, rel=5e-8, abs=0)
|
||||
assert actual == approx(list(expected), rel=5e-7, abs=0)
|
||||
assert actual != approx(list(expected), rel=5e-8, abs=0)
|
||||
|
||||
def test_numpy_array_wrong_shape(self):
|
||||
np = pytest.importorskip('numpy')
|
||||
|
||||
a12 = np.array([[1, 2]])
|
||||
a21 = np.array([[1], [2]])
|
||||
|
||||
assert a12 != approx(a21)
|
||||
assert a21 != approx(a12)
|
||||
|
||||
def test_doctests(self):
|
||||
parser = doctest.DocTestParser()
|
||||
|
@ -310,3 +383,16 @@ class TestApprox(object):
|
|||
'*At index 0 diff: 3 != 4 * {0}'.format(expected),
|
||||
'=* 1 failed in *=',
|
||||
])
|
||||
|
||||
@pytest.mark.parametrize('op', [
|
||||
pytest.param(operator.le, id='<='),
|
||||
pytest.param(operator.lt, id='<'),
|
||||
pytest.param(operator.ge, id='>='),
|
||||
pytest.param(operator.gt, id='>'),
|
||||
])
|
||||
def test_comparison_operator_type_error(self, op):
|
||||
"""
|
||||
pytest.approx should raise TypeError for operators other than == and != (#2003).
|
||||
"""
|
||||
with pytest.raises(TypeError):
|
||||
op(1, approx(1, rel=1e-6, abs=1e-12))
|
||||
|
|
|
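The hunks above add coverage for the new numpy-array and dict support in ``pytest.approx``. A minimal usage sketch (not part of the diff; assumes numpy is installed):

    import numpy as np
    from pytest import approx

    def test_approx_numpy_and_dict_sketch():
        measured = np.array([0.1 + 0.2, 1.0 + 1e-7])
        # arrays compare elementwise against the expected array
        assert measured == approx(np.array([0.3, 1.0]))
        # dicts compare value-by-value under the same tolerances
        assert {'x': 0.1 + 0.2} == approx({'x': 0.3})
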
@ -12,6 +12,9 @@ from _pytest.main import (
)


ignore_parametrized_marks = pytest.mark.filterwarnings('ignore:Applying marks directly to parameters')


class TestModule(object):
    def test_failing_import(self, testdir):
        modcol = testdir.getmodulecol("import alksdjalskdjalkjals")

@ -143,6 +146,26 @@ class TestClass(object):
            "*collected 0*",
        ])

    def test_static_method(self, testdir):
        testdir.getmodulecol("""
            class Test(object):
                @staticmethod
                def test_something():
                    pass
        """)
        result = testdir.runpytest()
        if sys.version_info < (2, 7):
            # in 2.6, the code to handle static methods doesn't work
            result.stdout.fnmatch_lines([
                "*collected 0 items*",
                "*cannot collect static method*",
            ])
        else:
            result.stdout.fnmatch_lines([
                "*collected 1 item*",
                "*1 passed in*",
            ])

    def test_setup_teardown_class_as_classmethod(self, testdir):
        testdir.makepyfile(test_mod1="""
            class TestClassMethod(object):

@ -547,7 +570,8 @@ class TestFunction(object):
        rec = testdir.inline_run()
        rec.assertoutcome(passed=1)

    def test_parametrize_with_mark(selfself, testdir):
    @ignore_parametrized_marks
    def test_parametrize_with_mark(self, testdir):
        items = testdir.getitems("""
            import pytest
            @pytest.mark.foo

@ -620,6 +644,7 @@ class TestFunction(object):
        assert colitems[2].name == 'test2[a-c]'
        assert colitems[3].name == 'test2[b-c]'

    @ignore_parametrized_marks
    def test_parametrize_skipif(self, testdir):
        testdir.makepyfile("""
            import pytest

@ -633,6 +658,7 @@ class TestFunction(object):
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('* 2 passed, 1 skipped in *')

    @ignore_parametrized_marks
    def test_parametrize_skip(self, testdir):
        testdir.makepyfile("""
            import pytest

@ -646,6 +672,7 @@ class TestFunction(object):
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('* 2 passed, 1 skipped in *')

    @ignore_parametrized_marks
    def test_parametrize_skipif_no_skip(self, testdir):
        testdir.makepyfile("""
            import pytest

@ -659,6 +686,7 @@ class TestFunction(object):
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('* 1 failed, 2 passed in *')

    @ignore_parametrized_marks
    def test_parametrize_xfail(self, testdir):
        testdir.makepyfile("""
            import pytest

@ -672,6 +700,7 @@ class TestFunction(object):
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('* 2 passed, 1 xfailed in *')

    @ignore_parametrized_marks
    def test_parametrize_passed(self, testdir):
        testdir.makepyfile("""
            import pytest

@ -685,6 +714,7 @@ class TestFunction(object):
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('* 2 passed, 1 xpassed in *')

    @ignore_parametrized_marks
    def test_parametrize_xfail_passed(self, testdir):
        testdir.makepyfile("""
            import pytest
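Most hunks in this section only add the ``ignore_parametrized_marks`` filter, because applying marks directly to parametrize values now triggers a deprecation warning. A sketch of the replacement the warning points to (illustrative values):

    import pytest

    @pytest.mark.parametrize(('n', 'expected'), [
        (1, 2),
        # instead of marking the raw value tuple, wrap it in pytest.param
        pytest.param(1, 3, marks=pytest.mark.xfail),
    ])
    def test_increment(n, expected):
        assert n + 1 == expected
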
@ -2547,6 +2547,39 @@ class TestFixtureMarker(object):
            '*test_foo*alpha*',
            '*test_foo*beta*'])

    @pytest.mark.issue920
    def test_deterministic_fixture_collection(self, testdir, monkeypatch):
        testdir.makepyfile("""
            import pytest

            @pytest.fixture(scope="module",
                            params=["A",
                                    "B",
                                    "C"])
            def A(request):
                return request.param

            @pytest.fixture(scope="module",
                            params=["DDDDDDDDD", "EEEEEEEEEEEE", "FFFFFFFFFFF", "banansda"])
            def B(request, A):
                return request.param

            def test_foo(B):
                # Something funky is going on here.
                # Despite specified seeds, on what is collected,
                # sometimes we get unexpected passes. hashing B seems
                # to help?
                assert hash(B) or True
        """)
        monkeypatch.setenv("PYTHONHASHSEED", "1")
        out1 = testdir.runpytest_subprocess("-v")
        monkeypatch.setenv("PYTHONHASHSEED", "2")
        out2 = testdir.runpytest_subprocess("-v")
        out1 = [line for line in out1.outlines if line.startswith("test_deterministic_fixture_collection.py::test_foo")]
        out2 = [line for line in out2.outlines if line.startswith("test_deterministic_fixture_collection.py::test_foo")]
        assert len(out1) == 12
        assert out1 == out2


class TestRequestScopeAccess(object):
    pytestmark = pytest.mark.parametrize(("scope", "ok", "error"), [
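The regression test above pins collection order across hash seeds. For context, a standalone illustration (not from the diff) of how PYTHONHASHSEED perturbs set iteration order, which is the kind of nondeterminism that can leak into parametrized collection:

    import os
    import subprocess
    import sys

    SNIPPET = "print(list({'DDDDDDDDD', 'EEEEEEEEEEEE', 'FFFFFFFFFFF'}))"

    for seed in ('1', '2'):
        # string hashing differs per seed, so the printed order can change
        out = subprocess.check_output(
            [sys.executable, '-c', SNIPPET],
            env=dict(os.environ, PYTHONHASHSEED=seed),
        )
        print(seed, out.decode().strip())
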
@ -1289,8 +1289,9 @@ class TestMetafuncFunctionalAuto(object):
        assert output.count('preparing foo-3') == 1


@pytest.mark.filterwarnings('ignore:Applying marks directly to parameters')
@pytest.mark.issue308
class TestMarkersWithParametrization(object):
    pytestmark = pytest.mark.issue308

    def test_simple_mark(self, testdir):
        s = """
@ -187,7 +187,7 @@ def test_dynamic_fixture_request(testdir):
            pass
        @pytest.fixture()
        def dependent_fixture(request):
            request.getfuncargvalue('dynamically_requested_fixture')
            request.getfixturevalue('dynamically_requested_fixture')
        def test_dyn(dependent_fixture):
            pass
    ''')
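This hunk is the rename from the deprecated ``request.getfuncargvalue`` to ``request.getfixturevalue``. A minimal self-contained sketch of the surviving API:

    import pytest

    @pytest.fixture
    def base():
        return 41

    @pytest.fixture
    def dependent(request):
        # pull another fixture by name at runtime
        return request.getfixturevalue('base') + 1

    def test_dependent(dependent):
        assert dependent == 42
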
@ -1,6 +1,6 @@
from __future__ import absolute_import, division, print_function
import sys

import py
import _pytest
import pytest
import os

@ -88,6 +88,37 @@ class TestNewAPI(object):
        assert result.ret == 0
        result.stdout.fnmatch_lines(["*1 passed*"])

    def test_custom_rel_cache_dir(self, testdir):
        rel_cache_dir = os.path.join('custom_cache_dir', 'subdir')
        testdir.makeini("""
            [pytest]
            cache_dir = {cache_dir}
        """.format(cache_dir=rel_cache_dir))
        testdir.makepyfile(test_errored='def test_error():\n    assert False')
        testdir.runpytest()
        assert testdir.tmpdir.join(rel_cache_dir).isdir()

    def test_custom_abs_cache_dir(self, testdir, tmpdir_factory):
        tmp = str(tmpdir_factory.mktemp('tmp'))
        abs_cache_dir = os.path.join(tmp, 'custom_cache_dir')
        testdir.makeini("""
            [pytest]
            cache_dir = {cache_dir}
        """.format(cache_dir=abs_cache_dir))
        testdir.makepyfile(test_errored='def test_error():\n    assert False')
        testdir.runpytest()
        assert py.path.local(abs_cache_dir).isdir()

    def test_custom_cache_dir_with_env_var(self, testdir, monkeypatch):
        monkeypatch.setenv('env_var', 'custom_cache_dir')
        testdir.makeini("""
            [pytest]
            cache_dir = {cache_dir}
        """.format(cache_dir='$env_var'))
        testdir.makepyfile(test_errored='def test_error():\n    assert False')
        testdir.runpytest()
        assert testdir.tmpdir.join('custom_cache_dir').isdir()


def test_cache_reportheader(testdir):
    testdir.makepyfile("""
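The three tests above cover relative paths, absolute paths, and environment-variable expansion for the new ``cache_dir`` option. An illustrative ini snippet (the path itself is hypothetical):

    [pytest]
    cache_dir = $TMPDIR/custom_pytest_cache
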
@ -309,6 +340,73 @@ class TestLastFailed(object):
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('*1 failed in*')

    def test_terminal_report_lastfailed(self, testdir):
        test_a = testdir.makepyfile(test_a="""
            def test_a1():
                pass
            def test_a2():
                pass
        """)
        test_b = testdir.makepyfile(test_b="""
            def test_b1():
                assert 0
            def test_b2():
                assert 0
        """)
        result = testdir.runpytest()
        result.stdout.fnmatch_lines([
            'collected 4 items',
            '*2 failed, 2 passed in*',
        ])

        result = testdir.runpytest('--lf')
        result.stdout.fnmatch_lines([
            'collected 4 items',
            'run-last-failure: rerun previous 2 failures',
            '*2 failed, 2 deselected in*',
        ])

        result = testdir.runpytest(test_a, '--lf')
        result.stdout.fnmatch_lines([
            'collected 2 items',
            'run-last-failure: run all (no recorded failures)',
            '*2 passed in*',
        ])

        result = testdir.runpytest(test_b, '--lf')
        result.stdout.fnmatch_lines([
            'collected 2 items',
            'run-last-failure: rerun previous 2 failures',
            '*2 failed in*',
        ])

        result = testdir.runpytest('test_b.py::test_b1', '--lf')
        result.stdout.fnmatch_lines([
            'collected 1 item',
            'run-last-failure: rerun previous 1 failure',
            '*1 failed in*',
        ])

    def test_terminal_report_failedfirst(self, testdir):
        testdir.makepyfile(test_a="""
            def test_a1():
                assert 0
            def test_a2():
                pass
        """)
        result = testdir.runpytest()
        result.stdout.fnmatch_lines([
            'collected 2 items',
            '*1 failed, 1 passed in*',
        ])

        result = testdir.runpytest('--ff')
        result.stdout.fnmatch_lines([
            'collected 2 items',
            'run-last-failure: rerun previous 1 failure first',
            '*1 failed, 1 passed in*',
        ])

    def test_lastfailed_collectfailure(self, testdir, monkeypatch):

        testdir.makepyfile(test_maybe="""
@ -406,3 +504,102 @@ class TestLastFailed(object):
        testdir.makepyfile(test_errored='def test_error():\n    assert False')
        testdir.runpytest('-q', '--lf')
        assert os.path.exists('.cache')

    def test_xfail_not_considered_failure(self, testdir):
        testdir.makepyfile('''
            import pytest
            @pytest.mark.xfail
            def test():
                assert 0
        ''')
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('*1 xfailed*')
        assert self.get_cached_last_failed(testdir) == []

    def test_xfail_strict_considered_failure(self, testdir):
        testdir.makepyfile('''
            import pytest
            @pytest.mark.xfail(strict=True)
            def test():
                pass
        ''')
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('*1 failed*')
        assert self.get_cached_last_failed(testdir) == ['test_xfail_strict_considered_failure.py::test']

    @pytest.mark.parametrize('mark', ['mark.xfail', 'mark.skip'])
    def test_failed_changed_to_xfail_or_skip(self, testdir, mark):
        testdir.makepyfile('''
            import pytest
            def test():
                assert 0
        ''')
        result = testdir.runpytest()
        assert self.get_cached_last_failed(testdir) == ['test_failed_changed_to_xfail_or_skip.py::test']
        assert result.ret == 1

        testdir.makepyfile('''
            import pytest
            @pytest.{mark}
            def test():
                assert 0
        '''.format(mark=mark))
        result = testdir.runpytest()
        assert result.ret == 0
        assert self.get_cached_last_failed(testdir) == []
        assert result.ret == 0

    def get_cached_last_failed(self, testdir):
        config = testdir.parseconfigure()
        return sorted(config.cache.get("cache/lastfailed", {}))

    def test_cache_cumulative(self, testdir):
        """
        Test workflow where user fixes errors gradually file by file using --lf.
        """
        # 1. initial run
        test_bar = testdir.makepyfile(test_bar="""
            def test_bar_1():
                pass
            def test_bar_2():
                assert 0
        """)
        test_foo = testdir.makepyfile(test_foo="""
            def test_foo_3():
                pass
            def test_foo_4():
                assert 0
        """)
        testdir.runpytest()
        assert self.get_cached_last_failed(testdir) == ['test_bar.py::test_bar_2', 'test_foo.py::test_foo_4']

        # 2. fix test_bar_2, run only test_bar.py
        testdir.makepyfile(test_bar="""
            def test_bar_1():
                pass
            def test_bar_2():
                pass
        """)
        result = testdir.runpytest(test_bar)
        result.stdout.fnmatch_lines('*2 passed*')
        # ensure cache does not forget that test_foo_4 failed once before
        assert self.get_cached_last_failed(testdir) == ['test_foo.py::test_foo_4']

        result = testdir.runpytest('--last-failed')
        result.stdout.fnmatch_lines('*1 failed, 3 deselected*')
        assert self.get_cached_last_failed(testdir) == ['test_foo.py::test_foo_4']

        # 3. fix test_foo_4, run only test_foo.py
        test_foo = testdir.makepyfile(test_foo="""
            def test_foo_3():
                pass
            def test_foo_4():
                pass
        """)
        result = testdir.runpytest(test_foo, '--last-failed')
        result.stdout.fnmatch_lines('*1 passed, 1 deselected*')
        assert self.get_cached_last_failed(testdir) == []

        result = testdir.runpytest('--last-failed')
        result.stdout.fnmatch_lines('*4 passed*')
        assert self.get_cached_last_failed(testdir) == []
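The ``get_cached_last_failed`` helper above shows where the --lf state lives. A sketch of reading it outside this test suite, assuming a pytest ``config`` object is at hand:

    def cached_last_failed(config):
        # "cache/lastfailed" maps failed node ids to True; sorted for stable output
        return sorted(config.cache.get("cache/lastfailed", {}))
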
@ -2,7 +2,7 @@ from __future__ import absolute_import, division, print_function
import pytest
import py

from _pytest.main import Session, EXIT_NOTESTSCOLLECTED
from _pytest.main import Session, EXIT_NOTESTSCOLLECTED, _in_venv


class TestCollector(object):

@ -123,6 +123,53 @@ class TestCollectFS(object):
        assert "test_notfound" not in s
        assert "test_found" in s

    @pytest.mark.parametrize('fname',
                             ("activate", "activate.csh", "activate.fish",
                              "Activate", "Activate.bat", "Activate.ps1"))
    def test_ignored_virtualenvs(self, testdir, fname):
        bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
        testdir.tmpdir.ensure("virtual", bindir, fname)
        testfile = testdir.tmpdir.ensure("virtual", "test_invenv.py")
        testfile.write("def test_hello(): pass")

        # by default, ignore tests inside a virtualenv
        result = testdir.runpytest()
        assert "test_invenv" not in result.stdout.str()
        # allow test collection if user insists
        result = testdir.runpytest("--collect-in-virtualenv")
        assert "test_invenv" in result.stdout.str()
        # allow test collection if user directly passes in the directory
        result = testdir.runpytest("virtual")
        assert "test_invenv" in result.stdout.str()

    @pytest.mark.parametrize('fname',
                             ("activate", "activate.csh", "activate.fish",
                              "Activate", "Activate.bat", "Activate.ps1"))
    def test_ignored_virtualenvs_norecursedirs_precedence(self, testdir, fname):
        bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
        # norecursedirs takes priority
        testdir.tmpdir.ensure(".virtual", bindir, fname)
        testfile = testdir.tmpdir.ensure(".virtual", "test_invenv.py")
        testfile.write("def test_hello(): pass")
        result = testdir.runpytest("--collect-in-virtualenv")
        assert "test_invenv" not in result.stdout.str()
        # ...unless the virtualenv is explicitly given on the CLI
        result = testdir.runpytest("--collect-in-virtualenv", ".virtual")
        assert "test_invenv" in result.stdout.str()

    @pytest.mark.parametrize('fname',
                             ("activate", "activate.csh", "activate.fish",
                              "Activate", "Activate.bat", "Activate.ps1"))
    def test__in_venv(self, testdir, fname):
        """Directly test the virtual env detection function"""
        bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
        # no bin/activate, not a virtualenv
        base_path = testdir.tmpdir.mkdir('venv')
        assert _in_venv(base_path) is False
        # with bin/activate, totally a virtualenv
        base_path.ensure(bindir, fname)
        assert _in_venv(base_path) is True

    def test_custom_norecursedirs(self, testdir):
        testdir.makeini("""
            [pytest]
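test__in_venv pins the detection rule: a directory with an activate script under bin/ (Scripts/ on Windows) is treated as a virtualenv. A rough re-implementation sketch of that rule (not the actual ``_in_venv`` source):

    import os.path
    import sys

    ACTIVATES = ("activate", "activate.csh", "activate.fish",
                 "Activate", "Activate.bat", "Activate.ps1")

    def looks_like_venv(path):
        # any activate script in the platform's bin directory marks a virtualenv
        bindir = "Scripts" if sys.platform.startswith("win") else "bin"
        return any(os.path.isfile(os.path.join(path, bindir, name))
                   for name in ACTIVATES)
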
@ -3,7 +3,7 @@ import os
import sys

import pytest
from _pytest.mark import MarkGenerator as Mark, ParameterSet
from _pytest.mark import MarkGenerator as Mark, ParameterSet, transfer_markers


class TestMark(object):

@ -22,6 +22,19 @@ class TestMark(object):
        mark = Mark()
        pytest.raises((AttributeError, TypeError), mark)

    def test_mark_with_param(self):
        def some_function(abc):
            pass

        class SomeClass(object):
            pass

        assert pytest.mark.fun(some_function) is some_function
        assert pytest.mark.fun.with_args(some_function) is not some_function

        assert pytest.mark.fun(SomeClass) is SomeClass
        assert pytest.mark.fun.with_args(SomeClass) is not SomeClass

    def test_pytest_mark_name_starts_with_underscore(self):
        mark = Mark()
        pytest.raises(AttributeError, getattr, mark, '_some_name')

@ -774,6 +787,28 @@ class TestKeywordSelection(object):
                  marks=[pytest.mark.xfail, pytest.mark.skip], id=None)),

])
@pytest.mark.filterwarnings('ignore')
def test_parameterset_extractfrom(argval, expected):
    extracted = ParameterSet.extract_from(argval)
    assert extracted == expected


def test_legacy_transfer():

    class FakeModule(object):
        pytestmark = []

    class FakeClass(object):
        pytestmark = pytest.mark.nofun

    @pytest.mark.fun
    def fake_method(self):
        pass

    transfer_markers(fake_method, FakeClass, FakeModule)

    # legacy marks transfer smeared
    assert fake_method.nofun
    assert fake_method.fun
    # pristine marks dont transfer
    assert fake_method.pytestmark == [pytest.mark.fun.mark]
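test_mark_with_param exercises the new ``with_args``: calling a mark with a single function or class *applies* the mark, while ``with_args`` stores the object as the mark's first positional argument. A condensed sketch of the distinction:

    import pytest

    def target():
        pass

    # decorator semantics: returns target itself, now marked
    assert pytest.mark.fun(target) is target
    # argument semantics: returns a new mark carrying target as an argument
    assert pytest.mark.fun.with_args(target) is not target
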
@ -2,7 +2,6 @@ from __future__ import absolute_import, division, print_function
import warnings
import re
import py
import sys

import pytest
from _pytest.recwarn import WarningsRecorder

@ -125,6 +124,7 @@ class TestDeprecatedCall(object):
    @pytest.mark.parametrize('warning_type', [PendingDeprecationWarning, DeprecationWarning])
    @pytest.mark.parametrize('mode', ['context_manager', 'call'])
    @pytest.mark.parametrize('call_f_first', [True, False])
    @pytest.mark.filterwarnings('ignore')
    def test_deprecated_call_modes(self, warning_type, mode, call_f_first):
        """Ensure deprecated_call() captures a deprecation warning as expected inside its
        block/function.

@ -170,32 +170,6 @@ class TestDeprecatedCall(object):
        with pytest.deprecated_call():
            f()

    def test_deprecated_function_already_called(self, testdir):
        """deprecated_call should be able to catch a call to a deprecated
        function even if that function has already been called in the same
        module. See #1190.
        """
        testdir.makepyfile("""
            import warnings
            import pytest

            def deprecated_function():
                warnings.warn("deprecated", DeprecationWarning)

            def test_one():
                deprecated_function()

            def test_two():
                pytest.deprecated_call(deprecated_function)
        """)
        result = testdir.runpytest()
        # for some reason in py26 catch_warnings manages to catch the deprecation warning
        # from deprecated_function(), even with default filters active (which ignore deprecation
        # warnings)
        py26 = sys.version_info[:2] == (2, 6)
        expected = '*=== 2 passed in *===' if not py26 else '*=== 2 passed, 1 warnings in *==='
        result.stdout.fnmatch_lines(expected)


class TestWarns(object):
    def test_strings(self):
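For reference, the two supported forms of ``pytest.deprecated_call`` exercised by these tests, as a minimal standalone sketch:

    import warnings
    import pytest

    def deprecated_function():
        warnings.warn("deprecated", DeprecationWarning)

    def test_catches_deprecation():
        # context-manager form; pytest.deprecated_call(deprecated_function)
        # is the equivalent call form
        with pytest.deprecated_call():
            deprecated_function()
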
@ -6,7 +6,7 @@ import os
import py
import pytest
import sys
from _pytest import runner, main
from _pytest import runner, main, outcomes


class TestSetupState(object):

@ -449,10 +449,18 @@ def test_runtest_in_module_ordering(testdir):


def test_outcomeexception_exceptionattributes():
    outcome = runner.OutcomeException('test')
    outcome = outcomes.OutcomeException('test')
    assert outcome.args[0] == outcome.msg


def test_outcomeexception_passes_except_Exception():
    with pytest.raises(outcomes.OutcomeException):
        try:
            raise outcomes.OutcomeException('test')
        except Exception:
            pass


def test_pytest_exit():
    try:
        pytest.exit("hello")

@ -701,6 +709,8 @@ def test_store_except_info_on_eror():
    """
    # Simulate item that raises a specific exception
    class ItemThatRaises(object):
        nodeid = 'item_that_raises'

        def runtest(self):
            raise IndexError('TEST')
    try:

@ -713,6 +723,31 @@ def test_store_except_info_on_eror():
    assert sys.last_traceback


def test_current_test_env_var(testdir, monkeypatch):
    pytest_current_test_vars = []
    monkeypatch.setattr(sys, 'pytest_current_test_vars', pytest_current_test_vars, raising=False)
    testdir.makepyfile('''
        import pytest
        import sys
        import os

        @pytest.fixture
        def fix():
            sys.pytest_current_test_vars.append(('setup', os.environ['PYTEST_CURRENT_TEST']))
            yield
            sys.pytest_current_test_vars.append(('teardown', os.environ['PYTEST_CURRENT_TEST']))

        def test(fix):
            sys.pytest_current_test_vars.append(('call', os.environ['PYTEST_CURRENT_TEST']))
    ''')
    result = testdir.runpytest_inprocess()
    assert result.ret == 0
    test_id = 'test_current_test_env_var.py::test'
    assert pytest_current_test_vars == [
        ('setup', test_id + ' (setup)'), ('call', test_id + ' (call)'), ('teardown', test_id + ' (teardown)')]
    assert 'PYTEST_CURRENT_TEST' not in os.environ


class TestReportContents(object):
    """
    Test user-level API of ``TestReport`` objects.
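test_current_test_env_var verifies that PYTEST_CURRENT_TEST carries the node id plus the current phase and is removed again after the run. A minimal consumer sketch:

    import os

    def test_reads_current_test_id():
        current = os.environ['PYTEST_CURRENT_TEST']
        # during the call phase the value ends with ' (call)'
        assert current.endswith('(call)')
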
@ -544,6 +544,23 @@ class TestTerminalFunctional(object):
        assert "===" not in s
        assert "passed" not in s

    def test_report_collectionfinish_hook(self, testdir):
        testdir.makeconftest("""
            def pytest_report_collectionfinish(config, startdir, items):
                return ['hello from hook: {0} items'.format(len(items))]
        """)
        testdir.makepyfile("""
            import pytest
            @pytest.mark.parametrize('i', range(3))
            def test(i):
                pass
        """)
        result = testdir.runpytest()
        result.stdout.fnmatch_lines([
            "collected 3 items",
            "hello from hook: 3 items",
        ])


def test_fail_extra_reporting(testdir):
    testdir.makepyfile("def test_this(): assert 0")
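The new hook receives the fully collected items and may return a string or a list of strings to append to the collection summary. A conftest.py sketch (message text is illustrative):

    def pytest_report_collectionfinish(config, startdir, items):
        return 'collected from: {0}'.format(startdir)
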
@ -9,9 +9,9 @@ def test_simple_unittest(testdir):
        import unittest
        class MyTestCase(unittest.TestCase):
            def testpassing(self):
                self.assertEquals('foo', 'foo')
                self.assertEqual('foo', 'foo')
            def test_failing(self):
                self.assertEquals('foo', 'bar')
                self.assertEqual('foo', 'bar')
    """)
    reprec = testdir.inline_run(testpath)
    assert reprec.matchreport("testpassing").passed

@ -23,10 +23,10 @@ def test_runTest_method(testdir):
        import unittest
        class MyTestCaseWithRunTest(unittest.TestCase):
            def runTest(self):
                self.assertEquals('foo', 'foo')
                self.assertEqual('foo', 'foo')
        class MyTestCaseWithoutRunTest(unittest.TestCase):
            def runTest(self):
                self.assertEquals('foo', 'foo')
                self.assertEqual('foo', 'foo')
            def test_something(self):
                pass
    """)

@ -59,7 +59,7 @@ def test_setup(testdir):
            def setup_method(self, method):
                self.foo2 = 1
            def test_both(self):
                self.assertEquals(1, self.foo)
                self.assertEqual(1, self.foo)
                assert self.foo2 == 1
            def teardown_method(self, method):
                assert 0, "42"

@ -136,7 +136,7 @@ def test_teardown(testdir):
                self.l.append(None)
        class Second(unittest.TestCase):
            def test_check(self):
                self.assertEquals(MyTestCase.l, [None])
                self.assertEqual(MyTestCase.l, [None])
    """)
    reprec = testdir.inline_run(testpath)
    passed, skipped, failed = reprec.countoutcomes()

@ -351,61 +351,12 @@ def test_module_level_pytestmark(testdir):
    reprec.assertoutcome(skipped=1)


def test_trial_testcase_skip_property(testdir):
    pytest.importorskip('twisted.trial.unittest')
    testpath = testdir.makepyfile("""
        from twisted.trial import unittest
        class MyTestCase(unittest.TestCase):
            skip = 'dont run'
            def test_func(self):
                pass
    """)
    reprec = testdir.inline_run(testpath, "-s")
    reprec.assertoutcome(skipped=1)


def test_trial_testfunction_skip_property(testdir):
    pytest.importorskip('twisted.trial.unittest')
    testpath = testdir.makepyfile("""
        from twisted.trial import unittest
        class MyTestCase(unittest.TestCase):
            def test_func(self):
                pass
            test_func.skip = 'dont run'
    """)
    reprec = testdir.inline_run(testpath, "-s")
    reprec.assertoutcome(skipped=1)


def test_trial_testcase_todo_property(testdir):
    pytest.importorskip('twisted.trial.unittest')
    testpath = testdir.makepyfile("""
        from twisted.trial import unittest
        class MyTestCase(unittest.TestCase):
            todo = 'dont run'
            def test_func(self):
                assert 0
    """)
    reprec = testdir.inline_run(testpath, "-s")
    reprec.assertoutcome(skipped=1)


def test_trial_testfunction_todo_property(testdir):
    pytest.importorskip('twisted.trial.unittest')
    testpath = testdir.makepyfile("""
        from twisted.trial import unittest
        class MyTestCase(unittest.TestCase):
            def test_func(self):
                assert 0
            test_func.todo = 'dont run'
    """)
    reprec = testdir.inline_run(testpath, "-s")
    reprec.assertoutcome(skipped=1)


class TestTrialUnittest(object):
    def setup_class(cls):
        cls.ut = pytest.importorskip("twisted.trial.unittest")
        # on windows trial uses a socket for a reactor and apparently doesn't close it properly
        # https://twistedmatrix.com/trac/ticket/9227
        cls.ignore_unclosed_socket_warning = ('-W', 'always')

    def test_trial_testcase_runtest_not_collected(self, testdir):
        testdir.makepyfile("""

@ -415,7 +366,7 @@ class TestTrialUnittest(object):
            def test_hello(self):
                pass
        """)
        reprec = testdir.inline_run()
        reprec = testdir.inline_run(*self.ignore_unclosed_socket_warning)
        reprec.assertoutcome(passed=1)
        testdir.makepyfile("""
            from twisted.trial.unittest import TestCase

@ -424,7 +375,7 @@ class TestTrialUnittest(object):
            def runTest(self):
                pass
        """)
        reprec = testdir.inline_run()
        reprec = testdir.inline_run(*self.ignore_unclosed_socket_warning)
        reprec.assertoutcome(passed=1)

    def test_trial_exceptions_with_skips(self, testdir):

@ -462,7 +413,7 @@ class TestTrialUnittest(object):
        """)
        from _pytest.compat import _is_unittest_unexpected_success_a_failure
        should_fail = _is_unittest_unexpected_success_a_failure()
        result = testdir.runpytest("-rxs")
        result = testdir.runpytest("-rxs", *self.ignore_unclosed_socket_warning)
        result.stdout.fnmatch_lines_random([
            "*XFAIL*test_trial_todo*",
            "*trialselfskip*",

@ -537,6 +488,50 @@ class TestTrialUnittest(object):
        child.expect("hellopdb")
        child.sendeof()

    def test_trial_testcase_skip_property(self, testdir):
        testpath = testdir.makepyfile("""
            from twisted.trial import unittest
            class MyTestCase(unittest.TestCase):
                skip = 'dont run'
                def test_func(self):
                    pass
        """)
        reprec = testdir.inline_run(testpath, "-s")
        reprec.assertoutcome(skipped=1)

    def test_trial_testfunction_skip_property(self, testdir):
        testpath = testdir.makepyfile("""
            from twisted.trial import unittest
            class MyTestCase(unittest.TestCase):
                def test_func(self):
                    pass
                test_func.skip = 'dont run'
        """)
        reprec = testdir.inline_run(testpath, "-s")
        reprec.assertoutcome(skipped=1)

    def test_trial_testcase_todo_property(self, testdir):
        testpath = testdir.makepyfile("""
            from twisted.trial import unittest
            class MyTestCase(unittest.TestCase):
                todo = 'dont run'
                def test_func(self):
                    assert 0
        """)
        reprec = testdir.inline_run(testpath, "-s")
        reprec.assertoutcome(skipped=1)

    def test_trial_testfunction_todo_property(self, testdir):
        testpath = testdir.makepyfile("""
            from twisted.trial import unittest
            class MyTestCase(unittest.TestCase):
                def test_func(self):
                    assert 0
                test_func.todo = 'dont run'
        """)
        reprec = testdir.inline_run(testpath, "-s", *self.ignore_unclosed_socket_warning)
        reprec.assertoutcome(skipped=1)


def test_djangolike_testcase(testdir):
    # contributed from Morten Breekevold

@ -598,7 +593,7 @@ def test_unittest_not_shown_in_traceback(testdir):
        class t(unittest.TestCase):
            def test_hello(self):
                x = 3
                self.assertEquals(x, 4)
                self.assertEqual(x, 4)
    """)
    res = testdir.runpytest()
    assert "failUnlessEqual" not in res.stdout.str()
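The mechanical renames above track unittest's deprecation of the ``assertEquals`` alias; only the spelling changes:

    import unittest

    class Example(unittest.TestCase):
        def test_equal(self):
            # assertEquals is a deprecated alias of assertEqual
            self.assertEqual('foo', 'foo')
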
@ -33,6 +33,7 @@ def pyfile_with_warnings(testdir, request):
    })


@pytest.mark.filterwarnings('always')
def test_normal_flow(testdir, pyfile_with_warnings):
    """
    Check that the warnings section is displayed, containing test node ids followed by

@ -54,6 +55,7 @@ def test_normal_flow(testdir, pyfile_with_warnings):
    assert result.stdout.str().count('test_normal_flow.py::test_func') == 1


@pytest.mark.filterwarnings('always')
def test_setup_teardown_warnings(testdir, pyfile_with_warnings):
    testdir.makepyfile('''
        import warnings

@ -115,6 +117,7 @@ def test_ignore(testdir, pyfile_with_warnings, method):

@pytest.mark.skipif(sys.version_info < (3, 0),
                    reason='warnings message is unicode is ok in python3')
@pytest.mark.filterwarnings('always')
def test_unicode(testdir, pyfile_with_warnings):
    testdir.makepyfile('''
        # -*- coding: utf8 -*-

@ -152,6 +155,7 @@ def test_py2_unicode(testdir, pyfile_with_warnings):
            warnings.warn(u"测试")
            yield

        @pytest.mark.filterwarnings('always')
        def test_func(fix):
            pass
    ''')

@ -188,3 +192,32 @@ def test_works_with_filterwarnings(testdir):
    result.stdout.fnmatch_lines([
        '*== 1 passed in *',
    ])


@pytest.mark.parametrize('default_config', ['ini', 'cmdline'])
def test_filterwarnings_mark(testdir, default_config):
    """
    Test ``filterwarnings`` mark works and takes precedence over command line and ini options.
    """
    if default_config == 'ini':
        testdir.makeini("""
            [pytest]
            filterwarnings = always
        """)
    testdir.makepyfile("""
        import warnings
        import pytest

        @pytest.mark.filterwarnings('ignore::RuntimeWarning')
        def test_ignore_runtime_warning():
            warnings.warn(RuntimeWarning())

        @pytest.mark.filterwarnings('error')
        def test_warning_error():
            warnings.warn(RuntimeWarning())

        def test_show_warning():
            warnings.warn(RuntimeWarning())
    """)
    result = testdir.runpytest('-W always' if default_config == 'cmdline' else '')
    result.stdout.fnmatch_lines(['*= 1 failed, 2 passed, 1 warnings in *'])
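test_filterwarnings_mark pins the precedence rule: the mark beats both -W and the ini filter, for the marked test only. A standalone sketch of the 'error' case:

    import warnings
    import pytest

    @pytest.mark.filterwarnings('error')
    def test_escalates_to_error():
        # under the 'error' filter the warning is raised as an exception
        with pytest.raises(RuntimeWarning):
            warnings.warn(RuntimeWarning())
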
16
tox.ini

@ -12,7 +12,7 @@ envlist =
    py36
    py37
    pypy
    {py27,py35}-{pexpect,xdist,trial}
    {py27,py35}-{pexpect,xdist,trial,numpy}
    py27-nobyte
    doctesting
    freeze

@ -50,9 +50,6 @@ commands =
skipsdist = True
usedevelop = True
basepython = python2.7
# needed to keep check-manifest working
setenv =
    SETUPTOOLS_SCM_PRETEND_VERSION=2.0.1
deps =
    flake8
    # pygments required by rst-lint

@ -110,6 +107,16 @@ deps = {[testenv:py27-trial]deps}
commands =
    pytest -ra {posargs:testing/test_unittest.py}

[testenv:py27-numpy]
deps=numpy
commands=
    pytest -rfsxX {posargs:testing/python/approx.py}

[testenv:py35-numpy]
deps=numpy
commands=
    pytest -rfsxX {posargs:testing/python/approx.py}

[testenv:docs]
skipsdist = True
usedevelop = True

@ -194,6 +201,7 @@ python_classes = Test Acceptance
python_functions = test
norecursedirs = .tox ja .hg cx_freeze_source
filterwarnings =
    error
    # produced by path.local
    ignore:bad escape.*:DeprecationWarning:re
    # produced by path.readlines
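With the new environments in place, the approx suite can presumably be run against numpy locally via, e.g.:

    tox -e py27-numpy
    tox -e py35-numpy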