Merge branch 'features' of https://github.com/feuillemorte/pytest into 3034-new-tests-first

feuillemorte 2018-02-23 23:55:28 +03:00
commit c032d4c5d5
50 changed files with 751 additions and 245 deletions


@@ -1,15 +1,14 @@
 Thanks for submitting a PR, your contribution is really appreciated!
-Here's a quick checklist that should be present in PRs:
-- [ ] Add a new news fragment into the changelog folder
-  * name it `$issue_id.$type` for example (588.bugfix)
-  * if you don't have an issue_id change it to the pr id after creating the pr
-  * ensure type is one of `removal`, `feature`, `bugfix`, `vendor`, `doc` or `trivial`
-  * Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
-- [ ] Target: for `bugfix`, `vendor`, `doc` or `trivial` fixes, target `master`; for removals or features target `features`;
-- [ ] Make sure to include reasonable tests for your change if necessary
+Here's a quick checklist that should be present in PRs (you can delete this text from the final description, this is
+just a guideline):
+- [ ] Create a new changelog file in the `changelog` folder, with a name like `<ISSUE NUMBER>.<TYPE>.rst`. See [changelog/README.rst](/changelog/README.rst) for details.
+- [ ] Target the `master` branch for bug fixes, documentation updates and trivial changes.
+- [ ] Target the `features` branch for new features and removals/deprecations.
+- [ ] Include documentation when adding new features.
+- [ ] Include new tests or update existing tests when applicable.
-Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please:
+Unless your change is trivial or a small documentation fix (e.g., a typo or reword of a small section) please:
-- [ ] Add yourself to `AUTHORS`, in alphabetical order;
+- [ ] Add yourself to `AUTHORS` in alphabetical order;


@@ -2,10 +2,8 @@ sudo: false
 language: python
 python:
 - '3.6'
-# command to install dependencies
 install:
 - pip install --upgrade --pre tox
-# # command to run tests
 env:
   matrix:
   # coveralls is not listed in tox's envlist, but should run in travis
@@ -29,7 +27,7 @@ env:
   - TOXENV=doctesting
   - TOXENV=docs
-matrix:
+jobs:
   include:
   - env: TOXENV=pypy
     python: 'pypy-5.4'
@@ -39,9 +37,22 @@ matrix:
     python: '3.5'
   - env: TOXENV=py37
     python: 'nightly'
-  allow_failures:
-  - env: TOXENV=py37
-    python: 'nightly'
+  - stage: deploy
+    python: '3.6'
+    env:
+    install: pip install -U setuptools setuptools_scm
+    script: skip
+    deploy:
+      provider: pypi
+      user: nicoddemus
+      distributions: sdist bdist_wheel
+      skip_upload_docs: true
+      password:
+        secure: xanTgTUu6XDQVqB/0bwJQXoDMnU5tkwZc5koz6mBkkqZhKdNOi2CLoC1XhiSZ+ah24l4V1E0GAqY5kBBcy9d7NVe4WNg4tD095LsHw+CRU6/HCVIFfyk2IZ+FPAlguesCcUiJSXOrlBF+Wj68wEvLoK7EoRFbJeiZ/f91Ww1sbtDlqXABWGHrmhPJL5Wva7o7+wG7JwJowqdZg1pbQExsCc7b53w4v2RBu3D6TJaTAzHiVsW+nUSI67vKI/uf+cR/OixsTfy37wlHgSwihYmrYLFls3V0bSpahCim3bCgMaFZx8S8xrdgJ++PzBCof2HeflFKvW+VCkoYzGEG4NrTWJoNz6ni4red9GdvfjGH3YCjAKS56h9x58zp2E5rpsb/kVq5/45xzV+dq6JRuhQ1nJWjBC6fSKAc/bfwnuFK3EBxNLkvBssLHvsNjj5XG++cB8DdS9wVGUqjpoK4puaXUWFqy4q3S9F86HEsKNgExtieA9qNx+pCIZVs6JCXZNjr0I5eVNzqJIyggNgJG6RyravsU35t9Zd9doL5g4Y7UKmAGTn1Sz24HQ4sMQgXdm2SyD8gEK5je4tlhUvfGtDvMSlstq71kIn9nRpFnqB6MFlbYSEAZmo8dGbCquoUc++6Rum208wcVbrzzVtGlXB/Ow9AbFMYeAGA0+N/K1e59c=
+    on:
+      tags: true
+      repo: pytest-dev/pytest
 script: tox --recreate


@@ -29,11 +29,13 @@ Benjamin Peterson
 Bernard Pratz
 Bob Ippolito
 Brian Dorsey
+Brian Maissy
 Brian Okken
 Brianna Laugher
 Bruno Oliveira
 Cal Leeming
 Carl Friedrich Bolz
+Carlos Jenkins
 Ceridwen
 Charles Cloud
 Charnjit SiNGH (CCSJ)
@@ -97,6 +99,7 @@ Jon Sonesen
 Jonas Obrist
 Jordan Guymon
 Jordan Moldow
+Jordan Speicher
 Joshua Bronson
 Jurko Gospodnetić
 Justyna Janczyszyn


@@ -8,6 +8,67 @@
 .. towncrier release notes start
+
+Pytest 3.4.1 (2018-02-20)
+=========================
+
+Bug Fixes
+---------
+
+- Move import of ``doctest.UnexpectedException`` to top-level to avoid possible
+  errors when using ``--pdb``. (`#1810
+  <https://github.com/pytest-dev/pytest/issues/1810>`_)
+
+- Added printing of captured stdout/stderr before entering pdb, and improved a
+  test which was giving false negatives about output capturing. (`#3052
+  <https://github.com/pytest-dev/pytest/issues/3052>`_)
+
+- Fix ordering of tests using parametrized fixtures which can lead to fixtures
+  being created more than necessary. (`#3161
+  <https://github.com/pytest-dev/pytest/issues/3161>`_)
+
+- Fix bug where logging happening at hooks outside of "test run" hooks would
+  cause an internal error. (`#3184
+  <https://github.com/pytest-dev/pytest/issues/3184>`_)
+
+- Detect arguments injected by ``unittest.mock.patch`` decorator correctly when
+  pypi ``mock.patch`` is installed and imported. (`#3206
+  <https://github.com/pytest-dev/pytest/issues/3206>`_)
+
+- Errors shown when a ``pytest.raises()`` with ``match=`` fails are now cleaner
+  on what happened: When no exception was raised, the "matching '...'" part got
+  removed as it falsely implies that an exception was raised but it didn't
+  match. When a wrong exception was raised, it's now thrown (like
+  ``pytest.raised()`` without ``match=`` would) instead of complaining about
+  the unmatched text. (`#3222
+  <https://github.com/pytest-dev/pytest/issues/3222>`_)
+
+- Fixed output capture handling in doctests on macOS. (`#985
+  <https://github.com/pytest-dev/pytest/issues/985>`_)
+
+
+Improved Documentation
+----------------------
+
+- Add Sphinx parameter docs for ``match`` and ``message`` args to
+  ``pytest.raises``. (`#3202
+  <https://github.com/pytest-dev/pytest/issues/3202>`_)
+
+
+Trivial/Internal Changes
+------------------------
+
+- pytest has changed the publication procedure and is now being published to
+  PyPI directly from Travis. (`#3060
+  <https://github.com/pytest-dev/pytest/issues/3060>`_)
+
+- Rename ``ParameterSet._for_parameterize()`` to ``_for_parametrize()`` in
+  order to comply with the naming convention. (`#3166
+  <https://github.com/pytest-dev/pytest/issues/3166>`_)
+
+- Skip failing pdb/doctest test on mac. (`#985
+  <https://github.com/pytest-dev/pytest/issues/985>`_)
+
+
 Pytest 3.4.0 (2018-01-30)
 =========================


@@ -22,44 +22,28 @@ taking a lot of time to make a new one.
 Ensure your are in a clean work tree.
-#. Generate docs, changelog, announcements and upload a package to
-   your ``devpi`` staging server::
+#. Generate docs, changelog, announcements and a **local** tag::
-     invoke generate.pre-release <VERSION> <DEVPI USER> --password <DEVPI PASSWORD>
+     invoke generate.pre-release <VERSION>
-   If ``--password`` is not given, it is assumed the user is already logged in ``devpi``.
-   If you don't have an account, please ask for one.
 #. Open a PR for this branch targeting ``master``.
-#. Test the package
+#. After all tests pass and the PR has been approved, publish to PyPI by pushing the tag::
-   * **Manual method**
+     git push git@github.com:pytest-dev/pytest.git <VERSION>
-   Run from multiple machines::
+   Wait for the deploy to complete, then make sure it is `available on PyPI <https://pypi.org/project/pytest>`_.
-     devpi use https://devpi.net/USER/dev
-     devpi test pytest==VERSION
+#. Send an email announcement with the contents from::
-   Check that tests pass for relevant combinations with::
+     doc/en/announce/release-<VERSION>.rst
-     devpi list pytest
+   To the following mailing lists:
-   * **CI servers**
+   * pytest-dev@python.org (all releases)
+   * python-announce-list@python.org (all releases)
+   * testing-in-python@lists.idyll.org (only major/minor releases)
-   Configure a repository as per-instructions on
-   devpi-cloud-test_ to test the package on Travis_ and AppVeyor_.
-   All test environments should pass.
+   And announce it on `Twitter <https://twitter.com/>`_ with the ``#pytest`` hashtag.
-#. Publish to PyPI::
-     invoke generate.publish-release <VERSION> <DEVPI USER> <PYPI_NAME>
-   where PYPI_NAME is the name of pypi.python.org as configured in your ``~/.pypirc``
-   file `for devpi <http://doc.devpi.net/latest/quickstart-releaseprocess.html?highlight=pypirc#devpi-push-releasing-to-an-external-index>`_.
 #. After a minor/major release, merge ``release-X.Y.Z`` into ``master`` and push (or open a PR).
-.. _devpi-cloud-test: https://github.com/obestwalter/devpi-cloud-test
-.. _AppVeyor: https://www.appveyor.com/
-.. _Travis: https://travis-ci.org


@@ -79,10 +79,11 @@ def num_mock_patch_args(function):
     patchings = getattr(function, "patchings", None)
     if not patchings:
         return 0
-    mock = sys.modules.get("mock", sys.modules.get("unittest.mock", None))
-    if mock is not None:
+    mock_modules = [sys.modules.get("mock"), sys.modules.get("unittest.mock")]
+    if any(mock_modules):
+        sentinels = [m.DEFAULT for m in mock_modules if m is not None]
         return len([p for p in patchings
-                    if not p.attribute_name and p.new is mock.DEFAULT])
+                    if not p.attribute_name and p.new in sentinels])
     return len(patchings)
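Context for the change above: the PyPI ``mock`` backport carries its own ``DEFAULT`` sentinel, distinct from ``unittest.mock.DEFAULT``, so checking a single module missed patchings created with the other one. A minimal sketch of the distinction (assuming both libraries are installed):

    # each module defines its own sentinel object, so an `is` check against
    # only one of them misses patchings created with the other
    import mock            # PyPI backport
    import unittest.mock   # stdlib

    assert mock.DEFAULT is not unittest.mock.DEFAULT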


@@ -2,6 +2,7 @@
 from __future__ import absolute_import, division, print_function
 import pdb
 import sys
+from doctest import UnexpectedException
 
 def pytest_addoption(parser):
@@ -85,6 +86,18 @@ def _enter_pdb(node, excinfo, rep):
     # for not completely clear reasons.
     tw = node.config.pluginmanager.getplugin("terminalreporter")._tw
     tw.line()
+    showcapture = node.config.option.showcapture
+    for sectionname, content in (('stdout', rep.capstdout),
+                                 ('stderr', rep.capstderr),
+                                 ('log', rep.caplog)):
+        if showcapture in (sectionname, 'all') and content:
+            tw.sep(">", "captured " + sectionname)
+            if content[-1:] == "\n":
+                content = content[:-1]
+            tw.line(content)
     tw.sep(">", "traceback")
     rep.toterminal(tw)
     tw.sep(">", "entering PDB")
@@ -95,10 +108,9 @@ def _enter_pdb(node, excinfo, rep):
 def _postmortem_traceback(excinfo):
-    # A doctest.UnexpectedException is not useful for post_mortem.
-    # Use the underlying exception instead:
-    from doctest import UnexpectedException
     if isinstance(excinfo.value, UnexpectedException):
+        # A doctest.UnexpectedException is not useful for post_mortem.
+        # Use the underlying exception instead:
         return excinfo.value.exc_info[2]
     else:
         return excinfo._excinfo[2]
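For reference, ``doctest.UnexpectedException`` is a stdlib wrapper that stores the real error's ``sys.exc_info()`` triple on its ``exc_info`` attribute; the unwrapping above simply hands pdb the inner traceback. A small illustration using only the documented stdlib API:

    import doctest
    import sys

    try:
        raise ValueError("boom")
    except ValueError:
        wrapped = doctest.UnexpectedException(None, None, sys.exc_info())

    # post_mortem wants the traceback of the underlying error, not the wrapper
    inner_tb = wrapped.exc_info[2]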


@@ -41,6 +41,12 @@ MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning(
     "For more details, see: https://docs.pytest.org/en/latest/parametrize.html"
 )
 
+RECORD_XML_PROPERTY = (
+    'Fixture renamed from "record_xml_property" to "record_property" as user '
+    'properties are now available to all reporters.\n'
+    '"record_xml_property" is now deprecated.'
+)
+
 COLLECTOR_MAKEITEM = RemovedInPytest4Warning(
     "pycollector makeitem was removed "
     "as it is an accidentially leaked internal api"


@@ -2,6 +2,8 @@
 from __future__ import absolute_import, division, print_function
 import traceback
+import sys
+import platform
 
 import pytest
 from _pytest._code.code import ExceptionInfo, ReprFileLocation, TerminalRepr
@@ -103,8 +105,21 @@ class DoctestItem(pytest.Item):
     def runtest(self):
         _check_all_skipped(self.dtest)
+        self._disable_output_capturing_for_darwin()
         self.runner.run(self.dtest)
 
+    def _disable_output_capturing_for_darwin(self):
+        """
+        Disable output capturing. Otherwise, stdout is lost to doctest (#985)
+        """
+        if platform.system() != 'Darwin':
+            return
+        capman = self.config.pluginmanager.getplugin("capturemanager")
+        if capman:
+            out, err = capman.suspend_global_capture(in_=True)
+            sys.stdout.write(out)
+            sys.stderr.write(err)
+
     def repr_failure(self, excinfo):
         import doctest
         if excinfo.errisinstance((doctest.DocTestFailure,


@@ -166,7 +166,7 @@ def reorder_items(items):
     items_by_argkey = {}
     for scopenum in range(0, scopenum_function):
         argkeys_cache[scopenum] = d = {}
-        items_by_argkey[scopenum] = item_d = defaultdict(list)
+        items_by_argkey[scopenum] = item_d = defaultdict(deque)
         for item in items:
             keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
             if keys:
@@ -174,12 +174,19 @@ def reorder_items(items):
                 for key in keys:
                     item_d[key].append(item)
     items = OrderedDict.fromkeys(items)
-    return list(reorder_items_atscope(items, set(), argkeys_cache, items_by_argkey, 0))
+    return list(reorder_items_atscope(items, argkeys_cache, items_by_argkey, 0))
 
-def reorder_items_atscope(items, ignore, argkeys_cache, items_by_argkey, scopenum):
+def fix_cache_order(item, argkeys_cache, items_by_argkey):
+    for scopenum in range(0, scopenum_function):
+        for key in argkeys_cache[scopenum].get(item, []):
+            items_by_argkey[scopenum][key].appendleft(item)
+
+def reorder_items_atscope(items, argkeys_cache, items_by_argkey, scopenum):
     if scopenum >= scopenum_function or len(items) < 3:
         return items
+    ignore = set()
     items_deque = deque(items)
     items_done = OrderedDict()
     scoped_items_by_argkey = items_by_argkey[scopenum]
@@ -197,13 +204,14 @@ def reorder_items_atscope(items, ignore, argkeys_cache, items_by_argkey, scopenu
             else:
                 slicing_argkey, _ = argkeys.popitem()
                 # we don't have to remove relevant items from later in the deque because they'll just be ignored
-                for i in reversed(scoped_items_by_argkey[slicing_argkey]):
-                    if i in items:
-                        items_deque.appendleft(i)
+                matching_items = [i for i in scoped_items_by_argkey[slicing_argkey] if i in items]
+                for i in reversed(matching_items):
+                    fix_cache_order(i, argkeys_cache, items_by_argkey)
+                    items_deque.appendleft(i)
                 break
         if no_argkey_group:
             no_argkey_group = reorder_items_atscope(
-                no_argkey_group, set(), argkeys_cache, items_by_argkey, scopenum + 1)
+                no_argkey_group, argkeys_cache, items_by_argkey, scopenum + 1)
             for item in no_argkey_group:
                 items_done[item] = None
             ignore.add(slicing_argkey)
@@ -831,9 +839,9 @@ def _ensure_immutable_ids(ids):
 @attr.s(frozen=True)
 class FixtureFunctionMarker(object):
     scope = attr.ib()
-    params = attr.ib(convert=attr.converters.optional(tuple))
+    params = attr.ib(converter=attr.converters.optional(tuple))
     autouse = attr.ib(default=False)
-    ids = attr.ib(default=None, convert=_ensure_immutable_ids)
+    ids = attr.ib(default=None, converter=_ensure_immutable_ids)
     name = attr.ib(default=None)
 
     def __call__(self, function):
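The ``convert`` to ``converter`` keyword rename follows attrs 17.4.0, which deprecated the old spelling (matching the ``attrs>=17.4.0`` pin in ``setup.py`` later in this commit). A minimal sketch of what the converter does for ``params`` (hypothetical class name):

    import attr

    @attr.s(frozen=True)
    class Marker(object):
        # normalize any iterable to a tuple, but let None pass through
        params = attr.ib(converter=attr.converters.optional(tuple))

    assert Marker(params=[1, 2]).params == (1, 2)
    assert Marker(params=None).params is None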


@@ -233,31 +233,41 @@ class _NodeReporter(object):
 @pytest.fixture
-def record_xml_property(request):
-    """Add extra xml properties to the tag for the calling test.
-    The fixture is callable with ``(name, value)``, with value being automatically
-    xml-encoded.
+def record_property(request):
+    """Add an extra properties the calling test.
+    User properties become part of the test report and are available to the
+    configured reporters, like JUnit XML.
+    The fixture is callable with ``(name, value)``.
     """
     request.node.warn(
         code='C3',
-        message='record_xml_property is an experimental feature',
+        message='record_property is an experimental feature',
     )
-    xml = getattr(request.config, "_xml", None)
-    if xml is not None:
-        node_reporter = xml.node_reporter(request.node.nodeid)
-        return node_reporter.add_property
-    else:
-        def add_property_noop(name, value):
-            pass
-        return add_property_noop
+    def append_property(name, value):
+        request.node.user_properties.append((name, value))
+    return append_property
 
+@pytest.fixture
+def record_xml_property(record_property):
+    """(Deprecated) use record_property."""
+    import warnings
+    from _pytest import deprecated
+    warnings.warn(
+        deprecated.RECORD_XML_PROPERTY,
+        DeprecationWarning,
+        stacklevel=2
+    )
+    return record_property
 
 @pytest.fixture
 def record_xml_attribute(request):
     """Add extra xml attributes to the tag for the calling test.
-    The fixture is callable with ``(name, value)``, with value being automatically
-    xml-encoded
+    The fixture is callable with ``(name, value)``, with value being
+    automatically xml-encoded
     """
     request.node.warn(
         code='C3',
@@ -442,6 +452,10 @@ class LogXML(object):
         if report.when == "teardown":
             reporter = self._opentestcase(report)
             reporter.write_captured_output(report)
+            for propname, propvalue in report.user_properties:
+                reporter.add_property(propname, propvalue)
             self.finalize(report)
         report_wid = getattr(report, "worker_id", None)
         report_ii = getattr(report, "item_index", None)
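Taken with the ``Item.user_properties`` and ``TestReport`` changes later in this commit, the new fixture is just a list append on the item; the pairs travel on the report, and the XML reporter emits each one as a ``<property>`` tag at teardown. From the docs added in this commit:

    def test_function(record_property):
        record_property("example_key", 1)
        # -> <property name="example_key" value="1" /> in the <testcase> tag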


@@ -480,6 +480,7 @@ class _LiveLoggingStreamHandler(logging.StreamHandler):
         self.capture_manager = capture_manager
         self.reset()
         self.set_when(None)
+        self._test_outcome_written = False
 
     def reset(self):
         """Reset the handler; should be called before the start of each test"""
@@ -489,15 +490,21 @@ class _LiveLoggingStreamHandler(logging.StreamHandler):
         """Prepares for the given test phase (setup/call/teardown)"""
         self._when = when
         self._section_name_shown = False
+        if when == 'start':
+            self._test_outcome_written = False
 
     def emit(self, record):
         if self.capture_manager is not None:
             self.capture_manager.suspend_global_capture()
         try:
-            if not self._first_record_emitted or self._when in ('teardown', 'finish'):
+            if not self._first_record_emitted:
                 self.stream.write('\n')
                 self._first_record_emitted = True
-            if not self._section_name_shown:
+            elif self._when in ('teardown', 'finish'):
+                if not self._test_outcome_written:
+                    self._test_outcome_written = True
+                    self.stream.write('\n')
+            if not self._section_name_shown and self._when:
                 self.stream.section('live log ' + self._when, sep='-', bold=True)
                 self._section_name_shown = True
             logging.StreamHandler.emit(self, record)


@@ -66,6 +66,8 @@ def pytest_addoption(parser):
                help="try to interpret all arguments as python packages.")
     group.addoption("--ignore", action="append", metavar="path",
                help="ignore path during collection (multi-allowed).")
+    group.addoption("--deselect", action="append", metavar="nodeid_prefix",
+               help="deselect item during collection (multi-allowed).")
     # when changing this to --conf-cut-dir, config.py Conftest.setinitial
     # needs upgrading as well
     group.addoption('--confcutdir', dest="confcutdir", default=None,
@@ -208,6 +210,24 @@ def pytest_ignore_collect(path, config):
     return False
 
+def pytest_collection_modifyitems(items, config):
+    deselect_prefixes = tuple(config.getoption("deselect") or [])
+    if not deselect_prefixes:
+        return
+    remaining = []
+    deselected = []
+    for colitem in items:
+        if colitem.nodeid.startswith(deselect_prefixes):
+            deselected.append(colitem)
+        else:
+            remaining.append(colitem)
+    if deselected:
+        config.hook.pytest_deselected(items=deselected)
+        items[:] = remaining
+
 @contextlib.contextmanager
 def _patched_find_module():
     """Patch bug in pkgutil.ImpImporter.find_module


@@ -75,7 +75,7 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
         return cls(argval, marks=newmarks, id=None)
 
     @classmethod
-    def _for_parameterize(cls, argnames, argvalues, function, config):
+    def _for_parametrize(cls, argnames, argvalues, function, config):
         if not isinstance(argnames, (tuple, list)):
             argnames = [x.strip() for x in argnames.split(",") if x.strip()]
             force_tuple = len(argnames) == 1


@@ -360,6 +360,10 @@ class Item(Node):
         super(Item, self).__init__(name, parent, config, session)
         self._report_sections = []
 
+        #: user properties is a list of tuples (name, value) that holds user
+        #: defined properties for this test.
+        self.user_properties = []
+
     def add_report_section(self, when, key, content):
         """
         Adds a new report section, similar to what's done internally to add stdout and


@@ -785,7 +785,8 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
         from _pytest.fixtures import scope2index
         from _pytest.mark import ParameterSet
         from py.io import saferepr
-        argnames, parameters = ParameterSet._for_parameterize(
+
+        argnames, parameters = ParameterSet._for_parametrize(
             argnames, argvalues, self.function, self.config)
         del argvalues


@@ -453,6 +453,10 @@ def raises(expected_exception, *args, **kwargs):
     Assert that a code block/function call raises ``expected_exception``
     and raise a failure exception otherwise.
 
+    :arg message: if specified, provides a custom failure message if the
+        exception is not raised
+    :arg match: if specified, asserts that the exception matches a text or regex
+
     This helper produces a ``ExceptionInfo()`` object (see below).
 
     You may use this function as a context manager::
@@ -567,7 +571,6 @@ def raises(expected_exception, *args, **kwargs):
             message = kwargs.pop("message")
         if "match" in kwargs:
             match_expr = kwargs.pop("match")
-            message += " matching '{0}'".format(match_expr)
         return RaisesContext(expected_exception, message, match_expr)
     elif isinstance(args[0], str):
         code, = args
@@ -614,6 +617,6 @@ class RaisesContext(object):
         suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
         if sys.version_info[0] == 2 and suppress_exception:
             sys.exc_clear()
-        if self.match_expr:
+        if self.match_expr and suppress_exception:
             self.excinfo.match(self.match_expr)
         return suppress_exception
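The net effect of the two ``raises`` changes: the ``match=`` pattern is only applied when the expected exception type was actually raised, and a different exception now propagates unchanged. The new test added later in this commit exercises exactly that:

    import pytest

    # int('asdf') raises ValueError, not IndexError, so the inner context
    # re-raises it instead of failing on the unmatched pattern
    with pytest.raises(ValueError):
        with pytest.raises(IndexError, match='nomatch'):
            int('asdf')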


@@ -317,7 +317,7 @@ def pytest_runtest_makereport(item, call):
             sections.append(("Captured %s %s" % (key, rwhen), content))
     return TestReport(item.nodeid, item.location,
                       keywords, outcome, longrepr, when,
-                      sections, duration)
+                      sections, duration, user_properties=item.user_properties)
 
 class TestReport(BaseReport):
@@ -326,7 +326,7 @@ class TestReport(BaseReport):
     """
     def __init__(self, nodeid, location, keywords, outcome,
-                 longrepr, when, sections=(), duration=0, **extra):
+                 longrepr, when, sections=(), duration=0, user_properties=(), **extra):
         #: normalized collection node id
         self.nodeid = nodeid
@@ -348,6 +348,10 @@ class TestReport(BaseReport):
         #: one of 'setup', 'call', 'teardown' to indicate runtest phase.
         self.when = when
 
+        #: user properties is a list of tuples (name, value) that holds user
+        #: defined properties of the test
+        self.user_properties = user_properties
+
         #: list of pairs ``(str, str)`` of extra information which needs to
         #: marshallable. Used by pytest to add captured text
         #: from ``stdout`` and ``stderr``, but may be used by other plugins


@@ -44,9 +44,9 @@ def pytest_addoption(parser):
              help="traceback print mode (auto/long/short/line/native/no).")
     group._addoption('--show-capture',
              action="store", dest="showcapture",
-             choices=['no', 'stdout', 'stderr', 'both'], default='both',
-             help="Controls how captured stdout/stderr is shown on failed tests. "
-                  "Default is 'both'.")
+             choices=['no', 'stdout', 'stderr', 'log', 'all'], default='all',
+             help="Controls how captured stdout/stderr/log is shown on failed tests. "
+                  "Default is 'all'.")
     group._addoption('--fulltrace', '--full-trace',
              action="store_true", default=False,
              help="don't cut any tracebacks (default is to cut).")
@@ -361,6 +361,7 @@ class TerminalReporter(object):
         errors = len(self.stats.get('error', []))
         skipped = len(self.stats.get('skipped', []))
+        deselected = len(self.stats.get('deselected', []))
         if final:
             line = "collected "
         else:
@@ -368,6 +369,8 @@ class TerminalReporter(object):
         line += str(self._numcollected) + " item" + ('' if self._numcollected == 1 else 's')
         if errors:
             line += " / %d errors" % errors
+        if deselected:
+            line += " / %d deselected" % deselected
         if skipped:
             line += " / %d skipped" % skipped
         if self.isatty:
@@ -377,6 +380,7 @@ class TerminalReporter(object):
         else:
             self.write_line(line)
 
+    @pytest.hookimpl(trylast=True)
     def pytest_collection_modifyitems(self):
         self.report_collect(True)
@@ -484,7 +488,6 @@ class TerminalReporter(object):
         if exitstatus == EXIT_INTERRUPTED:
             self._report_keyboardinterrupt()
             del self._keyboardinterrupt_memo
-        self.summary_deselected()
         self.summary_stats()
 
     def pytest_keyboard_interrupt(self, excinfo):
@@ -627,12 +630,12 @@ class TerminalReporter(object):
     def _outrep_summary(self, rep):
         rep.toterminal(self._tw)
-        if self.config.option.showcapture == 'no':
+        showcapture = self.config.option.showcapture
+        if showcapture == 'no':
             return
         for secname, content in rep.sections:
-            if self.config.option.showcapture != 'both':
-                if not (self.config.option.showcapture in secname):
-                    continue
+            if showcapture != 'all' and showcapture not in secname:
+                continue
             self._tw.sep("-", secname)
             if content[-1:] == "\n":
                 content = content[:-1]
@@ -649,11 +652,6 @@ class TerminalReporter(object):
         if self.verbosity == -1:
             self.write_line(msg, **markup)
 
-    def summary_deselected(self):
-        if 'deselected' in self.stats:
-            self.write_sep("=", "%d tests deselected" % (
-                len(self.stats['deselected'])), bold=True)
 
 def repr_pythonversion(v=None):
     if v is None:


@@ -1 +1 @@
-New ``--show-capture`` command-line option that allows to specify how to display captured output when tests fail: ``no``, ``stdout``, ``stderr`` or ``both`` (the default).
+New ``--show-capture`` command-line option that allows to specify how to display captured output when tests fail: ``no``, ``stdout``, ``stderr``, ``log`` or ``all`` (the default).

changelog/2770.feature (new file)

@@ -0,0 +1,2 @@
+``record_xml_property`` renamed to ``record_property`` and is now compatible with xdist, markers and any reporter.
+``record_xml_property`` name is now deprecated.


@@ -0,0 +1 @@
+``record_xml_property`` fixture is now deprecated in favor of the more generic ``record_property``.


@@ -0,0 +1 @@
+Add command line option ``--deselect`` to allow deselection of individual tests at collection time.

changelog/3204.feature (new file)

@@ -0,0 +1 @@
+Captured logs are printed before entering pdb.

changelog/3213.feature (new file)

@@ -0,0 +1 @@
+Deselected item count is now shown before tests are run, e.g. ``collected X items / Y deselected``.


@@ -0,0 +1 @@
+Change minimum requirement of ``attrs`` to ``17.4.0``.


@@ -0,0 +1 @@
+Remove usage of deprecated ``metafunc.addcall`` in our own tests.

changelog/README.rst (new file)

@ -0,0 +1,32 @@
This directory contains "newsfragments" which are short files that contain a small **ReST**-formatted
text that will be added to the next ``CHANGELOG``.
The ``CHANGELOG`` will be read by users, so this description should be aimed to pytest users
instead of describing internal changes which are only relevant to the developers.
Make sure to use full sentences with correct case and punctuation, for example::
Fix issue with non-ascii messages from the ``warnings`` module.
Each file should be named like ``<ISSUE>.<TYPE>.rst``, where
``<ISSUE>`` is an issue number, and ``<TYPE>`` is one of:
* ``feature``: new user facing features, like new command-line options and new behavior.
* ``bugfix``: fixes a reported bug.
* ``doc``: documentation improvement, like rewording an entire session or adding missing docs.
* ``removal``: feature deprecation or removal.
* ``vendor``: changes in packages vendored in pytest.
* ``trivial``: fixing a small typo or internal change that might be noteworthy.
So for example: ``123.feature.rst``, ``456.bugfix.rst``.
If your PR fixes an issue, use that number here. If there is no issue,
then after you submit the PR and get the PR number you can add a
changelog using that instead.
If you are not sure what issue type to use, don't hesitate to ask in your PR.
Note that the ``towncrier`` tool will automatically
reflow your text, so it will work best if you stick to a single paragraph, but multiple sentences and links are OK
and encouraged. You can install ``towncrier`` and then run ``towncrier --draft``
if you want to get a preview of how your change will look in the final release notes.


@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2
 
+   release-3.4.1
    release-3.4.0
    release-3.3.2
    release-3.3.1


@@ -0,0 +1,27 @@
+pytest-3.4.1
+=======================================
+
+pytest 3.4.1 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Aaron
+* Alan Velasco
+* Andy Freeland
+* Brian Maissy
+* Bruno Oliveira
+* Florian Bruhin
+* Jason R. Coombs
+* Marcin Bachry
+* Pedro Algarvio
+* Ronny Pfannschmidt
+
+Happy testing,
+The pytest Development Team


@@ -112,10 +112,11 @@ You can ask for available builtin or project-custom
     Inject names into the doctest namespace.
 pytestconfig
     the pytest config object with access to command line opts.
-record_xml_property
-    Add extra xml properties to the tag for the calling test.
-    The fixture is callable with ``(name, value)``, with value being automatically
-    xml-encoded.
+record_property
+    Add an extra properties the calling test.
+    User properties become part of the test report and are available to the
+    configured reporters, like JUnit XML.
+    The fixture is callable with ``(name, value)``.
 record_xml_attribute
     Add extra xml attributes to the tag for the calling test.
     The fixture is callable with ``(name, value)``, with value being automatically


@@ -39,6 +39,14 @@ you will see that ``pytest`` only collects test-modules, which do not match the
     ======= 5 passed in 0.02 seconds =======
 
+Deselect tests during test collection
+-------------------------------------
+
+Tests can individually be deselected during collection by passing the ``--deselect=item`` option.
+For example, say ``tests/foobar/test_foobar_01.py`` contains ``test_a`` and ``test_b``.
+You can run all of the tests within ``tests/`` *except* for ``tests/foobar/test_foobar_01.py::test_a``
+by invoking ``pytest`` with ``--deselect tests/foobar/test_foobar_01.py::test_a``.
+``pytest`` allows multiple ``--deselect`` options.
+
 Keeping duplicate paths specified from command line
 ----------------------------------------------------
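The same selection can be driven programmatically; a hedged sketch using the documented ``pytest.main`` entry point with the paths from the example above:

    import pytest

    # run everything under tests/ except the one deselected test
    pytest.main(["--deselect", "tests/foobar/test_foobar_01.py::test_a", "tests/"])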


@@ -358,7 +358,7 @@ get on the terminal - we are working on that)::
     >       int(s)
     E       ValueError: invalid literal for int() with base 10: 'qwe'
-    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:580>:1: ValueError
+    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:583>:1: ValueError
     ______________________ TestRaises.test_raises_doesnt _______________________
     self = <failure_demo.TestRaises object at 0xdeadbeef>


@@ -385,8 +385,8 @@ Now we can profile which test functions execute the slowest::
     test_some_are_slow.py ...                                            [100%]
 
     ========================= slowest 3 test durations =========================
-    0.58s call     test_some_are_slow.py::test_funcslow2
-    0.41s call     test_some_are_slow.py::test_funcslow1
+    0.30s call     test_some_are_slow.py::test_funcslow2
+    0.20s call     test_some_are_slow.py::test_funcslow1
     0.10s call     test_some_are_slow.py::test_funcfast
     ========================= 3 passed in 0.12 seconds =========================
@@ -537,7 +537,7 @@ We can run this::
     file $REGENDOC_TMPDIR/b/test_error.py, line 1
       def test_root(db):  # no db here, will error out
     E       fixture 'db' not found
-    >       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
+    >       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_attribute, record_property, recwarn, tmpdir, tmpdir_factory
     >       use 'pytest --fixtures [testpath]' for help on them.
     $REGENDOC_TMPDIR/b/test_error.py:1


@@ -50,26 +50,10 @@ These options can also be customized through ``pytest.ini`` file:
     log_format = %(asctime)s %(levelname)s %(message)s
     log_date_format = %Y-%m-%d %H:%M:%S
 
-Further it is possible to disable reporting logs on failed tests completely
-with::
+Further it is possible to disable reporting of captured content (stdout,
+stderr and logs) on failed tests completely with::
 
-    pytest --no-print-logs
+    pytest --show-capture=no
 
-Or in the ``pytest.ini`` file:
-
-.. code-block:: ini
-
-    [pytest]
-    log_print = False
-
-Shows failed tests in the normal manner as no logs were captured::
-
-    ----------------------- Captured stdout call ----------------------
-    text going to stdout
-    ----------------------- Captured stderr call ----------------------
-    text going to stderr
-    ==================== 2 failed in 0.02 seconds =====================
 
 caplog fixture


@@ -220,19 +220,24 @@ To set the name of the root test suite xml item, you can configure the ``junit_s
     [pytest]
     junit_suite_name = my_suite
 
-record_xml_property
+record_property
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. versionadded:: 2.8
+.. versionchanged:: 3.5
+
+    Fixture renamed from ``record_xml_property`` to ``record_property`` as user
+    properties are now available to all reporters.
+    ``record_xml_property`` is now deprecated.
 
 If you want to log additional information for a test, you can use the
-``record_xml_property`` fixture:
+``record_property`` fixture:
 
 .. code-block:: python
 
-    def test_function(record_xml_property):
-        record_xml_property("example_key", 1)
-        assert 0
+    def test_function(record_property):
+        record_property("example_key", 1)
+        assert True
 
 This will add an extra property ``example_key="1"`` to the generated
 ``testcase`` tag:
@@ -245,13 +250,42 @@ This will add an extra property ``example_key="1"`` to the generated
         </properties>
     </testcase>
 
+Alternatively, you can integrate this functionality with custom markers:
+
+.. code-block:: python
+
+    # content of conftest.py
+
+    def pytest_collection_modifyitems(session, config, items):
+        for item in items:
+            marker = item.get_marker('test_id')
+            if marker is not None:
+                test_id = marker.args[0]
+                item.user_properties.append(('test_id', test_id))
+
+And in your tests:
+
+.. code-block:: python
+
+    # content of test_function.py
+
+    @pytest.mark.test_id(1501)
+    def test_function():
+        assert True
+
+Will result in:
+
+.. code-block:: xml
+
+    <testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
+        <properties>
+            <property name="test_id" value="1501" />
+        </properties>
+    </testcase>
+
 .. warning::
 
-    ``record_xml_property`` is an experimental feature, and its interface might be replaced
-    by something more powerful and general in future versions. The
-    functionality per-se will be kept, however.
+    ``record_property`` is an experimental feature and may change in the future.
-    Currently it does not work when used with the ``pytest-xdist`` plugin.
 
     Also please note that using this feature will break any schema verification.
     This might be a problem when used with some CI servers.


@@ -462,19 +462,24 @@ Here is an example definition of a hook wrapper::
     @pytest.hookimpl(hookwrapper=True)
     def pytest_pyfunc_call(pyfuncitem):
-        # do whatever you want before the next hook executes
+        do_something_before_next_hook_executes()
         outcome = yield
         # outcome.excinfo may be None or a (cls, val, tb) tuple
         res = outcome.get_result()  # will raise if outcome was exception
-        # postprocess result
+        post_process_result(res)
+        outcome.force_result(new_res)  # to override the return value to the plugin system
 
 Note that hook wrappers don't return results themselves, they merely
 perform tracing or other side effects around the actual hook implementations.
 If the result of the underlying hook is a mutable object, they may modify
 that result but it's probably better to avoid it.
+For more information, consult the `pluggy documentation <http://pluggy.readthedocs.io/en/latest/#wrappers>`_.
 
 Hook function ordering / call example
 -------------------------------------
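Note the rewritten doc example calls helpers it never defines (``do_something_before_next_hook_executes``, ``post_process_result`` and ``new_res`` are placeholders). A self-contained sketch, assuming nothing beyond the documented hookwrapper protocol:

    import pytest

    @pytest.hookimpl(hookwrapper=True)
    def pytest_pyfunc_call(pyfuncitem):
        print('before', pyfuncitem.name)  # runs before the wrapped hook implementations
        outcome = yield
        res = outcome.get_result()        # raises if the outcome was an exception
        outcome.force_result(res)         # or pass a new value to override the result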


@@ -16,7 +16,7 @@ classifiers = [
     'Topic :: Utilities',
 ] + [
     ('Programming Language :: Python :: %s' % x)
-    for x in '2 2.7 3 3.4 3.5 3.6'.split()
+    for x in '2 2.7 3 3.4 3.5 3.6 3.7'.split()
 ]
 
 with open('README.rst') as fd:
@@ -59,7 +59,7 @@ def main():
         'py>=1.5.0',
         'six>=1.10.0',
         'setuptools',
-        'attrs>=17.2.0',
+        'attrs>=17.4.0',
     ]
     # if _PYTEST_SETUP_SKIP_PLUGGY_DEP is set, skip installing pluggy;
     # used by tox.ini to test with pluggy master


@@ -1,4 +1,6 @@
-import os
+"""
+Invoke development tasks.
+"""
 from pathlib import Path
 from subprocess import check_output, check_call
@@ -57,7 +59,7 @@ def regen(ctx):
 @invoke.task()
 def make_tag(ctx, version):
-    """Create a new (local) tag for the release, only if the repository is clean."""
+    """Create a new, local tag for the release, only if the repository is clean."""
     from git import Repo
     repo = Repo('.')
@@ -74,81 +76,24 @@ def make_tag(ctx, version):
     repo.create_tag(version)
 
-@invoke.task()
-def devpi_upload(ctx, version, user, password=None):
-    """Creates and uploads a package to devpi for testing."""
-    if password:
-        print("[generate.devpi_upload] devpi login {}".format(user))
-        check_call(['devpi', 'login', user, '--password', password])
-    check_call(['devpi', 'use', 'https://devpi.net/{}/dev'.format(user)])
-    env = os.environ.copy()
-    env['SETUPTOOLS_SCM_PRETEND_VERSION'] = version
-    check_call(['devpi', 'upload', '--formats', 'sdist,bdist_wheel'], env=env)
-    print("[generate.devpi_upload] package uploaded")
 
 @invoke.task(help={
     'version': 'version being released',
-    'user': 'name of the user on devpi to stage the generated package',
-    'password': 'user password on devpi to stage the generated package '
-                '(if not given assumed logged in)',
 })
-def pre_release(ctx, version, user, password=None):
-    """Generates new docs, release announcements and uploads a new release to devpi for testing."""
+def pre_release(ctx, version):
+    """Generates new docs, release announcements and creates a local tag."""
     announce(ctx, version)
     regen(ctx)
     changelog(ctx, version, write_out=True)
 
     msg = 'Preparing release version {}'.format(version)
     check_call(['git', 'commit', '-a', '-m', msg])
 
     make_tag(ctx, version)
-    devpi_upload(ctx, version=version, user=user, password=password)
 
     print()
     print('[generate.pre_release] Please push your branch and open a PR.')
 
-@invoke.task(help={
-    'version': 'version being released',
-    'user': 'name of the user on devpi to stage the generated package',
-    'pypi_name': 'name of the pypi configuration section in your ~/.pypirc',
-})
-def publish_release(ctx, version, user, pypi_name):
-    """Publishes a package previously created by the 'pre_release' command."""
-    from git import Repo
-    repo = Repo('.')
-    tag_names = [x.name for x in repo.tags]
-    if version not in tag_names:
-        print('Could not find tag for version {}, exiting...'.format(version))
-        raise invoke.Exit(code=2)
-    check_call(['devpi', 'use', 'https://devpi.net/{}/dev'.format(user)])
-    check_call(['devpi', 'push', 'pytest=={}'.format(version), 'pypi:{}'.format(pypi_name)])
-    check_call(['git', 'push', 'git@github.com:pytest-dev/pytest.git', version])
-    emails = [
-        'pytest-dev@python.org',
-        'python-announce-list@python.org'
-    ]
-    if version.endswith('.0'):
-        emails.append('testing-in-python@lists.idyll.org')
-    print('Version {} has been published to PyPI!'.format(version))
-    print()
-    print('Please send an email announcement with the contents from:')
-    print()
-    print(' doc/en/announce/release-{}.rst'.format(version))
-    print()
-    print('To the following mail lists:')
-    print()
-    print(' ', ','.join(emails))
-    print()
-    print('And announce it on twitter adding the #pytest hash tag.')
 
 @invoke.task(help={
     'version': 'version being released',
     'write_out': 'write changes to the actual changelog'
@@ -158,5 +103,4 @@ def changelog(ctx, version, write_out=False):
         addopts = []
     else:
         addopts = ['--draft']
-    check_call(['towncrier', '--version', version] + addopts)
+    check_call(['towncrier', '--yes', '--version', version] + addopts)


@@ -1,6 +1,4 @@
-devpi-client
 gitpython
 invoke
 towncrier
 tox
-wheel


@@ -1,4 +1,5 @@
 # -*- coding: utf-8 -*-
+import re
 import os
 import six
@@ -293,6 +294,118 @@ def test_log_cli_default_level_sections(testdir, request):
     ])
 
+def test_live_logs_unknown_sections(testdir, request):
+    """Check that with live logging enable we are printing the correct headers during
+    start/setup/call/teardown/finish."""
+    filename = request.node.name + '.py'
+    testdir.makeconftest('''
+        import pytest
+        import logging
+
+        def pytest_runtest_protocol(item, nextitem):
+            logging.warning('Unknown Section!')
+
+        def pytest_runtest_logstart():
+            logging.warning('>>>>> START >>>>>')
+
+        def pytest_runtest_logfinish():
+            logging.warning('<<<<< END <<<<<<<')
+    ''')
+    testdir.makepyfile('''
+        import pytest
+        import logging
+
+        @pytest.fixture
+        def fix(request):
+            logging.warning("log message from setup of {}".format(request.node.name))
+            yield
+            logging.warning("log message from teardown of {}".format(request.node.name))
+
+        def test_log_1(fix):
+            logging.warning("log message from test_log_1")
+    ''')
+    testdir.makeini('''
+        [pytest]
+        log_cli=true
+    ''')
+    result = testdir.runpytest()
+    result.stdout.fnmatch_lines([
+        '*WARNING*Unknown Section*',
+        '{}::test_log_1 '.format(filename),
+        '*WARNING* >>>>> START >>>>>*',
+        '*-- live log setup --*',
+        '*WARNING*log message from setup of test_log_1*',
+        '*-- live log call --*',
+        '*WARNING*log message from test_log_1*',
+        'PASSED *100%*',
+        '*-- live log teardown --*',
+        '*WARNING*log message from teardown of test_log_1*',
+        '*WARNING* <<<<< END <<<<<<<*',
+        '=* 1 passed in *=',
+    ])
+
+def test_sections_single_new_line_after_test_outcome(testdir, request):
+    """Check that only a single new line is written between log messages during
+    teardown/finish."""
+    filename = request.node.name + '.py'
+    testdir.makeconftest('''
+        import pytest
+        import logging
+
+        def pytest_runtest_logstart():
+            logging.warning('>>>>> START >>>>>')
+
+        def pytest_runtest_logfinish():
+            logging.warning('<<<<< END <<<<<<<')
+            logging.warning('<<<<< END <<<<<<<')
+    ''')
+    testdir.makepyfile('''
+        import pytest
+        import logging
+
+        @pytest.fixture
+        def fix(request):
+            logging.warning("log message from setup of {}".format(request.node.name))
+            yield
+            logging.warning("log message from teardown of {}".format(request.node.name))
+            logging.warning("log message from teardown of {}".format(request.node.name))
+
+        def test_log_1(fix):
+            logging.warning("log message from test_log_1")
+    ''')
+    testdir.makeini('''
+        [pytest]
+        log_cli=true
+    ''')
+    result = testdir.runpytest()
+    result.stdout.fnmatch_lines([
+        '{}::test_log_1 '.format(filename),
+        '*-- live log start --*',
+        '*WARNING* >>>>> START >>>>>*',
+        '*-- live log setup --*',
+        '*WARNING*log message from setup of test_log_1*',
+        '*-- live log call --*',
+        '*WARNING*log message from test_log_1*',
+        'PASSED *100%*',
+        '*-- live log teardown --*',
+        '*WARNING*log message from teardown of test_log_1*',
+        '*-- live log finish --*',
+        '*WARNING* <<<<< END <<<<<<<*',
+        '*WARNING* <<<<< END <<<<<<<*',
+        '=* 1 passed in *=',
+    ])
+    assert re.search(r'(.+)live log teardown(.+)\n(.+)WARNING(.+)\n(.+)WARNING(.+)',
+                     result.stdout.str(), re.MULTILINE) is not None
+    assert re.search(r'(.+)live log finish(.+)\n(.+)WARNING(.+)\n(.+)WARNING(.+)',
+                     result.stdout.str(), re.MULTILINE) is not None
+
 def test_log_cli_level(testdir):
     # Default log file level
     testdir.makepyfile('''


@@ -2168,6 +2168,47 @@ class TestFixtureMarker(object):
             test_mod1.py::test_func1[m2] PASSED
         """)
 
+    def test_dynamic_parametrized_ordering(self, testdir):
+        testdir.makeini("""
+            [pytest]
+            console_output_style=classic
+        """)
+        testdir.makeconftest("""
+            import pytest
+
+            def pytest_configure(config):
+                class DynamicFixturePlugin(object):
+                    @pytest.fixture(scope='session', params=['flavor1', 'flavor2'])
+                    def flavor(self, request):
+                        return request.param
+                config.pluginmanager.register(DynamicFixturePlugin(), 'flavor-fixture')
+
+            @pytest.fixture(scope='session', params=['vxlan', 'vlan'])
+            def encap(request):
+                return request.param
+
+            @pytest.fixture(scope='session', autouse='True')
+            def reprovision(request, flavor, encap):
+                pass
+        """)
+        testdir.makepyfile("""
+            def test(reprovision):
+                pass
+            def test2(reprovision):
+                pass
+        """)
+        result = testdir.runpytest("-v")
+        result.stdout.fnmatch_lines("""
+            test_dynamic_parametrized_ordering.py::test[flavor1-vxlan] PASSED
+            test_dynamic_parametrized_ordering.py::test2[flavor1-vxlan] PASSED
+            test_dynamic_parametrized_ordering.py::test[flavor2-vxlan] PASSED
+            test_dynamic_parametrized_ordering.py::test2[flavor2-vxlan] PASSED
+            test_dynamic_parametrized_ordering.py::test[flavor2-vlan] PASSED
+            test_dynamic_parametrized_ordering.py::test2[flavor2-vlan] PASSED
+            test_dynamic_parametrized_ordering.py::test[flavor1-vlan] PASSED
+            test_dynamic_parametrized_ordering.py::test2[flavor1-vlan] PASSED
+        """)
+
     def test_class_ordering(self, testdir):
         testdir.makeini("""
             [pytest]


@@ -147,6 +147,28 @@ class TestMockDecoration(object):
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=1)
 
+    def test_unittest_mock_and_pypi_mock(self, testdir):
+        pytest.importorskip("unittest.mock")
+        pytest.importorskip("mock", "1.0.1")
+        testdir.makepyfile("""
+            import mock
+            import unittest.mock
+            class TestBoth(object):
+                @unittest.mock.patch("os.path.abspath")
+                def test_hello(self, abspath):
+                    import os
+                    os.path.abspath("hello")
+                    abspath.assert_any_call("hello")
+
+                @mock.patch("os.path.abspath")
+                def test_hello_mock(self, abspath):
+                    import os
+                    os.path.abspath("hello")
+                    abspath.assert_any_call("hello")
+        """)
+        reprec = testdir.inline_run()
+        reprec.assertoutcome(passed=2)
+
     def test_mock(self, testdir):
         pytest.importorskip("mock", "1.0.1")
         testdir.makepyfile("""


@@ -132,3 +132,13 @@ class TestRaises(object):
         with pytest.raises(AssertionError, match=expr):
             with pytest.raises(ValueError, match=msg):
                 int('asdf', base=10)
+
+    def test_raises_match_wrong_type(self):
+        """Raising an exception with the wrong type and match= given.
+
+        pytest should throw the unexpected exception - the pattern match is not
+        really relevant if we got a different exception.
+        """
+        with pytest.raises(ValueError):
+            with pytest.raises(IndexError, match='nomatch'):
+                int('asdf')


@@ -361,7 +361,7 @@ class TestLastFailed(object):
         result = testdir.runpytest('--lf')
         result.stdout.fnmatch_lines([
-            'collected 4 items',
+            'collected 4 items / 2 deselected',
             'run-last-failure: rerun previous 2 failures',
             '*2 failed, 2 deselected in*',
         ])


@@ -863,10 +863,10 @@ def test_record_property(testdir):
         import pytest
 
         @pytest.fixture
-        def other(record_xml_property):
-            record_xml_property("bar", 1)
-        def test_record(record_xml_property, other):
-            record_xml_property("foo", "<1");
+        def other(record_property):
+            record_property("bar", 1)
+        def test_record(record_property, other):
+            record_property("foo", "<1");
     """)
     result, dom = runandparse(testdir, '-rw')
     node = dom.find_first_by_tag("testsuite")
@@ -877,15 +877,15 @@ def test_record_property(testdir):
     pnodes[1].assert_attr(name="foo", value="<1")
     result.stdout.fnmatch_lines([
         'test_record_property.py::test_record',
-        '*record_xml_property*experimental*',
+        '*record_property*experimental*',
     ])
 
 
 def test_record_property_same_name(testdir):
     testdir.makepyfile("""
-        def test_record_with_same_name(record_xml_property):
-            record_xml_property("foo", "bar")
-            record_xml_property("foo", "baz")
+        def test_record_with_same_name(record_property):
+            record_property("foo", "bar")
+            record_property("foo", "baz")
     """)
     result, dom = runandparse(testdir, '-rw')
     node = dom.find_first_by_tag("testsuite")
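Note: `record_xml_property` is renamed to `record_property` throughout; the fixture is otherwise unchanged (and still flagged as experimental in the warning). In user code the renamed fixture looks like this (a sketch, key and value chosen arbitrarily):

    def test_upload_speed(record_property):
        # adds <property name="upload_speed_mbps" value="42"/> to this
        # test's entry in the XML written via --junitxml
        record_property("upload_speed_mbps", 42)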
@@ -141,19 +141,86 @@ class TestPDB(object):
         child.sendeof()
         self.flush(child)
 
-    def test_pdb_interaction_capture(self, testdir):
+    def test_pdb_print_captured_stdout(self, testdir):
         p1 = testdir.makepyfile("""
             def test_1():
-                print("getrekt")
+                print("get\\x20rekt")
                 assert False
         """)
         child = testdir.spawn_pytest("--pdb %s" % p1)
-        child.expect("getrekt")
+        child.expect("captured stdout")
+        child.expect("get rekt")
+        child.expect("(Pdb)")
+        child.sendeof()
+        rest = child.read().decode("utf8")
+        assert "1 failed" in rest
+        assert "get rekt" not in rest
+        self.flush(child)
+
+    def test_pdb_print_captured_stderr(self, testdir):
+        p1 = testdir.makepyfile("""
+            def test_1():
+                import sys
+                sys.stderr.write("get\\x20rekt")
+                assert False
+        """)
+        child = testdir.spawn_pytest("--pdb %s" % p1)
+        child.expect("captured stderr")
+        child.expect("get rekt")
+        child.expect("(Pdb)")
+        child.sendeof()
+        rest = child.read().decode("utf8")
+        assert "1 failed" in rest
+        assert "get rekt" not in rest
+        self.flush(child)
+
+    def test_pdb_dont_print_empty_captured_stdout_and_stderr(self, testdir):
+        p1 = testdir.makepyfile("""
+            def test_1():
+                assert False
+        """)
+        child = testdir.spawn_pytest("--pdb %s" % p1)
+        child.expect("(Pdb)")
+        output = child.before.decode("utf8")
+        child.sendeof()
+        assert "captured stdout" not in output
+        assert "captured stderr" not in output
+        self.flush(child)
+
+    @pytest.mark.parametrize('showcapture', ['all', 'no', 'log'])
+    def test_pdb_print_captured_logs(self, testdir, showcapture):
+        p1 = testdir.makepyfile("""
+            def test_1():
+                import logging
+                logging.warn("get " + "rekt")
+                assert False
+        """)
+        child = testdir.spawn_pytest("--show-capture=%s --pdb %s" % (showcapture, p1))
+        if showcapture in ('all', 'log'):
+            child.expect("captured log")
+            child.expect("get rekt")
+        child.expect("(Pdb)")
+        child.sendeof()
+        rest = child.read().decode("utf8")
+        assert "1 failed" in rest
+        self.flush(child)
+
+    def test_pdb_print_captured_logs_nologging(self, testdir):
+        p1 = testdir.makepyfile("""
+            def test_1():
+                import logging
+                logging.warn("get " + "rekt")
+                assert False
+        """)
+        child = testdir.spawn_pytest("--show-capture=all --pdb "
+                                     "-p no:logging %s" % p1)
+        child.expect("get rekt")
+        output = child.before.decode("utf8")
+        assert "captured log" not in output
         child.expect("(Pdb)")
         child.sendeof()
         rest = child.read().decode("utf8")
         assert "1 failed" in rest
-        assert "getrekt" not in rest
         self.flush(child)
 
     def test_pdb_interaction_exception(self, testdir):
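Note: the single `test_pdb_interaction_capture` test is split into per-stream tests: before dropping into the prompt, `--pdb` now prints each non-empty "captured stdout" / "captured stderr" / "captured log" section exactly once and honours `--show-capture`. A sketch of a module to observe this with (hypothetical file name):

    # test_capture_pdb_demo.py
    import logging

    def test_fails_noisily():
        print("some stdout")
        logging.warning("some log line")
        assert False

    # `pytest --pdb --show-capture=all test_capture_pdb_demo.py` prints the
    # captured sections once before the (Pdb) prompt; with
    # `--show-capture=no` they are suppressed.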
@@ -206,13 +206,13 @@ class TestWarns(object):
             with pytest.warns(RuntimeWarning):
                 warnings.warn("user", UserWarning)
         excinfo.match(r"DID NOT WARN. No warnings of type \(.+RuntimeWarning.+,\) was emitted. "
-                      r"The list of emitted warnings is: \[UserWarning\('user',\)\].")
+                      r"The list of emitted warnings is: \[UserWarning\('user',?\)\].")
 
         with pytest.raises(pytest.fail.Exception) as excinfo:
             with pytest.warns(UserWarning):
                 warnings.warn("runtime", RuntimeWarning)
         excinfo.match(r"DID NOT WARN. No warnings of type \(.+UserWarning.+,\) was emitted. "
-                      r"The list of emitted warnings is: \[RuntimeWarning\('runtime',\)\].")
+                      r"The list of emitted warnings is: \[RuntimeWarning\('runtime',?\)\].")
 
         with pytest.raises(pytest.fail.Exception) as excinfo:
             with pytest.warns(UserWarning):
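Note: the `,?` accounts for the exception `repr()` losing its trailing comma on newer Python versions (e.g. `UserWarning('user')` rather than `UserWarning('user',)`). A minimal sketch of the failure mode the message belongs to (hypothetical test name):

    import warnings
    import pytest

    def test_did_not_warn_lists_emitted_warnings():
        # pytest.warns fails via pytest.fail.Exception when no matching
        # warning was emitted; the message lists what was actually seen
        with pytest.raises(pytest.fail.Exception, match=r"DID NOT WARN"):
            with pytest.warns(RuntimeWarning):
                warnings.warn("user", UserWarning)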
@@ -240,6 +240,20 @@ def test_exclude(testdir):
     result.stdout.fnmatch_lines(["*1 passed*"])
 
 
+def test_deselect(testdir):
+    testdir.makepyfile(test_a="""
+        import pytest
+        def test_a1(): pass
+        @pytest.mark.parametrize('b', range(3))
+        def test_a2(b): pass
+    """)
+    result = testdir.runpytest("-v", "--deselect=test_a.py::test_a2[1]", "--deselect=test_a.py::test_a2[2]")
+    assert result.ret == 0
+    result.stdout.fnmatch_lines(["*2 passed, 2 deselected*"])
+    for line in result.stdout.lines:
+        assert not line.startswith(('test_a.py::test_a2[1]', 'test_a.py::test_a2[2]'))
+
+
 def test_sessionfinish_with_start(testdir):
     testdir.makeconftest("""
         import os
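Note: the new `--deselect` option takes full node ids, so individual parametrizations can be excluded from the command line; deselection happens during collection, before setup. An illustrative invocation (file name hypothetical):

    import pytest

    # equivalent to:
    #   pytest -v --deselect "test_a.py::test_a2[1]" --deselect "test_a.py::test_a2[2]"
    pytest.main([
        "-v",
        "--deselect", "test_a.py::test_a2[1]",
        "--deselect", "test_a.py::test_a2[2]",
    ])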
@@ -32,16 +32,19 @@ class Option(object):
         return values
 
-def pytest_generate_tests(metafunc):
-    if "option" in metafunc.fixturenames:
-        metafunc.addcall(id="default",
-                         funcargs={'option': Option(verbose=False)})
-        metafunc.addcall(id="verbose",
-                         funcargs={'option': Option(verbose=True)})
-        metafunc.addcall(id="quiet",
-                         funcargs={'option': Option(verbose=-1)})
-        metafunc.addcall(id="fulltrace",
-                         funcargs={'option': Option(fulltrace=True)})
+@pytest.fixture(params=[
+    Option(verbose=False),
+    Option(verbose=True),
+    Option(verbose=-1),
+    Option(fulltrace=True),
+], ids=[
+    "default",
+    "verbose",
+    "quiet",
+    "fulltrace",
+])
+def option(request):
+    return request.param
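Note: `metafunc.addcall()` is deprecated, and the rewrite expresses the same four variants as a parametrized fixture, where `params=` supplies the values and `ids=` the matching test-id suffixes. A self-contained sketch of the pattern (this `Option` stub stands in for the real class defined earlier in the file; its constructor signature is assumed):

    import pytest

    class Option(object):  # stand-in for illustration only
        def __init__(self, verbose=False, fulltrace=False):
            self.verbose = verbose
            self.fulltrace = fulltrace

    @pytest.fixture(params=[Option(verbose=False), Option(verbose=True)],
                    ids=["default", "verbose"])
    def option(request):
        return request.param  # one run per entry in params

    def test_uses_option(option):
        assert isinstance(option, Option)  # runs as [default] and [verbose]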
 @pytest.mark.parametrize('input,expected', [
@@ -431,11 +434,36 @@ class TestTerminalFunctional(object):
         )
         result = testdir.runpytest("-k", "test_two:", testpath)
         result.stdout.fnmatch_lines([
+            "collected 3 items / 1 deselected",
             "*test_deselected.py ..*",
-            "=* 1 test*deselected *=",
         ])
         assert result.ret == 0
 
+    def test_show_deselected_items_using_markexpr_before_test_execution(
+            self, testdir):
+        testdir.makepyfile("""
+            import pytest
+
+            @pytest.mark.foo
+            def test_foobar():
+                pass
+
+            @pytest.mark.bar
+            def test_bar():
+                pass
+
+            def test_pass():
+                pass
+        """)
+        result = testdir.runpytest('-m', 'not foo')
+        result.stdout.fnmatch_lines([
+            "collected 3 items / 1 deselected",
+            "*test_show_des*.py ..*",
+            "*= 2 passed, 1 deselected in * =*",
+        ])
+        assert "= 1 deselected =" not in result.stdout.str()
+        assert result.ret == 0
+
     def test_no_skip_summary_if_failure(self, testdir):
         testdir.makepyfile("""
             import pytest
@@ -657,10 +685,12 @@ def test_color_yes_collection_on_non_atty(testdir, verbose):
 def test_getreportopt():
-    class config(object):
-        class option(object):
+    class Config(object):
+        class Option(object):
             reportchars = ""
             disable_warnings = True
+        option = Option()
+    config = Config()
 
     config.option.reportchars = "sf"
     assert getreportopt(config) == "sf"
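Note: with the renamed helper classes, `config.option` is now an instance rather than the class itself. For context, a hedged sketch of what `getreportopt` does with these inputs (internal `_pytest.terminal` API, pytest 3.x layout assumed; class names here are illustrative only):

    from _pytest.terminal import getreportopt

    class _Option(object):
        reportchars = "sf"
        disable_warnings = True

    class _Config(object):
        option = _Option()

    # warnings disabled: only the requested chars survive
    print(getreportopt(_Config()))  # -> "sf"; with disable_warnings=False,
                                    # a "w" would be appended as well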
@@ -826,31 +856,47 @@ def pytest_report_header(config, startdir):
     def test_show_capture(self, testdir):
         testdir.makepyfile("""
             import sys
+            import logging
             def test_one():
                 sys.stdout.write('!This is stdout!')
                 sys.stderr.write('!This is stderr!')
+                logging.warning('!This is a warning log msg!')
                 assert False, 'Something failed'
         """)
 
         result = testdir.runpytest("--tb=short")
-        result.stdout.fnmatch_lines(["!This is stdout!"])
-        result.stdout.fnmatch_lines(["!This is stderr!"])
+        result.stdout.fnmatch_lines(["!This is stdout!",
+                                     "!This is stderr!",
+                                     "*WARNING*!This is a warning log msg!"])
 
-        result = testdir.runpytest("--show-capture=both", "--tb=short")
-        result.stdout.fnmatch_lines(["!This is stdout!"])
-        result.stdout.fnmatch_lines(["!This is stderr!"])
+        result = testdir.runpytest("--show-capture=all", "--tb=short")
+        result.stdout.fnmatch_lines(["!This is stdout!",
+                                     "!This is stderr!",
+                                     "*WARNING*!This is a warning log msg!"])
 
-        result = testdir.runpytest("--show-capture=stdout", "--tb=short")
-        assert "!This is stderr!" not in result.stdout.str()
-        assert "!This is stdout!" in result.stdout.str()
+        stdout = testdir.runpytest(
+            "--show-capture=stdout", "--tb=short").stdout.str()
+        assert "!This is stderr!" not in stdout
+        assert "!This is stdout!" in stdout
+        assert "!This is a warning log msg!" not in stdout
 
-        result = testdir.runpytest("--show-capture=stderr", "--tb=short")
-        assert "!This is stdout!" not in result.stdout.str()
-        assert "!This is stderr!" in result.stdout.str()
+        stdout = testdir.runpytest(
+            "--show-capture=stderr", "--tb=short").stdout.str()
+        assert "!This is stdout!" not in stdout
+        assert "!This is stderr!" in stdout
+        assert "!This is a warning log msg!" not in stdout
 
-        result = testdir.runpytest("--show-capture=no", "--tb=short")
-        assert "!This is stdout!" not in result.stdout.str()
-        assert "!This is stderr!" not in result.stdout.str()
+        stdout = testdir.runpytest(
+            "--show-capture=log", "--tb=short").stdout.str()
+        assert "!This is stdout!" not in stdout
+        assert "!This is stderr!" not in stdout
+        assert "!This is a warning log msg!" in stdout
+
+        stdout = testdir.runpytest(
+            "--show-capture=no", "--tb=short").stdout.str()
+        assert "!This is stdout!" not in stdout
+        assert "!This is stderr!" not in stdout
+        assert "!This is a warning log msg!" not in stdout
 
     @pytest.mark.xfail("not hasattr(os, 'dup')")
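Note: `--show-capture` grows a `log` section and an `all` value that replaces the old `both`; each value limits which captured sections are printed for failing tests. A small module to try the flags against (hypothetical file name):

    # test_capture_demo.py -- run with e.g. `pytest --show-capture=log --tb=short`
    import logging
    import sys

    def test_one():
        sys.stdout.write('!This is stdout!')
        sys.stderr.write('!This is stderr!')
        logging.warning('!This is a warning log msg!')
        assert False, 'Something failed'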