Merge pull request #6716 from bluetech/features-to-master-for-real

Merge the features branch into master, before we stop using it
Bruno Oliveira 2020-02-12 11:52:32 -03:00 committed by GitHub
commit d79179a239
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
103 changed files with 2286 additions and 893 deletions

View File

@ -53,8 +53,8 @@ repos:
- id: changelogs-rst
name: changelog filenames
language: fail
entry: 'changelog files must be named ####.(feature|bugfix|doc|deprecation|removal|vendor|trivial).rst'
exclude: changelog/(\d+\.(feature|improvement|bugfix|doc|deprecation|removal|vendor|trivial).rst|README.rst|_template.rst)
entry: 'changelog files must be named ####.(breaking|bugfix|deprecation|doc|feature|improvement|trivial|vendor).rst'
exclude: changelog/(\d+\.(breaking|bugfix|deprecation|doc|feature|improvement|trivial|vendor).rst|README.rst|_template.rst)
files: ^changelog/
- id: py-deprecated
name: py library is deprecated

AUTHORS
View File

@ -52,6 +52,7 @@ Carl Friedrich Bolz
Carlos Jenkins
Ceridwen
Charles Cloud
Charles Machalow
Charnjit SiNGH (CCSJ)
Chris Lamb
Christian Boelsen
@ -59,13 +60,13 @@ Christian Fetzer
Christian Neumüller
Christian Theunert
Christian Tismer
Christopher Gilling
Christoph Buelter
Christopher Dignam
Christopher Gilling
Claudio Madotto
CrazyMerlyn
Cyrus Maden
Damian Skrzypczak
Dhiren Serai
Daniel Grana
Daniel Hahler
Daniel Nuri
@ -80,6 +81,7 @@ David Szotten
David Vierra
Daw-Ran Liou
Denis Kirisov
Dhiren Serai
Diego Russo
Dmitry Dygalo
Dmitry Pribysh
@ -121,6 +123,7 @@ Ilya Konstantinov
Ionuț Turturică
Iwan Briquemont
Jaap Broekhuizen
Jakub Mitoraj
Jan Balster
Janne Vanhala
Jason R. Coombs
@ -207,8 +210,10 @@ Omer Hadari
Ondřej Súkup
Oscar Benjamin
Patrick Hayes
Pauli Virtanen
Paweł Adamczak
Pedro Algarvio
Philipp Loose
Pieter Mulder
Piotr Banaszkiewicz
Pulkit Goyal
@ -269,6 +274,7 @@ Vidar T. Fauske
Virgil Dupras
Vitaly Lashmanov
Vlad Dragos
Vladyslav Rachek
Volodymyr Piskun
Wei Lin
Wil Cooley

View File

@ -0,0 +1 @@
``pytest.mark.parametrize`` accepts integers for ``ids`` again, converting them to strings.
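A minimal sketch of the restored behavior (the test name and values are illustrative):

.. code-block:: python

    import pytest

    @pytest.mark.parametrize("value", [101, 202], ids=[1, 2])
    def test_integer_ids(value):
        # the integer ids are converted to the node IDs
        # "test_integer_ids[1]" and "test_integer_ids[2]"
        assert value > 0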

View File

@ -0,0 +1,5 @@
Option ``--no-print-logs`` is deprecated and meant to be removed in a future release. If you use ``--no-print-logs``, please try out ``--show-capture`` and
provide feedback.
The ``--show-capture`` command-line option was added in ``pytest 3.5.0`` and allows specifying how to
display captured output when tests fail: ``no``, ``stdout``, ``stderr``, ``log`` or ``all`` (the default).

View File

@ -0,0 +1 @@
``--trace`` now works with unittests.

View File

@ -0,0 +1 @@
Fixed some warning reports produced by pytest to point to the correct location of the warning in the user's code.

View File

@ -0,0 +1 @@
Use "yellow" main color with any XPASSED tests.

View File

@ -0,0 +1 @@
New :ref:`--capture=tee-sys <capture-method>` option to allow both live printing and capturing of test output.

View File

@ -0,0 +1,4 @@
Revert "A warning is now issued when assertions are made for ``None``".
The warning proved to be less useful than initially expected and had quite a
few false positive cases.

View File

@ -0,0 +1 @@
``tmpdir_factory.mktemp`` now fails when given absolute and non-normalized paths.

View File

@ -0,0 +1,2 @@
Now all arguments to ``@pytest.mark.parametrize`` need to be explicitly declared in the function signature or via ``indirect``.
Previously it was possible to omit an argument if a fixture with the same name existed, which was just an accident of implementation and was not meant to be a part of the API.
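For illustration, a hedged sketch of the now-required explicit declaration (the fixture and argument names are hypothetical):

.. code-block:: python

    import pytest

    @pytest.fixture
    def database(request):
        # with indirect=True the parametrized value arrives via request.param
        return {"dsn": request.param}

    @pytest.mark.parametrize("database", ["sqlite://"], indirect=True)
    def test_connect(database):
        # "database" must be declared in the signature (or routed via indirect);
        # silently relying on a same-named fixture is no longer supported
        assert database["dsn"] == "sqlite://"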

View File

@ -0,0 +1 @@
Report ``PytestUnknownMarkWarning`` at the level of the user's code, not ``pytest``'s.

View File

@ -0,0 +1,10 @@
Deprecate using direct constructors for ``Nodes``.
Instead they are now constructed via ``Node.from_parent``.
This transitional mechanism enables us to disentangle the very intensely
entangled ``Node`` relationships by enforcing more controlled creation/configuration patterns.
As part of that, ``session`` and ``config`` are already disallowed parameters, and as we work on the details we might need to disallow a few more as well.
Subclasses are expected to use ``super().from_parent`` if they intend to expand the creation of ``Nodes``.
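A minimal sketch of the intended pattern, reusing the ``YamlItem`` example from the documentation elsewhere in this change (the ``spec`` attribute is illustrative):

.. code-block:: python

    import pytest

    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super().__init__(name, parent)
            self.spec = spec

        @classmethod
        def from_parent(cls, parent, *, name, spec):
            # extend creation through the named constructor instead of
            # calling YamlItem(...) directly
            return super().from_parent(parent=parent, name=name, spec=spec)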

View File

@ -0,0 +1 @@
The ``pytest_warning_captured`` hook now receives a ``location`` parameter with the code location that generated the warning.
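A hedged ``conftest.py`` sketch of consuming the new parameter (the printing is purely illustrative):

.. code-block:: python

    def pytest_warning_captured(warning_message, when, item, location):
        # location is a (filename, linenumber, function) tuple, or None
        if location is not None:
            filename, lineno, func = location
            print("warning at {}:{} in {}".format(filename, lineno, func))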

View File

@ -0,0 +1 @@
Fix interaction with ``--pdb`` and unittests: do not use unittest's ``TestCase.debug()``.

View File

@ -0,0 +1 @@
pytester: the ``testdir`` fixture respects environment settings from the ``monkeypatch`` fixture for inner runs.
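A minimal sketch of what now works (``EXAMPLE_VAR`` is a hypothetical variable name; the outer suite must enable the ``pytester`` plugin to get ``testdir``):

.. code-block:: python

    # requires `pytest_plugins = "pytester"` in the outer conftest.py
    def test_env_visible_to_inner_run(testdir, monkeypatch):
        monkeypatch.setenv("EXAMPLE_VAR", "1")
        testdir.makepyfile(
            """
            import os

            def test_env():
                assert os.environ["EXAMPLE_VAR"] == "1"
            """
        )
        result = testdir.runpytest()
        result.assert_outcomes(passed=1)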

View File

@ -0,0 +1 @@
``--fulltrace`` is honored with collection errors.

View File

@ -0,0 +1 @@
Matching of ``-k EXPRESSION`` to test names is now case-insensitive.

View File

@ -0,0 +1,3 @@
Fix summary entries appearing twice when ``f/F`` and ``s/S`` report chars were used at the same time in the ``-r`` command-line option (for example ``-rFf``).
The upper-case variants were never documented, and the lower-case form is preferred.

View File

@ -0,0 +1 @@
Make `--showlocals` work also with `--tb=short`.

View File

@ -0,0 +1 @@
Remove usage of ``parser`` module, deprecated in Python 3.9.

View File

@ -0,0 +1,3 @@
Plugins specified with ``-p`` are now loaded after internal plugins, which results in their hooks being called *before* the internal ones.
This makes the ``-p`` behavior consistent with ``PYTEST_PLUGINS``.

View File

@ -0,0 +1 @@
`--disable-warnings` is honored with `-ra` and `-rA`.

View File

@ -0,0 +1 @@
Changed default for `-r` to `fE`, which displays failures and errors in the :ref:`short test summary <pytest.detailed_failed_tests_usage>`. `-rN` can be used to disable it (the old behavior).

View File

@ -0,0 +1 @@
New options have been added to the :confval:`junit_logging` option: ``log``, ``out-err``, and ``all``.
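For example, one way to select the new values from the command line (a hedged sketch; ``-o`` overrides an ini option for a single run):

.. code-block:: python

    import pytest

    # write captured logging, stdout and stderr into the JUnit XML report
    pytest.main(["--junitxml=report.xml", "-o", "junit_logging=all"])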

View File

@ -0,0 +1 @@
Fix node ids which contain a parametrized empty-string variable.

View File

@ -0,0 +1,3 @@
Removed the long-deprecated ``pytest_itemstart`` hook.
This hook had been marked as deprecated and had not even been called by pytest for over 10 years.

View File

@ -0,0 +1 @@
Add support for matching lines consecutively with :attr:`LineMatcher <_pytest.pytester.LineMatcher>`'s :func:`~_pytest.pytester.LineMatcher.fnmatch_lines` and :func:`~_pytest.pytester.LineMatcher.re_match_lines`.
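A hedged sketch of how this might look in a ``testdir``-based test; the keyword name ``consecutive`` is an assumption drawn from this entry, so check the ``LineMatcher`` documentation for the exact spelling:

.. code-block:: python

    def test_header_lines_are_adjacent(testdir):
        result = testdir.runpytest()
        # assumption: consecutive=True requires the matched lines to appear
        # directly after one another in the output
        result.stdout.fnmatch_lines(
            ["*test session starts*", "platform*"],
            consecutive=True,
        )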

View File

@ -0,0 +1,4 @@
Code is now highlighted in tracebacks when ``pygments`` is installed.
Users are encouraged to install ``pygments`` into their environment and provide feedback, because
the plan is to make ``pygments`` a regular dependency in the future.

View File

@ -0,0 +1 @@
``pytest.mark.parametrize`` supports iterators and generators for ``ids``.
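A minimal sketch (parameter values and the id pattern are illustrative):

.. code-block:: python

    import itertools

    import pytest

    @pytest.mark.parametrize(
        "value", [10, 20, 30], ids=("case{}".format(i) for i in itertools.count())
    )
    def test_generated_ids(value):
        # ids may now be a generator; entries are drawn lazily as needed
        assert value % 10 == 0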

View File

@ -18,7 +18,7 @@ Each file should be named like ``<ISSUE>.<TYPE>.rst``, where
* ``bugfix``: fixes a bug.
* ``doc``: documentation improvement, like rewording an entire section or adding missing docs.
* ``deprecation``: feature deprecation.
* ``removal``: feature removal.
* ``breaking``: a change which may break existing suites, such as feature removal or behavior change.
* ``vendor``: changes in packages vendored in pytest.
* ``trivial``: fixing a small typo or internal change that might be noteworthy.

View File

@ -3,6 +3,61 @@
Backwards Compatibility Policy
==============================
.. versionadded:: 6.0
pytest is actively evolving and is a project that has been decades in the making;
we keep learning about new and better structures to express different details about testing.
While we implement those modifications we try to ensure an easy transition and don't want to impose unnecessary churn on our users and community/plugin authors.
As of now, pytest considers multiple types of backward compatibility transitions:
a) trivial: APIs which trivially translate to the new mechanism,
and do not cause problematic changes.
We try to support those indefinitely while encouraging users to switch to newer/better mechanisms through documentation.
b) transitional: the old and new API don't conflict
and we can help users transition by using warnings, while supporting both for a prolonged time.
We will only start the removal of deprecated functionality in major releases (e.g. if we deprecate something in 3.0 we will start to remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate something in 3.9 and 4.0 is the next release, we start to remove it in 5.0, not in 4.0).
When the deprecation expires (e.g. 4.0 is released), we won't remove the deprecated functionality immediately, but will use the standard warning filters to turn them into **errors** by default. This approach makes it explicit that removal is imminent, and still gives you time to turn the deprecated feature into a warning instead of an error so it can be dealt with in your own time. In the next minor release (e.g. 4.1), the feature will be effectively removed.
c) true breakage: should only be considered when a normal transition is unreasonably unsustainable and would set back important development/features by years.
In addition, they should be limited to APIs where the number of actual users is very small (for example only impacting some plugins), and can be coordinated with the community in advance.
Examples for such upcoming changes:
* removal of ``pytest_runtest_protocol/nextitem`` - `#895`_
* rearranging of the node tree to include ``FunctionDefinition``
* rearranging of ``SetupState`` `#895`_
True breakages must be announced first in an issue containing:
* Detailed description of the change
* Rationale
* Expected impact on users and plugin authors (example in `#895`_)
After there's no hard *-1* on the issue it should be followed up by an initial proof-of-concept Pull Request.
This POC serves both as a coordination point to assess impact and as potential inspiration for coming up with a transitional solution after all.
After a reasonable amount of time the PR can be merged as the basis for a new major release.
For the PR to mature from POC to acceptance, it must contain:
* Setup of deprecation errors/warnings that help users fix and port their code. If it is possible to introduce a deprecation period under the current series, before the true breakage, it should be introduced in a separate PR and be part of the current release stream.
* Detailed description of the rationale and examples on how to port code in ``doc/en/deprecations.rst``.
History
=========
Focus primarily on smooth transitions - stance (pre 6.0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Keeping backwards compatibility has a very high priority in the pytest project. Although we have deprecated functionality over the years, most of it is still supported. All deprecations in pytest were done because simpler or more efficient ways of accomplishing the same tasks have emerged, making the old way of doing things unnecessary.
With the pytest 3.0 release we introduced a clear communication scheme for when we will actually remove the old busted joint and politely ask you to use the new hotness instead, while giving you enough time to adjust your tests or raise concerns if there are valid reasons to keep deprecated functionality around.
@ -20,3 +75,6 @@ Deprecation Roadmap
Features currently deprecated and removed in previous releases can be found in :ref:`deprecations`.
We track future deprecation and removal of features using milestones and the `deprecation <https://github.com/pytest-dev/pytest/issues?q=label%3A%22type%3A+deprecation%22>`_ and `removal <https://github.com/pytest-dev/pytest/labels/type%3A%20removal>`_ labels on GitHub.
.. _`#895`: https://github.com/pytest-dev/pytest/issues/895

View File

@ -21,27 +21,36 @@ file descriptors. This allows to capture output from simple
print statements as well as output from a subprocess started by
a test.
.. _capture-method:
Setting capturing methods or disabling capturing
-------------------------------------------------
There are two ways in which ``pytest`` can perform capturing:
There are three ways in which ``pytest`` can perform capturing:
* file descriptor (FD) level capturing (default): All writes going to the
* ``fd`` (file descriptor) level capturing (default): All writes going to the
operating system file descriptors 1 and 2 will be captured.
* ``sys`` level capturing: Only writes to Python files ``sys.stdout``
and ``sys.stderr`` will be captured. No capturing of writes to
filedescriptors is performed.
* ``tee-sys`` capturing: Python writes to ``sys.stdout`` and ``sys.stderr``
will be captured, however the writes will also be passed-through to
the actual ``sys.stdout`` and ``sys.stderr``. This allows output to be
'live printed' and captured for plugin use, such as junitxml (new in pytest 5.4).
.. _`disable capturing`:
You can influence output capturing mechanisms from the command line:
.. code-block:: bash
pytest -s # disable all capturing
pytest --capture=sys # replace sys.stdout/stderr with in-mem files
pytest --capture=fd # also point filedescriptors 1 and 2 to temp file
pytest -s # disable all capturing
pytest --capture=sys # replace sys.stdout/stderr with in-mem files
pytest --capture=fd # also point filedescriptors 1 and 2 to temp file
pytest --capture=tee-sys # combines 'sys' and '-s', capturing sys.stdout/stderr
# and passing it along to the actual sys.stdout/stderr
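As an illustrative sketch (the test body is hypothetical), ``tee-sys`` lets a test's output reach the terminal immediately while still being captured for reporting plugins such as ``junitxml``:

.. code-block:: python

    # run with: pytest --capture=tee-sys
    def test_reports_progress():
        # shown live in the terminal *and* still captured for reporting
        print("working on step 1")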
.. _printdebugging:

View File

@ -19,6 +19,30 @@ Below is a complete list of all pytest features which are considered deprecated.
:class:`_pytest.warning_types.PytestWarning` or subclasses, which can be filtered using
:ref:`standard warning filters <warnings>`.
``--no-print-logs`` command-line option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. deprecated:: 5.4
Option ``--no-print-logs`` is deprecated and meant to be removed in a future release. If you use ``--no-print-logs``, please try out ``--show-capture`` and
provide feedback.
The ``--show-capture`` command-line option was added in ``pytest 3.5.0`` and allows specifying how to
display captured output when tests fail: ``no``, ``stdout``, ``stderr``, ``log`` or ``all`` (the default).
Node Construction changed to ``Node.from_parent``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. deprecated:: 5.3
The construction of nodes should now use the named constructor ``from_parent``.
This limitation of the API surface is intended to enable better/simpler refactoring of the collection tree.
``junit_family`` default value change to "xunit2"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -148,6 +148,10 @@ which implements a substring match on the test names instead of the
exact match on markers that ``-m`` provides. This makes it easy to
select tests based on their names:
.. versionadded:: 5.4
The expression matching is now case-insensitive.
.. code-block:: pytest
$ pytest -v -k http # running with the above defined example module

View File

@ -4,7 +4,7 @@ import pytest
def pytest_collect_file(parent, path):
if path.ext == ".yaml" and path.basename.startswith("test"):
return YamlFile(path, parent)
return YamlFile.from_parent(parent, fspath=path)
class YamlFile(pytest.File):
@ -13,7 +13,7 @@ class YamlFile(pytest.File):
raw = yaml.safe_load(self.fspath.open())
for name, spec in sorted(raw.items()):
yield YamlItem(name, self, spec)
yield YamlItem.from_parent(self, name=name, spec=spec)
class YamlItem(pytest.Item):

View File

@ -398,6 +398,9 @@ The result of this test will be successful:
.. regendoc:wipe
Note that each argument in the ``parametrize`` list should be explicitly declared in the corresponding
Python test function or via ``indirect``.
Parametrizing test methods through per-class configuration
--------------------------------------------------------------

View File

@ -77,6 +77,8 @@ This is also discussed in details in :ref:`test discovery`.
Invoking ``pytest`` versus ``python -m pytest``
-----------------------------------------------
Running pytest with ``python -m pytest [...]`` instead of ``pytest [...]`` yields nearly
equivalent behaviour, except that the former call will add the current directory to ``sys.path``.
Running pytest with ``pytest [...]`` instead of ``python -m pytest [...]`` yields nearly
equivalent behaviour, except that the latter will add the current directory to ``sys.path``, which
is standard ``python`` behavior.
See also :ref:`cmdline`.

View File

@ -738,7 +738,7 @@ ExceptionInfo
pytest.ExitCode
~~~~~~~~~~~~~~~
.. autoclass:: _pytest.main.ExitCode
.. autoclass:: _pytest.config.ExitCode
:members:
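A minimal usage sketch (the test path is hypothetical):

.. code-block:: python

    import pytest

    exit_code = pytest.main(["tests/"])
    if exit_code == pytest.ExitCode.NO_TESTS_COLLECTED:
        print("nothing to run")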
@ -1164,9 +1164,17 @@ passed multiple times. The expected format is ``name=value``. For example::
.. confval:: junit_logging
.. versionadded:: 3.5
.. versionchanged:: 5.4
``log``, ``all``, ``out-err`` options added.
Configures if stdout/stderr should be written to the JUnit XML file. Valid values are
``system-out``, ``system-err``, and ``no`` (the default).
Configures if captured output should be written to the JUnit XML file. Valid values are:
* ``log``: write only ``logging`` captured output.
* ``system-out``: write captured ``stdout`` contents.
* ``system-err``: write captured ``stderr`` contents.
* ``out-err``: write both captured ``stdout`` and ``stderr`` contents.
* ``all``: write captured ``logging``, ``stdout`` and ``stderr`` contents.
* ``no`` (the default): no captured output is written.
.. code-block:: ini

View File

@ -238,17 +238,6 @@ was executed ahead of the ``test_method``.
.. _pdb-unittest-note:
.. note::
Running tests from ``unittest.TestCase`` subclasses with ``--pdb`` will
disable tearDown and cleanup methods for the case that an Exception
occurs. This allows proper post mortem debugging for all applications
which have significant logic in their tearDown machinery. However,
supporting this feature has the following side effect: If people
overwrite ``unittest.TestCase`` ``__call__`` or ``run``, they need to
to overwrite ``debug`` in the same way (this is also true for standard
unittest).
.. note::
Due to architectural differences between the two frameworks, setup and

View File

@ -33,7 +33,7 @@ Running ``pytest`` can result in six different exit codes:
:Exit code 4: pytest command line usage error
:Exit code 5: No tests were collected
They are represented by the :class:`_pytest.main.ExitCode` enum. The exit codes being a part of the public API can be imported and accessed directly using:
They are represented by the :class:`_pytest.config.ExitCode` enum. The exit codes being a part of the public API can be imported and accessed directly using:
.. code-block:: python
@ -94,8 +94,8 @@ Pytest supports several ways to run and select tests from the command-line.
pytest -k "MyClass and not method"
This will run tests which contain names that match the given *string expression*, which can
include Python operators that use filenames, class names and function names as variables.
This will run tests which contain names that match the given *string expression* (case-insensitive),
which can include Python operators that use filenames, class names and function names as variables.
The example above will run ``TestMyClass.test_something`` but not ``TestMyClass.test_method_simple``.
.. _nodeids:
@ -169,11 +169,11 @@ option you make sure a trace is shown.
Detailed summary report
-----------------------
The ``-r`` flag can be used to display a "short test summary info" at the end of the test session,
making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.
It defaults to ``fE`` to list failures and errors.
Example:
.. code-block:: python
@ -261,8 +261,12 @@ Here is the full list of available characters that can be used:
- ``X`` - xpassed
- ``p`` - passed
- ``P`` - passed with output
Special characters for (de)selection of groups:
- ``a`` - all except ``pP``
- ``A`` - all
- ``N`` - none, this can be used to display nothing (since ``fE`` is the default)
More than one character can be used, so for example to only see failed and skipped tests, you can execute:

View File

@ -16,8 +16,8 @@ title_format = "pytest {version} ({project_date})"
template = "changelog/_template.rst"
[[tool.towncrier.type]]
directory = "removal"
name = "Removals"
directory = "breaking"
name = "Breaking Changes"
showcontent = true
[[tool.towncrier.type]]

View File

@ -53,19 +53,22 @@ If things do not work right away:
which should throw a KeyError: 'COMPLINE' (which is properly set by the
global argcomplete script).
"""
import argparse
import os
import sys
from glob import glob
from typing import Any
from typing import List
from typing import Optional
class FastFilesCompleter:
"Fast file completer class"
def __init__(self, directories=True):
def __init__(self, directories: bool = True) -> None:
self.directories = directories
def __call__(self, prefix, **kwargs):
def __call__(self, prefix: str, **kwargs: Any) -> List[str]:
"""only called on non option completions"""
if os.path.sep in prefix[1:]:
prefix_dir = len(os.path.dirname(prefix) + os.path.sep)
@ -94,13 +97,13 @@ if os.environ.get("_ARGCOMPLETE"):
sys.exit(-1)
filescompleter = FastFilesCompleter() # type: Optional[FastFilesCompleter]
def try_argcomplete(parser):
def try_argcomplete(parser: argparse.ArgumentParser) -> None:
argcomplete.autocomplete(parser, always_complete_options=False)
else:
def try_argcomplete(parser):
def try_argcomplete(parser: argparse.ArgumentParser) -> None:
pass
filescompleter = None

View File

@ -789,9 +789,7 @@ class FormattedExcinfo:
message = excinfo and excinfo.typename or ""
path = self._makepath(entry.path)
filelocrepr = ReprFileLocation(path, entry.lineno + 1, message)
localsrepr = None
if not short:
localsrepr = self.repr_locals(entry.locals)
localsrepr = self.repr_locals(entry.locals)
return ReprEntry(lines, reprargs, localsrepr, filelocrepr, style)
if excinfo:
lines.extend(self.get_exconly(excinfo, indent=4))
@ -1041,19 +1039,58 @@ class ReprEntry(TerminalRepr):
self.reprfileloc = filelocrepr
self.style = style
def _write_entry_lines(self, tw: TerminalWriter) -> None:
"""Writes the source code portions of a list of traceback entries with syntax highlighting.
Usually entries are lines like these:
" x = 1"
"> assert x == 2"
"E assert 1 == 2"
This function takes care of rendering the "source" portions of it (the lines without
the "E" prefix) using syntax highlighting, taking care to not highlighting the ">"
character, as doing so might break line continuations.
"""
indent_size = 4
def is_fail(line):
return line.startswith("{} ".format(FormattedExcinfo.fail_marker))
if not self.lines:
return
# separate indents and source lines that are not failures: we want to
# highlight the code but not the indentation, which may contain markers
# such as "> assert 0"
indents = []
source_lines = []
for line in self.lines:
if not is_fail(line):
indents.append(line[:indent_size])
source_lines.append(line[indent_size:])
tw._write_source(source_lines, indents)
# failure lines are always completely red and bold
for line in (x for x in self.lines if is_fail(x)):
tw.line(line, bold=True, red=True)
def toterminal(self, tw: TerminalWriter) -> None:
if self.style == "short":
assert self.reprfileloc is not None
self.reprfileloc.toterminal(tw)
for line in self.lines:
red = line.startswith("E ")
tw.line(line, bold=True, red=red)
self._write_entry_lines(tw)
if self.reprlocals:
self.reprlocals.toterminal(tw, indent=" " * 8)
return
if self.reprfuncargs:
self.reprfuncargs.toterminal(tw)
for line in self.lines:
red = line.startswith("E ")
tw.line(line, bold=True, red=red)
self._write_entry_lines(tw)
if self.reprlocals:
tw.line("")
self.reprlocals.toterminal(tw)
@ -1089,9 +1126,9 @@ class ReprLocals(TerminalRepr):
def __init__(self, lines: Sequence[str]) -> None:
self.lines = lines
def toterminal(self, tw: TerminalWriter) -> None:
def toterminal(self, tw: TerminalWriter, indent="") -> None:
for line in self.lines:
tw.line(line)
tw.line(indent + line)
class ReprFuncArgs(TerminalRepr):

View File

@ -146,18 +146,13 @@ class Source:
""" return True if source is parseable, heuristically
deindenting it by default.
"""
from parser import suite as syntax_checker
if deindent:
source = str(self.deindent())
else:
source = str(self)
try:
# compile(source+'\n', "x", "exec")
syntax_checker(source + "\n")
except KeyboardInterrupt:
raise
except Exception:
ast.parse(source)
except (SyntaxError, ValueError, TypeError):
return False
else:
return True

View File

@ -1,3 +1,39 @@
# Reexport TerminalWriter from here instead of py, to make it easier to
# extend or swap our own implementation in the future.
from py.io import TerminalWriter as TerminalWriter # noqa: F401
from typing import List
from typing import Sequence
from py.io import TerminalWriter as BaseTerminalWriter # noqa: F401
class TerminalWriter(BaseTerminalWriter):
def _write_source(self, lines: List[str], indents: Sequence[str] = ()) -> None:
"""Write lines of source code possibly highlighted.
Keeping this private for now because the API is clunky. We should discuss how
to evolve the terminal writer so we can have more precise color support, for example
being able to write part of a line in one color and the rest in another, and so on.
"""
if indents and len(indents) != len(lines):
raise ValueError(
"indents size ({}) should have same size as lines ({})".format(
len(indents), len(lines)
)
)
if not indents:
indents = [""] * len(lines)
source = "\n".join(lines)
new_lines = self._highlight(source).splitlines()
for indent, new_line in zip(indents, new_lines):
self.line(indent + new_line)
def _highlight(self, source):
"""Highlight the given source code according to the "code_highlight" option"""
if not self.hasmarkup:
return source
try:
from pygments.formatters.terminal import TerminalFormatter
from pygments.lexers.python import PythonLexer
from pygments import highlight
except ImportError:
return source
else:
return highlight(source, PythonLexer(), TerminalFormatter(bg="dark"))

View File

@ -80,3 +80,24 @@ def saferepr(obj: Any, maxsize: int = 240) -> str:
around the Repr/reprlib functionality of the standard 2.6 lib.
"""
return SafeRepr(maxsize).repr(obj)
class AlwaysDispatchingPrettyPrinter(pprint.PrettyPrinter):
"""PrettyPrinter that always dispatches (regardless of width)."""
def _format(self, object, stream, indent, allowance, context, level):
p = self._dispatch.get(type(object).__repr__, None)
objid = id(object)
if objid in context or p is None:
return super()._format(object, stream, indent, allowance, context, level)
context[objid] = 1
p(self, object, stream, indent, allowance, context, level + 1)
del context[objid]
def _pformat_dispatch(object, indent=1, width=80, depth=None, *, compact=False):
return AlwaysDispatchingPrettyPrinter(
indent=1, width=80, depth=None, compact=False
).pformat(object)

View File

@ -13,6 +13,7 @@ from typing import Tuple
import _pytest._code
from _pytest import outcomes
from _pytest._io.saferepr import _pformat_dispatch
from _pytest._io.saferepr import safeformat
from _pytest._io.saferepr import saferepr
from _pytest.compat import ATTRS_EQ_FIELD
@ -28,27 +29,6 @@ _reprcompare = None # type: Optional[Callable[[str, object, object], Optional[s
_assertion_pass = None # type: Optional[Callable[[int, str, str], None]]
class AlwaysDispatchingPrettyPrinter(pprint.PrettyPrinter):
"""PrettyPrinter that always dispatches (regardless of width)."""
def _format(self, object, stream, indent, allowance, context, level):
p = self._dispatch.get(type(object).__repr__, None)
objid = id(object)
if objid in context or p is None:
return super()._format(object, stream, indent, allowance, context, level)
context[objid] = 1
p(self, object, stream, indent, allowance, context, level + 1)
del context[objid]
def _pformat_dispatch(object, indent=1, width=80, depth=None, *, compact=False):
return AlwaysDispatchingPrettyPrinter(
indent=1, width=80, depth=None, compact=False
).pformat(object)
def format_explanation(explanation: str) -> str:
"""This formats an explanation

View File

@ -259,7 +259,7 @@ class LFPlugin:
self._report_status = "no previously failed tests, "
if self.config.getoption("last_failed_no_failures") == "none":
self._report_status += "deselecting all items."
config.hook.pytest_deselected(items=items)
config.hook.pytest_deselected(items=items[:])
items[:] = []
else:
self._report_status += "not deselecting items."

View File

@ -10,10 +10,14 @@ import sys
from io import UnsupportedOperation
from tempfile import TemporaryFile
from typing import BinaryIO
from typing import Generator
from typing import Iterable
from typing import Optional
import pytest
from _pytest.compat import CaptureAndPassthroughIO
from _pytest.compat import CaptureIO
from _pytest.config import Config
from _pytest.fixtures import FixtureRequest
patchsysdict = {0: "stdin", 1: "stdout", 2: "stderr"}
@ -26,8 +30,8 @@ def pytest_addoption(parser):
action="store",
default="fd" if hasattr(os, "dup") else "sys",
metavar="method",
choices=["fd", "sys", "no"],
help="per-test capturing method: one of fd|sys|no.",
choices=["fd", "sys", "no", "tee-sys"],
help="per-test capturing method: one of fd|sys|no|tee-sys.",
)
group._addoption(
"-s",
@ -39,7 +43,7 @@ def pytest_addoption(parser):
@pytest.hookimpl(hookwrapper=True)
def pytest_load_initial_conftests(early_config, parser, args):
def pytest_load_initial_conftests(early_config: Config):
ns = early_config.known_args_namespace
if ns.capture == "fd":
_py36_windowsconsoleio_workaround(sys.stdout)
@ -75,14 +79,14 @@ class CaptureManager:
case special handling is needed to ensure the fixtures take precedence over the global capture.
"""
def __init__(self, method):
def __init__(self, method) -> None:
self._method = method
self._global_capturing = None
self._current_item = None
self._capture_fixture = None # type: Optional[CaptureFixture]
def __repr__(self):
return "<CaptureManager _method={!r} _global_capturing={!r} _current_item={!r}>".format(
self._method, self._global_capturing, self._current_item
return "<CaptureManager _method={!r} _global_capturing={!r} _capture_fixture={!r}>".format(
self._method, self._global_capturing, self._capture_fixture
)
def _getcapture(self, method):
@ -92,16 +96,15 @@ class CaptureManager:
return MultiCapture(out=True, err=True, Capture=SysCapture)
elif method == "no":
return MultiCapture(out=False, err=False, in_=False)
elif method == "tee-sys":
return MultiCapture(out=True, err=True, in_=False, Capture=TeeSysCapture)
raise ValueError("unknown capturing method: %r" % method) # pragma: no cover
def is_capturing(self):
if self.is_globally_capturing():
return "global"
capture_fixture = getattr(self._current_item, "_capture_fixture", None)
if capture_fixture is not None:
return (
"fixture %s" % self._current_item._capture_fixture.request.fixturename
)
if self._capture_fixture:
return "fixture %s" % self._capture_fixture.request.fixturename
return False
# Global capturing control
@ -133,41 +136,59 @@ class CaptureManager:
def suspend(self, in_=False):
# Need to undo local capsys-et-al if it exists before disabling global capture.
self.suspend_fixture(self._current_item)
self.suspend_fixture()
self.suspend_global_capture(in_)
def resume(self):
self.resume_global_capture()
self.resume_fixture(self._current_item)
self.resume_fixture()
def read_global_capture(self):
return self._global_capturing.readouterr()
# Fixture Control (it's just forwarding, think about removing this later)
def activate_fixture(self, item):
@contextlib.contextmanager
def _capturing_for_request(
self, request: FixtureRequest
) -> Generator["CaptureFixture", None, None]:
if self._capture_fixture:
other_name = next(
k
for k, v in map_fixname_class.items()
if v is self._capture_fixture.captureclass
)
raise request.raiseerror(
"cannot use {} and {} at the same time".format(
request.fixturename, other_name
)
)
capture_class = map_fixname_class[request.fixturename]
self._capture_fixture = CaptureFixture(capture_class, request)
self.activate_fixture()
yield self._capture_fixture
self._capture_fixture.close()
self._capture_fixture = None
def activate_fixture(self):
"""If the current item is using ``capsys`` or ``capfd``, activate them so they take precedence over
the global capture.
"""
fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture._start()
if self._capture_fixture:
self._capture_fixture._start()
def deactivate_fixture(self, item):
def deactivate_fixture(self):
"""Deactivates the ``capsys`` or ``capfd`` fixture of this item, if any."""
fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture.close()
if self._capture_fixture:
self._capture_fixture.close()
def suspend_fixture(self, item):
fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture._suspend()
def suspend_fixture(self):
if self._capture_fixture:
self._capture_fixture._suspend()
def resume_fixture(self, item):
fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture._resume()
def resume_fixture(self):
if self._capture_fixture:
self._capture_fixture._resume()
# Helper context managers
@ -183,11 +204,11 @@ class CaptureManager:
@contextlib.contextmanager
def item_capture(self, when, item):
self.resume_global_capture()
self.activate_fixture(item)
self.activate_fixture()
try:
yield
finally:
self.deactivate_fixture(item)
self.deactivate_fixture()
self.suspend_global_capture(in_=False)
out, err = self.read_global_capture()
@ -211,12 +232,6 @@ class CaptureManager:
else:
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(self, item):
self._current_item = item
yield
self._current_item = None
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
with self.item_capture("setup", item):
@ -241,18 +256,6 @@ class CaptureManager:
self.stop_global_capturing()
capture_fixtures = {"capfd", "capfdbinary", "capsys", "capsysbinary"}
def _ensure_only_one_capture_fixture(request: FixtureRequest, name):
fixtures = sorted(set(request.fixturenames) & capture_fixtures - {name})
if fixtures:
arg = fixtures[0] if len(fixtures) == 1 else fixtures
raise request.raiseerror(
"cannot use {} and {} at the same time".format(arg, name)
)
@pytest.fixture
def capsys(request):
"""Enable text capturing of writes to ``sys.stdout`` and ``sys.stderr``.
@ -261,8 +264,8 @@ def capsys(request):
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
"""
_ensure_only_one_capture_fixture(request, "capsys")
with _install_capture_fixture_on_item(request, SysCapture) as fixture:
capman = request.config.pluginmanager.getplugin("capturemanager")
with capman._capturing_for_request(request) as fixture:
yield fixture
@ -274,8 +277,8 @@ def capsysbinary(request):
method calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``bytes`` objects.
"""
_ensure_only_one_capture_fixture(request, "capsysbinary")
with _install_capture_fixture_on_item(request, SysCaptureBinary) as fixture:
capman = request.config.pluginmanager.getplugin("capturemanager")
with capman._capturing_for_request(request) as fixture:
yield fixture
@ -287,12 +290,12 @@ def capfd(request):
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
"""
_ensure_only_one_capture_fixture(request, "capfd")
if not hasattr(os, "dup"):
pytest.skip(
"capfd fixture needs os.dup function which is not available in this system"
)
with _install_capture_fixture_on_item(request, FDCapture) as fixture:
capman = request.config.pluginmanager.getplugin("capturemanager")
with capman._capturing_for_request(request) as fixture:
yield fixture
@ -304,35 +307,15 @@ def capfdbinary(request):
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``byte`` objects.
"""
_ensure_only_one_capture_fixture(request, "capfdbinary")
if not hasattr(os, "dup"):
pytest.skip(
"capfdbinary fixture needs os.dup function which is not available in this system"
)
with _install_capture_fixture_on_item(request, FDCaptureBinary) as fixture:
capman = request.config.pluginmanager.getplugin("capturemanager")
with capman._capturing_for_request(request) as fixture:
yield fixture
@contextlib.contextmanager
def _install_capture_fixture_on_item(request, capture_class):
"""
Context manager which creates a ``CaptureFixture`` instance and "installs" it on
the item/node of the given request. Used by ``capsys`` and ``capfd``.
The CaptureFixture is added as attribute of the item because it needs to accessed
by ``CaptureManager`` during its ``pytest_runtest_*`` hooks.
"""
request.node._capture_fixture = fixture = CaptureFixture(capture_class, request)
capmanager = request.config.pluginmanager.getplugin("capturemanager")
# Need to active this fixture right away in case it is being used by another fixture (setup phase).
# If this fixture is being used only by a test function (call phase), then we wouldn't need this
# activation, but it doesn't hurt.
capmanager.activate_fixture(request.node)
yield fixture
fixture.close()
del request.node._capture_fixture
class CaptureFixture:
"""
Object returned by :py:func:`capsys`, :py:func:`capsysbinary`, :py:func:`capfd` and :py:func:`capfdbinary`
@ -680,6 +663,19 @@ class SysCapture:
self._old.flush()
class TeeSysCapture(SysCapture):
def __init__(self, fd, tmpfile=None):
name = patchsysdict[fd]
self._old = getattr(sys, name)
self.name = name
if tmpfile is None:
if name == "stdin":
tmpfile = DontReadFromInput()
else:
tmpfile = CaptureAndPassthroughIO(self._old)
self.tmpfile = tmpfile
class SysCaptureBinary(SysCapture):
# Ignore type because it doesn't match the type in the superclass (str).
EMPTY_BUFFER = b"" # type: ignore
@ -691,6 +687,14 @@ class SysCaptureBinary(SysCapture):
return res
map_fixname_class = {
"capfd": FDCapture,
"capfdbinary": FDCaptureBinary,
"capsys": SysCapture,
"capsysbinary": SysCaptureBinary,
}
class DontReadFromInput:
encoding = None

View File

@ -13,6 +13,7 @@ from inspect import signature
from typing import Any
from typing import Callable
from typing import Generic
from typing import IO
from typing import Optional
from typing import overload
from typing import Tuple
@ -368,6 +369,16 @@ class CaptureIO(io.TextIOWrapper):
return self.buffer.getvalue().decode("UTF-8")
class CaptureAndPassthroughIO(CaptureIO):
def __init__(self, other: IO) -> None:
self._other = other
super().__init__()
def write(self, s) -> int:
super().write(s)
return self._other.write(s)
if sys.version_info < (3, 5, 2):
def overload(f): # noqa: F811

View File

@ -1,6 +1,7 @@
""" command line options, ini-file and conftest.py processing. """
import argparse
import copy
import enum
import inspect
import os
import shlex
@ -46,6 +47,8 @@ from _pytest.warning_types import PytestConfigWarning
if TYPE_CHECKING:
from typing import Type
from .argparsing import Argument
_PluggyPlugin = object
"""A type to represent plugin objects.
@ -58,6 +61,29 @@ hookimpl = HookimplMarker("pytest")
hookspec = HookspecMarker("pytest")
class ExitCode(enum.IntEnum):
"""
.. versionadded:: 5.0
Encodes the valid exit codes by pytest.
Currently users and plugins may supply other exit codes as well.
"""
#: tests passed
OK = 0
#: tests failed
TESTS_FAILED = 1
#: pytest was interrupted
INTERRUPTED = 2
#: an internal error got in the way
INTERNAL_ERROR = 3
#: pytest was misused
USAGE_ERROR = 4
#: pytest couldn't find tests
NO_TESTS_COLLECTED = 5
class ConftestImportFailure(Exception):
def __init__(self, path, excinfo):
Exception.__init__(self, path, excinfo)
@ -65,7 +91,7 @@ class ConftestImportFailure(Exception):
self.excinfo = excinfo # type: Tuple[Type[Exception], Exception, TracebackType]
def main(args=None, plugins=None) -> "Union[int, _pytest.main.ExitCode]":
def main(args=None, plugins=None) -> Union[int, ExitCode]:
""" return exit code, after performing an in-process test run.
:arg args: list of command line arguments.
@ -73,8 +99,6 @@ def main(args=None, plugins=None) -> "Union[int, _pytest.main.ExitCode]":
:arg plugins: list of plugin objects to be auto-registered during
initialization.
"""
from _pytest.main import ExitCode
try:
try:
config = _prepareconfig(args, plugins)
@ -190,7 +214,7 @@ def get_config(args=None, plugins=None):
if args is not None:
# Handle any "-p no:plugin" args.
pluginmanager.consider_preparse(args)
pluginmanager.consider_preparse(args, exclude_only=True)
for spec in default_plugins:
pluginmanager.import_plugin(spec)
@ -498,7 +522,7 @@ class PytestPluginManager(PluginManager):
#
#
def consider_preparse(self, args):
def consider_preparse(self, args, *, exclude_only=False):
i = 0
n = len(args)
while i < n:
@ -515,6 +539,8 @@ class PytestPluginManager(PluginManager):
parg = opt[2:]
else:
continue
if exclude_only and not parg.startswith("no:"):
continue
self.consider_pluginarg(parg)
def consider_pluginarg(self, arg):
@ -597,7 +623,7 @@ class PytestPluginManager(PluginManager):
_issue_warning_captured(
PytestConfigWarning("skipped plugin {!r}: {}".format(modname, e.msg)),
self.hook,
stacklevel=1,
stacklevel=2,
)
else:
mod = sys.modules[importspec]
@ -740,7 +766,7 @@ class Config:
plugins = attr.ib()
dir = attr.ib(type=Path)
def __init__(self, pluginmanager, *, invocation_params=None):
def __init__(self, pluginmanager, *, invocation_params=None) -> None:
from .argparsing import Parser, FILE_OR_DIR
if invocation_params is None:
@ -853,11 +879,11 @@ class Config:
config.pluginmanager.consider_pluginarg(x)
return config
def _processopt(self, opt):
def _processopt(self, opt: "Argument") -> None:
for name in opt._short_opts + opt._long_opts:
self._opt2dest[name] = opt.dest
if hasattr(opt, "default") and opt.dest:
if hasattr(opt, "default"):
if not hasattr(self.option, opt.dest):
setattr(self.option, opt.dest, opt.default)
@ -865,7 +891,7 @@ class Config:
def pytest_load_initial_conftests(self, early_config):
self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)
def _initini(self, args) -> None:
def _initini(self, args: Sequence[str]) -> None:
ns, unknown_args = self._parser.parse_known_and_unknown_args(
args, namespace=copy.copy(self.option)
)
@ -882,7 +908,7 @@ class Config:
self._parser.addini("minversion", "minimally required pytest version")
self._override_ini = ns.override_ini or ()
def _consider_importhook(self, args):
def _consider_importhook(self, args: Sequence[str]) -> None:
"""Install the PEP 302 import hook if using assertion rewriting.
Needs to parse the --assert=<mode> option from the commandline
@ -924,19 +950,19 @@ class Config:
for name in _iter_rewritable_modules(package_files):
hook.mark_rewrite(name)
def _validate_args(self, args, via):
def _validate_args(self, args: List[str], via: str) -> List[str]:
"""Validate known args."""
self._parser._config_source_hint = via
self._parser._config_source_hint = via # type: ignore
try:
self._parser.parse_known_and_unknown_args(
args, namespace=copy.copy(self.option)
)
finally:
del self._parser._config_source_hint
del self._parser._config_source_hint # type: ignore
return args
def _preparse(self, args, addopts=True):
def _preparse(self, args: List[str], addopts: bool = True) -> None:
if addopts:
env_addopts = os.environ.get("PYTEST_ADDOPTS", "")
if len(env_addopts):
@ -952,7 +978,7 @@ class Config:
self._checkversion()
self._consider_importhook(args)
self.pluginmanager.consider_preparse(args)
self.pluginmanager.consider_preparse(args, exclude_only=False)
if not os.environ.get("PYTEST_DISABLE_PLUGIN_AUTOLOAD"):
# Don't autoload from setuptools entry point. Only explicitly specified
# plugins are going to be loaded.
@ -1000,7 +1026,7 @@ class Config:
)
)
def parse(self, args, addopts=True):
def parse(self, args: List[str], addopts: bool = True) -> None:
# parse given cmdline arguments into this config object.
assert not hasattr(
self, "args"
@ -1011,7 +1037,7 @@ class Config:
self._preparse(args, addopts=addopts)
# XXX deprecated hook:
self.hook.pytest_cmdline_preparse(config=self, args=args)
self._parser.after_preparse = True
self._parser.after_preparse = True # type: ignore
try:
args = self._parser.parse_setoption(
args, self.option, namespace=self.option

View File

@ -3,15 +3,25 @@ import sys
import warnings
from gettext import gettext
from typing import Any
from typing import Callable
from typing import cast
from typing import Dict
from typing import List
from typing import Mapping
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Union
import py
from _pytest.compat import TYPE_CHECKING
from _pytest.config.exceptions import UsageError
if TYPE_CHECKING:
from typing import NoReturn
from typing_extensions import Literal # noqa: F401
FILE_OR_DIR = "file_or_dir"
@ -22,9 +32,13 @@ class Parser:
there's an error processing the command line arguments.
"""
prog = None
prog = None # type: Optional[str]
def __init__(self, usage=None, processopt=None):
def __init__(
self,
usage: Optional[str] = None,
processopt: Optional[Callable[["Argument"], None]] = None,
) -> None:
self._anonymous = OptionGroup("custom options", parser=self)
self._groups = [] # type: List[OptionGroup]
self._processopt = processopt
@ -33,12 +47,14 @@ class Parser:
self._ininames = [] # type: List[str]
self.extra_info = {} # type: Dict[str, Any]
def processoption(self, option):
def processoption(self, option: "Argument") -> None:
if self._processopt:
if option.dest:
self._processopt(option)
def getgroup(self, name, description="", after=None):
def getgroup(
self, name: str, description: str = "", after: Optional[str] = None
) -> "OptionGroup":
""" get (or create) a named option Group.
:name: name of the option group.
@ -61,13 +77,13 @@ class Parser:
self._groups.insert(i + 1, group)
return group
def addoption(self, *opts, **attrs):
def addoption(self, *opts: str, **attrs: Any) -> None:
""" register a command line option.
:opts: option names, can be short or long options.
:attrs: same attributes which the ``add_option()`` function of the
:attrs: same attributes which the ``add_argument()`` function of the
`argparse library
<http://docs.python.org/2/library/argparse.html>`_
<https://docs.python.org/library/argparse.html>`_
accepts.
After command line parsing options are available on the pytest config
@ -77,7 +93,11 @@ class Parser:
"""
self._anonymous.addoption(*opts, **attrs)
def parse(self, args, namespace=None):
def parse(
self,
args: Sequence[Union[str, py.path.local]],
namespace: Optional[argparse.Namespace] = None,
) -> argparse.Namespace:
from _pytest._argcomplete import try_argcomplete
self.optparser = self._getparser()
@ -98,27 +118,37 @@ class Parser:
n = option.names()
a = option.attrs()
arggroup.add_argument(*n, **a)
file_or_dir_arg = optparser.add_argument(FILE_OR_DIR, nargs="*")
# bash like autocompletion for dirs (appending '/')
# Type ignored because typeshed doesn't know about argcomplete.
optparser.add_argument( # type: ignore
FILE_OR_DIR, nargs="*"
).completer = filescompleter
file_or_dir_arg.completer = filescompleter # type: ignore
return optparser
def parse_setoption(self, args, option, namespace=None):
def parse_setoption(
self,
args: Sequence[Union[str, py.path.local]],
option: argparse.Namespace,
namespace: Optional[argparse.Namespace] = None,
) -> List[str]:
parsedoption = self.parse(args, namespace=namespace)
for name, value in parsedoption.__dict__.items():
setattr(option, name, value)
return getattr(parsedoption, FILE_OR_DIR)
return cast(List[str], getattr(parsedoption, FILE_OR_DIR))
def parse_known_args(self, args, namespace=None) -> argparse.Namespace:
def parse_known_args(
self,
args: Sequence[Union[str, py.path.local]],
namespace: Optional[argparse.Namespace] = None,
) -> argparse.Namespace:
"""parses and returns a namespace object with known arguments at this
point.
"""
return self.parse_known_and_unknown_args(args, namespace=namespace)[0]
def parse_known_and_unknown_args(
self, args, namespace=None
self,
args: Sequence[Union[str, py.path.local]],
namespace: Optional[argparse.Namespace] = None,
) -> Tuple[argparse.Namespace, List[str]]:
"""parses and returns a namespace object with known arguments, and
the remaining arguments unknown at this point.
@ -127,7 +157,13 @@ class Parser:
strargs = [str(x) if isinstance(x, py.path.local) else x for x in args]
return optparser.parse_known_args(strargs, namespace=namespace)
def addini(self, name, help, type=None, default=None):
def addini(
self,
name: str,
help: str,
type: Optional["Literal['pathlist', 'args', 'linelist', 'bool']"] = None,
default=None,
) -> None:
""" register an ini-file option.
:name: name of the ini-variable
@ -149,11 +185,11 @@ class ArgumentError(Exception):
inconsistent arguments.
"""
def __init__(self, msg, option):
def __init__(self, msg: str, option: Union["Argument", str]) -> None:
self.msg = msg
self.option_id = str(option)
def __str__(self):
def __str__(self) -> str:
if self.option_id:
return "option {}: {}".format(self.option_id, self.msg)
else:
@ -170,12 +206,11 @@ class Argument:
_typ_map = {"int": int, "string": str, "float": float, "complex": complex}
def __init__(self, *names, **attrs):
def __init__(self, *names: str, **attrs: Any) -> None:
"""store parms in private vars for use in add_argument"""
self._attrs = attrs
self._short_opts = [] # type: List[str]
self._long_opts = [] # type: List[str]
self.dest = attrs.get("dest")
if "%default" in (attrs.get("help") or ""):
warnings.warn(
'pytest now uses argparse. "%default" should be'
@ -221,23 +256,25 @@ class Argument:
except KeyError:
pass
self._set_opt_strings(names)
if not self.dest:
if self._long_opts:
self.dest = self._long_opts[0][2:].replace("-", "_")
else:
try:
self.dest = self._short_opts[0][1:]
except IndexError:
raise ArgumentError("need a long or short option", self)
dest = attrs.get("dest") # type: Optional[str]
if dest:
self.dest = dest
elif self._long_opts:
self.dest = self._long_opts[0][2:].replace("-", "_")
else:
try:
self.dest = self._short_opts[0][1:]
except IndexError:
self.dest = "???" # Needed for the error repr.
raise ArgumentError("need a long or short option", self)
def names(self):
def names(self) -> List[str]:
return self._short_opts + self._long_opts
def attrs(self):
def attrs(self) -> Mapping[str, Any]:
# update any attributes set by processopt
attrs = "default dest help".split()
if self.dest:
attrs.append(self.dest)
attrs.append(self.dest)
for attr in attrs:
try:
self._attrs[attr] = getattr(self, attr)
@ -250,7 +287,7 @@ class Argument:
self._attrs["help"] = a
return self._attrs
def _set_opt_strings(self, opts):
def _set_opt_strings(self, opts: Sequence[str]) -> None:
"""directly from optparse
might not be necessary as this is passed to argparse later on"""
@ -293,13 +330,15 @@ class Argument:
class OptionGroup:
def __init__(self, name, description="", parser=None):
def __init__(
self, name: str, description: str = "", parser: Optional[Parser] = None
) -> None:
self.name = name
self.description = description
self.options = [] # type: List[Argument]
self.parser = parser
def addoption(self, *optnames, **attrs):
def addoption(self, *optnames: str, **attrs: Any) -> None:
""" add an option to this group.
if a shortened version of a long option is specified it will
@ -315,11 +354,11 @@ class OptionGroup:
option = Argument(*optnames, **attrs)
self._addoption_instance(option, shortupper=False)
def _addoption(self, *optnames, **attrs):
def _addoption(self, *optnames: str, **attrs: Any) -> None:
option = Argument(*optnames, **attrs)
self._addoption_instance(option, shortupper=True)
def _addoption_instance(self, option, shortupper=False):
def _addoption_instance(self, option: "Argument", shortupper: bool = False) -> None:
if not shortupper:
for opt in option._short_opts:
if opt[0] == "-" and opt[1].islower():
@ -330,9 +369,12 @@ class OptionGroup:
class MyOptionParser(argparse.ArgumentParser):
def __init__(self, parser, extra_info=None, prog=None):
if not extra_info:
extra_info = {}
def __init__(
self,
parser: Parser,
extra_info: Optional[Dict[str, Any]] = None,
prog: Optional[str] = None,
) -> None:
self._parser = parser
argparse.ArgumentParser.__init__(
self,
@ -344,34 +386,42 @@ class MyOptionParser(argparse.ArgumentParser):
)
# extra_info is a dict of (param -> value) to display if there's
# a usage error to provide more contextual information to the user
self.extra_info = extra_info
self.extra_info = extra_info if extra_info else {}
def error(self, message):
def error(self, message: str) -> "NoReturn":
"""Transform argparse error message into UsageError."""
msg = "{}: error: {}".format(self.prog, message)
if hasattr(self._parser, "_config_source_hint"):
msg = "{} ({})".format(msg, self._parser._config_source_hint)
# Type ignored because the attribute is set dynamically.
msg = "{} ({})".format(msg, self._parser._config_source_hint) # type: ignore
raise UsageError(self.format_usage() + msg)
def parse_args(self, args=None, namespace=None):
# Type ignored because typeshed has a very complex type in the superclass.
def parse_args( # type: ignore
self,
args: Optional[Sequence[str]] = None,
namespace: Optional[argparse.Namespace] = None,
) -> argparse.Namespace:
"""allow splitting of positional arguments"""
args, argv = self.parse_known_args(args, namespace)
if argv:
for arg in argv:
parsed, unrecognized = self.parse_known_args(args, namespace)
if unrecognized:
for arg in unrecognized:
if arg and arg[0] == "-":
lines = ["unrecognized arguments: %s" % (" ".join(argv))]
lines = ["unrecognized arguments: %s" % (" ".join(unrecognized))]
for k, v in sorted(self.extra_info.items()):
lines.append(" {}: {}".format(k, v))
self.error("\n".join(lines))
getattr(args, FILE_OR_DIR).extend(argv)
return args
getattr(parsed, FILE_OR_DIR).extend(unrecognized)
return parsed
if sys.version_info[:2] < (3, 9): # pragma: no cover
# Backport of https://github.com/python/cpython/pull/14316 so we can
# disable long --argument abbreviations without breaking short flags.
def _parse_optional(self, arg_string):
def _parse_optional(
self, arg_string: str
) -> Optional[Tuple[Optional[argparse.Action], str, Optional[str]]]:
if not arg_string:
return None
if not arg_string[0] in self.prefix_chars:
@ -409,49 +459,45 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
"""shorten help for long options that differ only in extra hyphens
- collapse **long** options that are the same except for extra hyphens
- special action attribute map_long_option allows suppressing additional
long options
- shortcut if there are only two options and one of them is a short one
- cache result on action object as this is called at least 2 times
"""
def __init__(self, *args, **kwargs):
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Use more accurate terminal width via pylib."""
if "width" not in kwargs:
kwargs["width"] = py.io.get_terminal_width()
super().__init__(*args, **kwargs)
def _format_action_invocation(self, action):
def _format_action_invocation(self, action: argparse.Action) -> str:
orgstr = argparse.HelpFormatter._format_action_invocation(self, action)
if orgstr and orgstr[0] != "-": # only optional arguments
return orgstr
res = getattr(action, "_formatted_action_invocation", None)
res = getattr(
action, "_formatted_action_invocation", None
) # type: Optional[str]
if res:
return res
options = orgstr.split(", ")
if len(options) == 2 and (len(options[0]) == 2 or len(options[1]) == 2):
# a shortcut for '-h, --help' or '--abc', '-a'
action._formatted_action_invocation = orgstr
action._formatted_action_invocation = orgstr # type: ignore
return orgstr
return_list = []
option_map = getattr(action, "map_long_option", {})
if option_map is None:
option_map = {}
short_long = {} # type: Dict[str, str]
for option in options:
if len(option) == 2 or option[2] == " ":
continue
if not option.startswith("--"):
raise ArgumentError(
'long optional argument without "--": [%s]' % (option), self
'long optional argument without "--": [%s]' % (option), option
)
xxoption = option[2:]
if xxoption.split()[0] not in option_map:
shortened = xxoption.replace("-", "")
if shortened not in short_long or len(short_long[shortened]) < len(
xxoption
):
short_long[shortened] = xxoption
shortened = xxoption.replace("-", "")
if shortened not in short_long or len(short_long[shortened]) < len(
xxoption
):
short_long[shortened] = xxoption
# now short_long has been filled out to the longest with dashes
# **and** we keep the right option ordering from add_argument
for option in options:
@ -459,5 +505,6 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
return_list.append(option)
if option[2:] == short_long.get(option.replace("-", "")):
return_list.append(option.replace(" ", "=", 1))
action._formatted_action_invocation = ", ".join(return_list)
return action._formatted_action_invocation
formatted_action_invocation = ", ".join(return_list)
action._formatted_action_invocation = formatted_action_invocation # type: ignore
return formatted_action_invocation

View File

@ -9,6 +9,7 @@ All constants defined in this module should be either PytestWarning instances or
in case of warnings which need to format their messages.
"""
from _pytest.warning_types import PytestDeprecationWarning
from _pytest.warning_types import UnformattedWarning
# set of plugins which have been integrated into the core; we use this list to ignore
# them during registration to avoid conflicts
@ -18,13 +19,11 @@ DEPRECATED_EXTERNAL_PLUGINS = {
"pytest_faulthandler",
}
FUNCARGNAMES = PytestDeprecationWarning(
"The `funcargnames` attribute was an alias for `fixturenames`, "
"since pytest 2.3 - use the newer attribute instead."
)
RESULT_LOG = PytestDeprecationWarning(
"--result-log is deprecated, please try the new pytest-reportlog plugin.\n"
"See https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log for more information."
@ -35,8 +34,18 @@ FIXTURE_POSITIONAL_ARGUMENTS = PytestDeprecationWarning(
"as a keyword argument instead."
)
NODE_USE_FROM_PARENT = UnformattedWarning(
PytestDeprecationWarning,
"direct construction of {name} has been deprecated, please use {name}.from_parent",
)
JUNIT_XML_DEFAULT_FAMILY = PytestDeprecationWarning(
"The 'junit_family' default value will change to 'xunit2' in pytest 6.0.\n"
"Add 'junit_family=xunit1' to your pytest.ini file to keep the current format "
"in future versions of pytest and silence this warning."
)
NO_PRINT_LOGS = PytestDeprecationWarning(
"--no-print-logs is deprecated and scheduled for removal in pytest 6.0.\n"
"Please use --show-capture instead."
)

View File

@ -112,9 +112,9 @@ def pytest_collect_file(path, parent):
config = parent.config
if path.ext == ".py":
if config.option.doctestmodules and not _is_setup_py(config, path, parent):
return DoctestModule(path, parent)
return DoctestModule.from_parent(parent, fspath=path)
elif _is_doctest(config, path, parent):
return DoctestTextfile(path, parent)
return DoctestTextfile.from_parent(parent, fspath=path)
def _is_setup_py(config, path, parent):
@ -219,6 +219,16 @@ class DoctestItem(pytest.Item):
self.obj = None
self.fixture_request = None
@classmethod
def from_parent( # type: ignore
cls, parent: "Union[DoctestTextfile, DoctestModule]", *, name, runner, dtest
):
# incompatible signature due to imposed limits on subclass
"""
the public named constructor
"""
return super().from_parent(name=name, parent=parent, runner=runner, dtest=dtest)
def setup(self):
if self.dtest is not None:
self.fixture_request = _setup_fixtures(self)
@ -374,7 +384,9 @@ class DoctestTextfile(pytest.Module):
parser = doctest.DocTestParser()
test = parser.get_doctest(text, globs, name, filename, 0)
if test.examples:
yield DoctestItem(test.name, self, runner, test)
yield DoctestItem.from_parent(
self, name=test.name, runner=runner, dtest=test
)
def _check_all_skipped(test):
@ -483,7 +495,9 @@ class DoctestModule(pytest.Module):
for test in finder.find(module, module.__name__):
if test.examples: # skip empty doctests
yield DoctestItem(test.name, self, runner, test)
yield DoctestItem.from_parent(
self, name=test.name, runner=runner, dtest=test
)
def _setup_fixtures(doctest_item):

View File

@ -1,6 +1,5 @@
import functools
import inspect
import itertools
import sys
import warnings
from collections import defaultdict
@ -39,6 +38,7 @@ if TYPE_CHECKING:
from typing import Type
from _pytest import nodes
from _pytest.main import Session
@attr.s(frozen=True)
@ -47,7 +47,7 @@ class PseudoFixtureDef:
scope = attr.ib()
def pytest_sessionstart(session):
def pytest_sessionstart(session: "Session"):
import _pytest.python
import _pytest.nodes
@ -514,13 +514,11 @@ class FixtureRequest:
values.append(fixturedef)
current = current._parent_request
def _compute_fixture_value(self, fixturedef):
def _compute_fixture_value(self, fixturedef: "FixtureDef") -> None:
"""
Creates a SubRequest based on "self" and calls the execute method of the given fixturedef object. This will
force the FixtureDef object to throw away any previous results and compute a new fixture value, which
will be stored into the FixtureDef object itself.
:param FixtureDef fixturedef:
"""
# prepare a subrequest object before calling fixture function
# (latter managed by fixturedef)
@ -1251,7 +1249,6 @@ class FixtureManager:
self.config = session.config
self._arg2fixturedefs = {}
self._holderobjseen = set()
self._arg2finish = {}
self._nodeid_and_autousenames = [("", self.config.getini("usefixtures"))]
session.config.pluginmanager.register(self, "funcmanage")
@ -1280,10 +1277,8 @@ class FixtureManager:
else:
argnames = ()
usefixtures = itertools.chain.from_iterable(
mark.args for mark in node.iter_markers(name="usefixtures")
)
initialnames = tuple(usefixtures) + argnames
usefixtures = get_use_fixtures_for_node(node)
initialnames = usefixtures + argnames
fm = node.session._fixturemanager
initialnames, names_closure, arg2fixturedefs = fm.getfixtureclosure(
initialnames, node, ignore_args=self._get_direct_parametrize_args(node)
@ -1480,3 +1475,12 @@ class FixtureManager:
for fixturedef in fixturedefs:
if nodes.ischildnode(fixturedef.baseid, nodeid):
yield fixturedef
def get_use_fixtures_for_node(node) -> Tuple[str, ...]:
"""Returns the names of all the usefixtures() marks on the given node"""
return tuple(
str(name)
for mark in node.iter_markers(name="usefixtures")
for name in mark.args
)

View File

@ -66,7 +66,7 @@ def pytest_addoption(parser):
action="store_true",
default=False,
help="trace considerations of conftest.py files.",
),
)
group.addoption(
"--debug",
action="store_true",

View File

@ -315,10 +315,6 @@ def pytest_runtestloop(session):
"""
def pytest_itemstart(item, node):
"""(**Deprecated**) use pytest_runtest_logstart. """
@hookspec(firstresult=True)
def pytest_runtest_protocol(item, nextitem):
""" implements the runtest_setup/call/teardown protocol for
@ -570,7 +566,7 @@ def pytest_terminal_summary(terminalreporter, exitstatus, config):
@hookspec(historic=True)
def pytest_warning_captured(warning_message, when, item):
def pytest_warning_captured(warning_message, when, item, location):
"""
Process a warning captured by the internal pytest warnings plugin.
@ -590,6 +586,10 @@ def pytest_warning_captured(warning_message, when, item):
in a future release.
The item being executed if ``when`` is ``"runtest"``, otherwise ``None``.
:param tuple location:
Holds information about the execution context of the captured warning (filename, linenumber, function).
``function`` evaluates to <module> when the execution context is at the module level.
"""

View File

@ -167,51 +167,28 @@ class _NodeReporter:
content_out = report.capstdout
content_log = report.caplog
content_err = report.capstderr
if self.xml.logging == "no":
return
content_all = ""
if self.xml.logging in ["log", "all"]:
content_all = self._prepare_content(content_log, " Captured Log ")
if self.xml.logging in ["system-out", "out-err", "all"]:
content_all += self._prepare_content(content_out, " Captured Out ")
self._write_content(report, content_all, "system-out")
content_all = ""
if self.xml.logging in ["system-err", "out-err", "all"]:
content_all += self._prepare_content(content_err, " Captured Err ")
self._write_content(report, content_all, "system-err")
content_all = ""
if content_all:
self._write_content(report, content_all, "system-out")
if content_log or content_out:
if content_log and self.xml.logging == "system-out":
if content_out:
# syncing stdout and the log-output is not done yet. It's
# probably not worth the effort. Therefore, first the captured
# stdout is shown and then the captured logs.
content = "\n".join(
[
" Captured Stdout ".center(80, "-"),
content_out,
"",
" Captured Log ".center(80, "-"),
content_log,
]
)
else:
content = content_log
else:
content = content_out
def _prepare_content(self, content, header):
return "\n".join([header.center(80, "-"), content, ""])
if content:
tag = getattr(Junit, "system-out")
self.append(tag(bin_xml_escape(content)))
if content_log or content_err:
if content_log and self.xml.logging == "system-err":
if content_err:
content = "\n".join(
[
" Captured Stderr ".center(80, "-"),
content_err,
"",
" Captured Log ".center(80, "-"),
content_log,
]
)
else:
content = content_log
else:
content = content_err
if content:
tag = getattr(Junit, "system-err")
self.append(tag(bin_xml_escape(content)))
def _write_content(self, report, content, jheader):
tag = getattr(Junit, jheader)
self.append(tag(bin_xml_escape(content)))
def append_pass(self, report):
self.add_stats("passed")
@ -408,7 +385,7 @@ def pytest_addoption(parser):
parser.addini(
"junit_logging",
"Write captured log messages to JUnit report: "
"one of no|system-out|system-err",
"one of no|log|system-out|system-err|out-err|all",
default="no",
)
parser.addini(

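As a rough illustration of the widened ``junit_logging`` choices above, the sketch below runs a test that writes to stdout, stderr and the log and asks for everything in the XML; the file names and test body are assumptions made up for the example.
# hypothetical demo: junit_logging=all routes captured log + stdout into
# the <system-out> tag and captured stderr into <system-err>
import subprocess
import sys

TEST = '''
import logging, sys

def test_emits_output():
    print("to stdout")
    print("to stderr", file=sys.stderr)
    logging.getLogger().warning("to the log")
'''

with open("test_junit_logging_demo.py", "w") as f:
    f.write(TEST)

subprocess.run(
    [sys.executable, "-m", "pytest", "test_junit_logging_demo.py",
     "--junitxml=out.xml", "-o", "junit_logging=all"],
    check=False,
)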
View File

@ -487,6 +487,12 @@ class LoggingPlugin:
self._config = config
self.print_logs = get_option_ini(config, "log_print")
if not self.print_logs:
from _pytest.warnings import _issue_warning_captured
from _pytest.deprecated import NO_PRINT_LOGS
_issue_warning_captured(NO_PRINT_LOGS, self._config.hook, stacklevel=2)
self.formatter = self._create_formatter(
get_option_ini(config, "log_format"),
get_option_ini(config, "log_date_format"),

View File

@ -1,5 +1,4 @@
""" core implementation of testing process: init, session, runtest loop. """
import enum
import fnmatch
import functools
import importlib
@ -10,6 +9,8 @@ from typing import Dict
from typing import FrozenSet
from typing import List
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Union
import attr
@ -20,42 +21,22 @@ from _pytest import nodes
from _pytest.compat import TYPE_CHECKING
from _pytest.config import Config
from _pytest.config import directory_arg
from _pytest.config import ExitCode
from _pytest.config import hookimpl
from _pytest.config import UsageError
from _pytest.fixtures import FixtureManager
from _pytest.nodes import Node
from _pytest.outcomes import Exit
from _pytest.reports import CollectReport
from _pytest.runner import collect_one_node
from _pytest.runner import SetupState
if TYPE_CHECKING:
from typing import Type
from _pytest.python import Package
class ExitCode(enum.IntEnum):
"""
.. versionadded:: 5.0
Encodes the valid exit codes by pytest.
Currently users and plugins may supply other exit codes as well.
"""
#: tests passed
OK = 0
#: tests failed
TESTS_FAILED = 1
#: pytest was interrupted
INTERRUPTED = 2
#: an internal error got in the way
INTERNAL_ERROR = 3
#: pytest was misused
USAGE_ERROR = 4
#: pytest couldn't find tests
NO_TESTS_COLLECTED = 5
def pytest_addoption(parser):
parser.addini(
"norecursedirs",
@ -199,7 +180,7 @@ def wrap_session(
config: Config, doit: Callable[[Config, "Session"], Optional[Union[int, ExitCode]]]
) -> Union[int, ExitCode]:
"""Skeleton command line program"""
session = Session(config)
session = Session.from_config(config)
session.exitstatus = ExitCode.OK
initstate = 0
try:
@ -406,9 +387,18 @@ class Session(nodes.FSCollector):
self._initialpaths = frozenset() # type: FrozenSet[py.path.local]
# Keep track of any collected nodes in here, so we don't duplicate fixtures
self._node_cache = {} # type: Dict[str, List[Node]]
self._collection_node_cache1 = (
{}
) # type: Dict[py.path.local, Sequence[nodes.Collector]]
self._collection_node_cache2 = (
{}
) # type: Dict[Tuple[Type[nodes.Collector], py.path.local], nodes.Collector]
self._collection_node_cache3 = (
{}
) # type: Dict[Tuple[Type[nodes.Collector], str], CollectReport]
# Dirnames of pkgs with dunder-init files.
self._pkg_roots = {} # type: Dict[py.path.local, Package]
self._collection_pkg_roots = {} # type: Dict[py.path.local, Package]
self._bestrelpathcache = _bestrelpath_cache(
config.rootdir
@ -416,6 +406,10 @@ class Session(nodes.FSCollector):
self.config.pluginmanager.register(self, name="session")
@classmethod
def from_config(cls, config):
return cls._create(config)
def __repr__(self):
return "<%s %s exitstatus=%r testsfailed=%d testscollected=%d>" % (
self.__class__.__name__,
@ -471,13 +465,13 @@ class Session(nodes.FSCollector):
self.trace("perform_collect", self, args)
self.trace.root.indent += 1
self._notfound = []
initialpaths = []
self._initialparts = []
initialpaths = [] # type: List[py.path.local]
self._initial_parts = [] # type: List[Tuple[py.path.local, List[str]]]
self.items = items = []
for arg in args:
parts = self._parsearg(arg)
self._initialparts.append(parts)
initialpaths.append(parts[0])
fspath, parts = self._parsearg(arg)
self._initial_parts.append((fspath, parts))
initialpaths.append(fspath)
self._initialpaths = frozenset(initialpaths)
rep = collect_one_node(self)
self.ihook.pytest_collectreport(report=rep)
@ -497,25 +491,26 @@ class Session(nodes.FSCollector):
return items
def collect(self):
for initialpart in self._initialparts:
self.trace("processing argument", initialpart)
for fspath, parts in self._initial_parts:
self.trace("processing argument", (fspath, parts))
self.trace.root.indent += 1
try:
yield from self._collect(initialpart)
yield from self._collect(fspath, parts)
except NoMatch:
report_arg = "::".join(map(str, initialpart))
report_arg = "::".join((str(fspath), *parts))
# we are inside a make_report hook so
# we cannot directly pass through the exception
self._notfound.append((report_arg, sys.exc_info()[1]))
self.trace.root.indent -= 1
self._collection_node_cache1.clear()
self._collection_node_cache2.clear()
self._collection_node_cache3.clear()
self._collection_pkg_roots.clear()
def _collect(self, arg):
def _collect(self, argpath, names):
from _pytest.python import Package
names = arg[:]
argpath = names.pop(0)
# Start with a Session root, and delve to argpath item (dir or file)
# and stack all Packages found on the way.
# No point in finding packages when collecting doctests
@ -528,18 +523,18 @@ class Session(nodes.FSCollector):
if parent.isdir():
pkginit = parent.join("__init__.py")
if pkginit.isfile():
if pkginit not in self._node_cache:
if pkginit not in self._collection_node_cache1:
col = self._collectfile(pkginit, handle_dupes=False)
if col:
if isinstance(col[0], Package):
self._pkg_roots[parent] = col[0]
self._collection_pkg_roots[parent] = col[0]
# always store a list in the cache, matchnodes expects it
self._node_cache[col[0].fspath] = [col[0]]
self._collection_node_cache1[col[0].fspath] = [col[0]]
# If it's a directory argument, recurse and look for any Subpackages.
# Let the Package collector deal with subnodes, don't collect here.
if argpath.check(dir=1):
assert not names, "invalid arg {!r}".format(arg)
assert not names, "invalid arg {!r}".format((argpath, names))
seen_dirs = set()
for path in argpath.visit(
@ -554,28 +549,28 @@ class Session(nodes.FSCollector):
for x in self._collectfile(pkginit):
yield x
if isinstance(x, Package):
self._pkg_roots[dirpath] = x
if dirpath in self._pkg_roots:
self._collection_pkg_roots[dirpath] = x
if dirpath in self._collection_pkg_roots:
# Do not collect packages here.
continue
for x in self._collectfile(path):
key = (type(x), x.fspath)
if key in self._node_cache:
yield self._node_cache[key]
if key in self._collection_node_cache2:
yield self._collection_node_cache2[key]
else:
self._node_cache[key] = x
self._collection_node_cache2[key] = x
yield x
else:
assert argpath.check(file=1)
if argpath in self._node_cache:
col = self._node_cache[argpath]
if argpath in self._collection_node_cache1:
col = self._collection_node_cache1[argpath]
else:
collect_root = self._pkg_roots.get(argpath.dirname, self)
collect_root = self._collection_pkg_roots.get(argpath.dirname, self)
col = collect_root._collectfile(argpath, handle_dupes=False)
if col:
self._node_cache[argpath] = col
self._collection_node_cache1[argpath] = col
m = self.matchnodes(col, names)
# If __init__.py was the only file requested, then the matched node will be
# the corresponding Package, and the first yielded item will be the __init__
@ -636,19 +631,19 @@ class Session(nodes.FSCollector):
def _parsearg(self, arg):
""" return (fspath, names) tuple after checking the file exists. """
parts = str(arg).split("::")
strpath, *parts = str(arg).split("::")
if self.config.option.pyargs:
parts[0] = self._tryconvertpyarg(parts[0])
relpath = parts[0].replace("/", os.sep)
path = self.config.invocation_dir.join(relpath, abs=True)
if not path.check():
strpath = self._tryconvertpyarg(strpath)
relpath = strpath.replace("/", os.sep)
fspath = self.config.invocation_dir.join(relpath, abs=True)
if not fspath.check():
if self.config.option.pyargs:
raise UsageError(
"file or package not found: " + arg + " (missing __init__.py?)"
)
raise UsageError("file not found: " + arg)
parts[0] = path.realpath()
return parts
fspath = fspath.realpath()
return (fspath, parts)
def matchnodes(self, matching, names):
self.trace("matchnodes", matching, names)
@ -675,11 +670,11 @@ class Session(nodes.FSCollector):
continue
assert isinstance(node, nodes.Collector)
key = (type(node), node.nodeid)
if key in self._node_cache:
rep = self._node_cache[key]
if key in self._collection_node_cache3:
rep = self._collection_node_cache3[key]
else:
rep = collect_one_node(node)
self._node_cache[key] = rep
self._collection_node_cache3[key] = rep
if rep.passed:
has_matched = False
for x in rep.result:

View File

@ -52,7 +52,8 @@ def pytest_addoption(parser):
"-k 'not test_method and not test_other' will eliminate the matches. "
"Additionally keywords are matched to classes and functions "
"containing extra names in their 'extra_keyword_matches' set, "
"as well as functions which have names assigned directly to them.",
"as well as functions which have names assigned directly to them. "
"The matching is case-insensitive.",
)
group._addoption(

View File

@ -57,7 +57,15 @@ class KeywordMapping:
return cls(mapped_names)
def __getitem__(self, subname):
for name in self._names:
"""Return whether subname is included within stored names.
The string inclusion check is case-insensitive.
"""
subname = subname.lower()
names = (name.lower() for name in self._names)
for name in names:
if subname in name:
return True
return False
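A standalone sketch of the case-insensitive substring check that ``KeywordMapping.__getitem__`` now performs; the stored name below is invented for the example.
def keyword_matches(stored_names, subname):
    # mirror of the lowered comparison above: both sides are lowercased
    # before the substring test, so -k "http" and -k "HTTP" behave alike
    subname = subname.lower()
    return any(subname in name.lower() for name in stored_names)

assert keyword_matches(["test_HTTPRequest_parsing"], "http")
assert keyword_matches(["test_HTTPRequest_parsing"], "HTTP")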

View File

@ -4,6 +4,7 @@ from collections import namedtuple
from collections.abc import MutableMapping
from typing import Iterable
from typing import List
from typing import Optional
from typing import Set
from typing import Union
@ -11,7 +12,6 @@ import attr
from .._code.source import getfslineno
from ..compat import ascii_escaped
from ..compat import ATTRS_EQ_FIELD
from ..compat import NOTSET
from _pytest.outcomes import fail
from _pytest.warning_types import PytestUnknownMarkWarning
@ -147,6 +147,14 @@ class Mark:
#: keyword arguments of the mark decorator
kwargs = attr.ib() # Dict[str, object]
#: source Mark for ids with parametrize Marks
_param_ids_from = attr.ib(type=Optional["Mark"], default=None, repr=False)
#: resolved/generated ids with parametrize Marks
_param_ids_generated = attr.ib(type=Optional[List[str]], default=None, repr=False)
def _has_param_ids(self):
return "ids" in self.kwargs or len(self.args) >= 4
def combined_with(self, other: "Mark") -> "Mark":
"""
:param other: the mark to combine with
@ -156,8 +164,20 @@ class Mark:
combines by appending args and merging the mappings
"""
assert self.name == other.name
# Remember source of ids with parametrize Marks.
param_ids_from = None # type: Optional[Mark]
if self.name == "parametrize":
if other._has_param_ids():
param_ids_from = other
elif self._has_param_ids():
param_ids_from = self
return Mark(
self.name, self.args + other.args, dict(self.kwargs, **other.kwargs)
self.name,
self.args + other.args,
dict(self.kwargs, **other.kwargs),
param_ids_from=param_ids_from,
)
@ -328,6 +348,7 @@ class MarkGenerator:
"custom marks to avoid this warning - for details, see "
"https://docs.pytest.org/en/latest/mark.html" % name,
PytestUnknownMarkWarning,
2,
)
return MarkDecorator(Mark(name, (), {}))
@ -371,35 +392,3 @@ class NodeKeywords(MutableMapping):
def __repr__(self):
return "<NodeKeywords for node {}>".format(self.node)
# mypy cannot find this overload, remove when on attrs>=19.2
@attr.s(hash=False, **{ATTRS_EQ_FIELD: False}) # type: ignore
class NodeMarkers:
"""
internal structure for storing marks belonging to a node
..warning::
unstable api
"""
own_markers = attr.ib(default=attr.Factory(list))
def update(self, add_markers):
"""update the own markers
"""
self.own_markers.extend(add_markers)
def find(self, name):
"""
find markers in own nodes or parent nodes
needs a better place
"""
for mark in self.own_markers:
if mark.name == name:
yield mark
def __iter__(self):
return iter(self.own_markers)

View File

@ -20,6 +20,7 @@ from _pytest.compat import cached_property
from _pytest.compat import TYPE_CHECKING
from _pytest.config import Config
from _pytest.config import PytestPluginManager
from _pytest.deprecated import NODE_USE_FROM_PARENT
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import FixtureLookupError
from _pytest.fixtures import FixtureLookupErrorRepr
@ -75,7 +76,16 @@ def ischildnode(baseid, nodeid):
return node_parts[: len(base_parts)] == base_parts
class Node:
class NodeMeta(type):
def __call__(self, *k, **kw):
warnings.warn(NODE_USE_FROM_PARENT.format(name=self.__name__), stacklevel=2)
return super().__call__(*k, **kw)
def _create(self, *k, **kw):
return super().__call__(*k, **kw)
class Node(metaclass=NodeMeta):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
@ -135,6 +145,24 @@ class Node:
if self.name != "()":
self._nodeid += "::" + self.name
@classmethod
def from_parent(cls, parent: "Node", **kw):
"""
Public Constructor for Nodes
This indirection got introduced in order to enable removing
the fragile logic from the node constructors.
Subclasses can use ``super().from_parent(...)`` when overriding the construction
:param parent: the parent node of this test Node
"""
if "config" in kw:
raise TypeError("config is not a valid argument for from_parent")
if "session" in kw:
raise TypeError("session is not a valid argument for from_parent")
return cls._create(parent=parent, **kw)
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
@ -370,12 +398,14 @@ class Collector(Node):
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
if excinfo.errisinstance(self.CollectError) and not self.config.getoption(
"fulltrace", False
):
exc = excinfo.value
return str(exc.args[0])
# Respect explicit tbstyle option, but default to "short"
# (None._repr_failure_py defaults to "long" without "fulltrace" option).
# (_repr_failure_py uses "long" with "fulltrace" option always).
tbstyle = self.config.getoption("tbstyle", "auto")
if tbstyle == "auto":
tbstyle = "short"
@ -437,6 +467,13 @@ class FSCollector(Collector):
self._norecursepatterns = self.config.getini("norecursedirs")
@classmethod
def from_parent(cls, parent, *, fspath):
"""
The public constructor
"""
return super().from_parent(parent=parent, fspath=fspath)
def _gethookproxy(self, fspath: py.path.local):
# check if we have the common case of running
# hooks with all conftest.py files

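For third-party plugins, a sketch of the migration that the ``from_parent`` constructors above enable; the ``YamlFile``/``YamlItem`` names and the spec payload are hypothetical.
import pytest

class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec=None):
        super().__init__(name, parent)
        self.spec = spec

    def runtest(self):
        assert self.spec is not None

class YamlFile(pytest.File):
    def collect(self):
        # before: YamlItem("case", self) -- direct construction now triggers
        # the NODE_USE_FROM_PARENT deprecation warning via NodeMeta.__call__
        # after: go through the public named constructor instead
        yield YamlItem.from_parent(self, name="case", spec={"expected": 1})
In a real plugin a pytest_collect_file hook would return the YamlFile for matching paths; it is omitted here to keep the sketch short.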
View File

@ -30,8 +30,8 @@ from _pytest.capture import MultiCapture
from _pytest.capture import SysCapture
from _pytest.compat import TYPE_CHECKING
from _pytest.config import _PluggyPlugin
from _pytest.config import ExitCode
from _pytest.fixtures import FixtureRequest
from _pytest.main import ExitCode
from _pytest.main import Session
from _pytest.monkeypatch import MonkeyPatch
from _pytest.nodes import Collector
@ -413,8 +413,8 @@ class RunResult:
def __init__(
self,
ret: Union[int, ExitCode],
outlines: Sequence[str],
errlines: Sequence[str],
outlines: List[str],
errlines: List[str],
duration: float,
) -> None:
try:
@ -561,10 +561,12 @@ class Testdir:
mp.delenv("TOX_ENV_DIR", raising=False)
# Discard outer pytest options.
mp.delenv("PYTEST_ADDOPTS", raising=False)
# Environment (updates) for inner runs.
# Ensure no user config is used.
tmphome = str(self.tmpdir)
self._env_run_update = {"HOME": tmphome, "USERPROFILE": tmphome}
mp.setenv("HOME", tmphome)
mp.setenv("USERPROFILE", tmphome)
# Do not use colors for inner runs by default.
mp.setenv("PY_COLORS", "0")
def __repr__(self):
return "<Testdir {!r}>".format(self.tmpdir)
@ -760,7 +762,7 @@ class Testdir:
:param arg: a :py:class:`py.path.local` instance of the file
"""
session = Session(config)
session = Session.from_config(config)
assert "::" not in str(arg)
p = py.path.local(arg)
config.hook.pytest_sessionstart(session=session)
@ -778,7 +780,7 @@ class Testdir:
"""
config = self.parseconfigure(path)
session = Session(config)
session = Session.from_config(config)
x = session.fspath.bestrelpath(path)
config.hook.pytest_sessionstart(session=session)
res = session.perform_collect([x], genitems=False)[0]
@ -870,12 +872,6 @@ class Testdir:
plugins = list(plugins)
finalizers = []
try:
# Do not load user config (during runs only).
mp_run = MonkeyPatch()
for k, v in self._env_run_update.items():
mp_run.setenv(k, v)
finalizers.append(mp_run.undo)
# Any sys.module or sys.path changes done while running pytest
# inline should be reverted after the test run completes to avoid
# clashing with later inline tests run within the same pytest test,
@ -1110,7 +1106,6 @@ class Testdir:
env["PYTHONPATH"] = os.pathsep.join(
filter(None, [os.getcwd(), env.get("PYTHONPATH", "")])
)
env.update(self._env_run_update)
kw["env"] = env
if stdin is Testdir.CLOSE_STDIN:
@ -1282,11 +1277,7 @@ class Testdir:
pytest.skip("pexpect.spawn not available")
logfile = self.tmpdir.join("spawn.out").open("wb")
# Do not load user config.
env = os.environ.copy()
env.update(self._env_run_update)
child = pexpect.spawn(cmd, logfile=logfile, env=env)
child = pexpect.spawn(cmd, logfile=logfile)
self.request.addfinalizer(logfile.close)
child.timeout = expect_timeout
return child
@ -1327,49 +1318,32 @@ class LineMatcher:
The constructor takes a list of lines without their trailing newlines, i.e.
``text.splitlines()``.
"""
def __init__(self, lines):
def __init__(self, lines: List[str]) -> None:
self.lines = lines
self._log_output = []
self._log_output = [] # type: List[str]
def str(self):
"""Return the entire original text."""
return "\n".join(self.lines)
def _getlines(self, lines2):
def _getlines(self, lines2: Union[str, Sequence[str], Source]) -> Sequence[str]:
if isinstance(lines2, str):
lines2 = Source(lines2)
if isinstance(lines2, Source):
lines2 = lines2.strip().lines
return lines2
def fnmatch_lines_random(self, lines2):
"""Check lines exist in the output using in any order.
Lines are checked using ``fnmatch.fnmatch``. The argument is a list of
lines which have to occur in the output, in any order.
def fnmatch_lines_random(self, lines2: Sequence[str]) -> None:
"""Check lines exist in the output in any order (using :func:`python:fnmatch.fnmatch`).
"""
self._match_lines_random(lines2, fnmatch)
def re_match_lines_random(self, lines2):
"""Check lines exist in the output using ``re.match``, in any order.
The argument is a list of lines which have to occur in the output, in
any order.
def re_match_lines_random(self, lines2: Sequence[str]) -> None:
"""Check lines exist in the output in any order (using :func:`python:re.match`).
"""
self._match_lines_random(lines2, lambda name, pat: re.match(pat, name))
self._match_lines_random(lines2, lambda name, pat: bool(re.match(pat, name)))
def _match_lines_random(self, lines2, match_func):
"""Check lines exist in the output.
The argument is a list of lines which have to occur in the output, in
any order. Each line can contain glob wildcards.
"""
def _match_lines_random(
self, lines2: Sequence[str], match_func: Callable[[str, str], bool]
) -> None:
lines2 = self._getlines(lines2)
for line in lines2:
for x in self.lines:
@ -1380,46 +1354,67 @@ class LineMatcher:
self._log("line %r not found in output" % line)
raise ValueError(self._log_text)
def get_lines_after(self, fnline):
def get_lines_after(self, fnline: str) -> Sequence[str]:
"""Return all lines following the given line in the text.
The given line can contain glob wildcards.
"""
for i, line in enumerate(self.lines):
if fnline == line or fnmatch(line, fnline):
return self.lines[i + 1 :]
raise ValueError("line %r not found in output" % fnline)
def _log(self, *args):
def _log(self, *args) -> None:
self._log_output.append(" ".join(str(x) for x in args))
@property
def _log_text(self):
def _log_text(self) -> str:
return "\n".join(self._log_output)
def fnmatch_lines(self, lines2):
"""Search captured text for matching lines using ``fnmatch.fnmatch``.
def fnmatch_lines(
self, lines2: Sequence[str], *, consecutive: bool = False
) -> None:
"""Check lines exist in the output (using :func:`python:fnmatch.fnmatch`).
The argument is a list of lines which have to match and can use glob
wildcards. If they do not match a pytest.fail() is called. The
matches and non-matches are also shown as part of the error message.
:param lines2: string patterns to match.
:param consecutive: match lines consecutively?
"""
__tracebackhide__ = True
self._match_lines(lines2, fnmatch, "fnmatch")
self._match_lines(lines2, fnmatch, "fnmatch", consecutive=consecutive)
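A short sketch of the new ``consecutive`` flag; the matched lines are invented for the example.
from _pytest.pytester import LineMatcher

matcher = LineMatcher(["setup", "step 1", "step 2", "teardown"])
# passes: the two patterns match adjacent lines, in order
matcher.fnmatch_lines(["step *", "step 2"], consecutive=True)
# would fail: "teardown" does not directly follow the matched "setup"
# matcher.fnmatch_lines(["setup", "teardown"], consecutive=True)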
def re_match_lines(self, lines2):
"""Search captured text for matching lines using ``re.match``.
def re_match_lines(
self, lines2: Sequence[str], *, consecutive: bool = False
) -> None:
"""Check lines exist in the output (using :func:`python:re.match`).
The argument is a list of lines which have to match using ``re.match``.
If they do not match a pytest.fail() is called.
The matches and non-matches are also shown as part of the error message.
:param lines2: string patterns to match.
:param consecutive: match lines consecutively?
"""
__tracebackhide__ = True
self._match_lines(lines2, lambda name, pat: re.match(pat, name), "re.match")
self._match_lines(
lines2,
lambda name, pat: bool(re.match(pat, name)),
"re.match",
consecutive=consecutive,
)
def _match_lines(self, lines2, match_func, match_nickname):
def _match_lines(
self,
lines2: Sequence[str],
match_func: Callable[[str, str], bool],
match_nickname: str,
*,
consecutive: bool = False
) -> None:
"""Underlying implementation of ``fnmatch_lines`` and ``re_match_lines``.
:param list[str] lines2: list of string patterns to match. The actual
@ -1429,28 +1424,40 @@ class LineMatcher:
pattern
:param str match_nickname: the nickname for the match function that
will be logged to stdout when a match occurs
:param consecutive: match lines consecutively?
"""
assert isinstance(lines2, collections.abc.Sequence)
if not isinstance(lines2, collections.abc.Sequence):
raise TypeError("invalid type for lines2: {}".format(type(lines2).__name__))
lines2 = self._getlines(lines2)
lines1 = self.lines[:]
nextline = None
extralines = []
__tracebackhide__ = True
wnick = len(match_nickname) + 1
started = False
for line in lines2:
nomatchprinted = False
while lines1:
nextline = lines1.pop(0)
if line == nextline:
self._log("exact match:", repr(line))
started = True
break
elif match_func(nextline, line):
self._log("%s:" % match_nickname, repr(line))
self._log(
"{:>{width}}".format("with:", width=wnick), repr(nextline)
)
started = True
break
else:
if consecutive and started:
msg = "no consecutive match: {!r}".format(line)
self._log(msg)
self._log(
"{:>{width}}".format("with:", width=wnick), repr(nextline)
)
self._fail(msg)
if not nomatchprinted:
self._log(
"{:>{width}}".format("nomatch:", width=wnick), repr(line)
@ -1464,7 +1471,7 @@ class LineMatcher:
self._fail(msg)
self._log_output = []
def no_fnmatch_line(self, pat):
def no_fnmatch_line(self, pat: str) -> None:
"""Ensure captured lines do not match the given pattern, using ``fnmatch.fnmatch``.
:param str pat: the pattern to match lines.
@ -1472,15 +1479,19 @@ class LineMatcher:
__tracebackhide__ = True
self._no_match_line(pat, fnmatch, "fnmatch")
def no_re_match_line(self, pat):
def no_re_match_line(self, pat: str) -> None:
"""Ensure captured lines do not match the given pattern, using ``re.match``.
:param str pat: the regular expression to match lines.
"""
__tracebackhide__ = True
self._no_match_line(pat, lambda name, pat: re.match(pat, name), "re.match")
self._no_match_line(
pat, lambda name, pat: bool(re.match(pat, name)), "re.match"
)
def _no_match_line(self, pat, match_func, match_nickname):
def _no_match_line(
self, pat: str, match_func: Callable[[str, str], bool], match_nickname: str
) -> None:
"""Ensure captured lines does not have a the given pattern, using ``fnmatch.fnmatch``
:param str pat: the pattern to match lines
@ -1501,8 +1512,12 @@ class LineMatcher:
self._log("{:>{width}}".format("and:", width=wnick), repr(line))
self._log_output = []
def _fail(self, msg):
def _fail(self, msg: str) -> None:
__tracebackhide__ = True
log_text = self._log_text
self._log_output = []
pytest.fail(log_text)
def str(self) -> str:
"""Return the entire original text."""
return "\n".join(self.lines)

View File

@ -9,9 +9,9 @@ from collections import Counter
from collections import defaultdict
from collections.abc import Sequence
from functools import partial
from textwrap import dedent
from typing import Dict
from typing import List
from typing import Optional
from typing import Tuple
from typing import Union
@ -21,6 +21,7 @@ import _pytest
from _pytest import fixtures
from _pytest import nodes
from _pytest._code import filter_traceback
from _pytest._code.code import ExceptionInfo
from _pytest._code.source import getfslineno
from _pytest.compat import ascii_escaped
from _pytest.compat import get_default_arg_names
@ -39,6 +40,7 @@ from _pytest.deprecated import FUNCARGNAMES
from _pytest.mark import MARK_GEN
from _pytest.mark import ParameterSet
from _pytest.mark.structures import get_unpacked_marks
from _pytest.mark.structures import Mark
from _pytest.mark.structures import normalize_mark_list
from _pytest.outcomes import fail
from _pytest.outcomes import skip
@ -125,7 +127,7 @@ def pytest_cmdline_main(config):
def pytest_generate_tests(metafunc):
for marker in metafunc.definition.iter_markers(name="parametrize"):
metafunc.parametrize(*marker.args, **marker.kwargs)
metafunc.parametrize(*marker.args, **marker.kwargs, _param_mark=marker)
def pytest_configure(config):
@ -194,8 +196,8 @@ def path_matches_patterns(path, patterns):
def pytest_pycollect_makemodule(path, parent):
if path.basename == "__init__.py":
return Package(path, parent)
return Module(path, parent)
return Package.from_parent(parent, fspath=path)
return Module.from_parent(parent, fspath=path)
@hookimpl(hookwrapper=True)
@ -207,7 +209,7 @@ def pytest_pycollect_makeitem(collector, name, obj):
# nothing was collected elsewhere, let's do it here
if safe_isclass(obj):
if collector.istestclass(obj, name):
outcome.force_result(Class(name, parent=collector))
outcome.force_result(Class.from_parent(collector, name=name, obj=obj))
elif collector.istestfunction(obj, name):
# mock seems to store unbound methods (issue473), normalize it
obj = getattr(obj, "__func__", obj)
@ -226,7 +228,7 @@ def pytest_pycollect_makeitem(collector, name, obj):
)
elif getattr(obj, "__test__", True):
if is_generator(obj):
res = Function(name, parent=collector)
res = Function.from_parent(collector, name=name)
reason = "yield tests were removed in pytest 4.0 - {name} will be ignored".format(
name=name
)
@ -391,7 +393,7 @@ class PyCollector(PyobjMixin, nodes.Collector):
cls = clscol and clscol.obj or None
fm = self.session._fixturemanager
definition = FunctionDefinition(name=name, parent=self, callobj=funcobj)
definition = FunctionDefinition.from_parent(self, name=name, callobj=funcobj)
fixtureinfo = definition._fixtureinfo
metafunc = Metafunc(
@ -406,7 +408,7 @@ class PyCollector(PyobjMixin, nodes.Collector):
self.ihook.pytest_generate_tests.call_extra(methods, dict(metafunc=metafunc))
if not metafunc._calls:
yield Function(name, parent=self, fixtureinfo=fixtureinfo)
yield Function.from_parent(self, name=name, fixtureinfo=fixtureinfo)
else:
# add funcargs() as fixturedefs to fixtureinfo.arg2fixturedefs
fixtures.add_funcarg_pseudo_fixture_def(self, metafunc, fm)
@ -418,9 +420,9 @@ class PyCollector(PyobjMixin, nodes.Collector):
for callspec in metafunc._calls:
subname = "{}[{}]".format(name, callspec.id)
yield Function(
yield Function.from_parent(
self,
name=subname,
parent=self,
callspec=callspec,
callobj=funcobj,
fixtureinfo=fixtureinfo,
@ -503,9 +505,7 @@ class Module(nodes.File, PyCollector):
try:
mod = self.fspath.pyimport(ensuresyspath=importmode)
except SyntaxError:
raise self.CollectError(
_pytest._code.ExceptionInfo.from_current().getrepr(style="short")
)
raise self.CollectError(ExceptionInfo.from_current().getrepr(style="short"))
except self.fspath.ImportMismatchError:
e = sys.exc_info()[1]
raise self.CollectError(
@ -518,8 +518,6 @@ class Module(nodes.File, PyCollector):
"unique basename for your test file modules" % e.args
)
except ImportError:
from _pytest._code.code import ExceptionInfo
exc_info = ExceptionInfo.from_current()
if self.config.getoption("verbose") < 2:
exc_info.traceback = exc_info.traceback.filter(filter_traceback)
@ -620,7 +618,7 @@ class Package(Module):
if init_module.check(file=1) and path_matches_patterns(
init_module, self.config.getini("python_files")
):
yield Module(init_module, self)
yield Module.from_parent(self, fspath=init_module)
pkg_prefixes = set()
for path in this_path.visit(rec=self._recurse, bf=True, sort=True):
# We will visit our own __init__.py file, in which case we skip it.
@ -671,6 +669,13 @@ def _get_first_non_fixture_func(obj, names):
class Class(PyCollector):
""" Collector for test methods. """
@classmethod
def from_parent(cls, parent, *, name, obj=None):
"""
The public constructor
"""
return super().from_parent(name=name, parent=parent)
def collect(self):
if not safe_getattr(self.obj, "__test__", True):
return []
@ -696,7 +701,7 @@ class Class(PyCollector):
self._inject_setup_class_fixture()
self._inject_setup_method_fixture()
return [Instance(name="()", parent=self)]
return [Instance.from_parent(self, name="()")]
def _inject_setup_class_fixture(self):
"""Injects a hidden autouse, class scoped fixture into the collected class object
@ -766,45 +771,6 @@ class Instance(PyCollector):
return self.obj
class FunctionMixin(PyobjMixin):
""" mixin for the code common to Function and Generator.
"""
def setup(self):
""" perform setup for this test function. """
if isinstance(self.parent, Instance):
self.parent.newinstance()
self.obj = self._getobj()
def _prunetraceback(self, excinfo):
if hasattr(self, "_obj") and not self.config.getoption("fulltrace", False):
code = _pytest._code.Code(get_real_func(self.obj))
path, firstlineno = code.path, code.firstlineno
traceback = excinfo.traceback
ntraceback = traceback.cut(path=path, firstlineno=firstlineno)
if ntraceback == traceback:
ntraceback = ntraceback.cut(path=path)
if ntraceback == traceback:
ntraceback = ntraceback.filter(filter_traceback)
if not ntraceback:
ntraceback = traceback
excinfo.traceback = ntraceback.filter()
# issue364: mark all but first and last frames to
# only show a single-line message for each frame
if self.config.getoption("tbstyle", "auto") == "auto":
if len(excinfo.traceback) > 2:
for entry in excinfo.traceback[1:-1]:
entry.set_repr_style("short")
def repr_failure(self, excinfo, outerr=None):
assert outerr is None, "XXX outerr usage is deprecated"
style = self.config.getoption("tbstyle", "auto")
if style == "auto":
style = "long"
return self._repr_failure_py(excinfo, style=style)
def hasinit(obj):
init = getattr(obj, "__init__", None)
if init:
@ -855,7 +821,7 @@ class CallSpec2:
@property
def id(self):
return "-".join(map(str, filter(None, self._idlist)))
return "-".join(map(str, self._idlist))
def setmulti2(self, valtypes, argnames, valset, id, marks, scopenum, param_index):
for arg, val in zip(argnames, valset):
@ -910,7 +876,16 @@ class Metafunc:
warnings.warn(FUNCARGNAMES, stacklevel=2)
return self.fixturenames
def parametrize(self, argnames, argvalues, indirect=False, ids=None, scope=None):
def parametrize(
self,
argnames,
argvalues,
indirect=False,
ids=None,
scope=None,
*,
_param_mark: Optional[Mark] = None
):
""" Add new invocations to the underlying test function using the list
of argvalues for the given argnames. Parametrization is performed
during the collection phase. If you need to setup expensive resources
@ -933,13 +908,22 @@ class Metafunc:
function so that it can perform more expensive setups during the
setup phase of a test rather than at collection time.
:arg ids: list of string ids, or a callable.
If strings, each is corresponding to the argvalues so that they are
part of the test id. If None is given as id of specific test, the
automatically generated id for that argument will be used.
If callable, it should take one argument (a single argvalue) and return
a string or return None. If None, the automatically generated id for that
argument will be used.
:arg ids: sequence of (or generator for) ids for ``argvalues``,
or a callable to return part of the id for each argvalue.
With sequences (and generators like ``itertools.count()``) the
returned ids should be of type ``string``, ``int``, ``float``,
``bool``, or ``None``.
They are mapped to the corresponding index in ``argvalues``.
``None`` means to use the auto-generated id.
If it is a callable it will be called for each entry in
``argvalues``, and the return value is used as part of the
auto-generated id for the whole set (where parts are joined with
dashes ("-")).
This is useful to provide more specific ids for certain items, e.g.
dates. Returning ``None`` will use an auto-generated id.
If no ids are provided they will be generated automatically from
the argvalues.
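A hedged sketch of the documented ``ids`` forms; the test bodies are illustrative only.
import itertools
import pytest

# ints/floats/bools in an ids sequence are converted with str(), and
# generators such as itertools.count() are sliced to the number of
# parameter sets, so this yields the ids "10", "11" and "12"
@pytest.mark.parametrize("x", [1, 2, 3], ids=itertools.count(10))
def test_counted_ids(x):
    assert x > 0

# a callable receives each argvalue; returning None falls back to the
# auto-generated id for that entry
@pytest.mark.parametrize("n", [1, 2], ids=lambda n: "big" if n > 1 else None)
def test_callable_ids(n):
    assert n >= 1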
@ -972,8 +956,20 @@ class Metafunc:
arg_values_types = self._resolve_arg_value_types(argnames, indirect)
self._validate_explicit_parameters(argnames, indirect)
# Use any already (possibly) generated ids with parametrize Marks.
if _param_mark and _param_mark._param_ids_from:
generated_ids = _param_mark._param_ids_from._param_ids_generated
if generated_ids is not None:
ids = generated_ids
ids = self._resolve_arg_ids(argnames, ids, parameters, item=self.definition)
# Store used (possibly generated) ids with parametrize Marks.
if _param_mark and _param_mark._param_ids_from and generated_ids is None:
object.__setattr__(_param_mark._param_ids_from, "_param_ids_generated", ids)
scopenum = scope2index(
scope, descr="parametrize() call in {}".format(self.function.__name__)
)
@ -1010,27 +1006,48 @@ class Metafunc:
:rtype: List[str]
:return: the list of ids for each argname given
"""
from _pytest._io.saferepr import saferepr
idfn = None
if callable(ids):
idfn = ids
ids = None
if ids:
func_name = self.function.__name__
if len(ids) != len(parameters):
msg = "In {}: {} parameter sets specified, with different number of ids: {}"
fail(msg.format(func_name, len(parameters), len(ids)), pytrace=False)
for id_value in ids:
if id_value is not None and not isinstance(id_value, str):
msg = "In {}: ids must be list of strings, found: {} (type: {!r})"
fail(
msg.format(func_name, saferepr(id_value), type(id_value)),
pytrace=False,
)
ids = self._validate_ids(ids, parameters, func_name)
ids = idmaker(argnames, parameters, idfn, ids, self.config, item=item)
return ids
def _validate_ids(self, ids, parameters, func_name):
try:
len(ids)
except TypeError:
try:
it = iter(ids)
except TypeError:
raise TypeError("ids must be a callable, sequence or generator")
else:
import itertools
new_ids = list(itertools.islice(it, len(parameters)))
else:
new_ids = list(ids)
if len(new_ids) != len(parameters):
msg = "In {}: {} parameter sets specified, with different number of ids: {}"
fail(msg.format(func_name, len(parameters), len(ids)), pytrace=False)
for idx, id_value in enumerate(new_ids):
if id_value is not None:
if isinstance(id_value, (float, int, bool)):
new_ids[idx] = str(id_value)
elif not isinstance(id_value, str):
from _pytest._io.saferepr import saferepr
msg = "In {}: ids must be list of string/float/int/bool, found: {} (type: {!r}) at index {}"
fail(
msg.format(func_name, saferepr(id_value), type(id_value), idx),
pytrace=False,
)
return new_ids
def _resolve_arg_value_types(self, argnames: List[str], indirect) -> Dict[str, str]:
"""Resolves if each parametrized argument must be considered a parameter to a fixture or a "funcarg"
to the function, based on the ``indirect`` parameter of the parametrized() call.
@ -1093,6 +1110,37 @@ class Metafunc:
pytrace=False,
)
def _validate_explicit_parameters(self, argnames, indirect):
"""
The argnames in *parametrize* should either be declared explicitly via
the ``indirect`` list or in the function signature
:param List[str] argnames: list of argument names passed to ``parametrize()``.
:param indirect: same ``indirect`` parameter of ``parametrize()``.
:raise ValueError: if validation fails
"""
if isinstance(indirect, bool) and indirect is True:
return
parametrized_argnames = list()
funcargnames = _pytest.compat.getfuncargnames(self.function)
if isinstance(indirect, Sequence):
for arg in argnames:
if arg not in indirect:
parametrized_argnames.append(arg)
elif indirect is False:
parametrized_argnames = argnames
usefixtures = fixtures.get_use_fixtures_for_node(self.definition)
for arg in parametrized_argnames:
if arg not in funcargnames and arg not in usefixtures:
func_name = self.function.__name__
msg = (
'In function "{func_name}":\n'
'Parameter "{arg}" should be declared explicitly via indirect or in function itself'
).format(func_name=func_name, arg=arg)
fail(msg, pytrace=False)
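An illustrative sketch of what this validation accepts and rejects; the fixture and test names are assumptions.
import pytest

@pytest.fixture
def db(request):
    # with indirect=True the parametrized value arrives as request.param
    return getattr(request, "param", None)

# rejected now: "db" is parametrized but neither appears in the test
# signature nor in `indirect`, so collection fails with
#   Parameter "db" should be declared explicitly via indirect or in function itself
#
# @pytest.mark.parametrize("db", ["sqlite", "postgres"])
# def test_implicit():
#     ...

# accepted: the parametrized name is routed to the fixture via indirect
@pytest.mark.parametrize("db", ["sqlite", "postgres"], indirect=True)
def test_explicit(db):
    assert db in ("sqlite", "postgres")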
def _find_parametrized_scope(argnames, arg2fixturedefs, indirect):
"""Find the most appropriate scope for a parametrized call based on its arguments.
@ -1144,8 +1192,7 @@ def _idval(val, argname, idx, idfn, item, config):
if generated_id is not None:
val = generated_id
except Exception as e:
# See issue https://github.com/pytest-dev/pytest/issues/2169
msg = "{}: error raised while trying to determine id of parameter '{}' at position {}\n"
msg = "{}: error raised while trying to determine id of parameter '{}' at position {}"
msg = msg.format(item.nodeid, argname, idx)
raise ValueError(msg) from e
elif config:
@ -1235,7 +1282,7 @@ def _show_fixtures_per_test(config, session):
else:
funcargspec = argname
tw.line(funcargspec, green=True)
fixture_doc = fixture_def.func.__doc__
fixture_doc = inspect.getdoc(fixture_def.func)
if fixture_doc:
write_docstring(tw, fixture_doc)
else:
@ -1320,7 +1367,7 @@ def _showfixtures_main(config, session):
tw.write(" -- %s" % bestrel, yellow=True)
tw.write("\n")
loc = getlocation(fixturedef.func, curdir)
doc = fixturedef.func.__doc__ or ""
doc = inspect.getdoc(fixturedef.func)
if doc:
write_docstring(tw, doc)
else:
@ -1329,21 +1376,11 @@ def _showfixtures_main(config, session):
def write_docstring(tw, doc, indent=" "):
doc = doc.rstrip()
if "\n" in doc:
firstline, rest = doc.split("\n", 1)
else:
firstline, rest = doc, ""
if firstline.strip():
tw.line(indent + firstline.strip())
if rest:
for line in dedent(rest).split("\n"):
tw.write(indent + line + "\n")
for line in doc.split("\n"):
tw.write(indent + line + "\n")
class Function(FunctionMixin, nodes.Item):
class Function(PyobjMixin, nodes.Item):
""" a Function Item is responsible for setting up and executing a
Python test function.
"""
@ -1409,6 +1446,13 @@ class Function(FunctionMixin, nodes.Item):
#: .. versionadded:: 3.0
self.originalname = originalname
@classmethod
def from_parent(cls, parent, **kw): # todo: determine sound type limitations
"""
The public constructor
"""
return super().from_parent(parent=parent, **kw)
def _initrequest(self):
self.funcargs = {}
self._request = fixtures.FixtureRequest(self)
@ -1440,10 +1484,40 @@ class Function(FunctionMixin, nodes.Item):
""" execute the underlying test function. """
self.ihook.pytest_pyfunc_call(pyfuncitem=self)
def setup(self):
super().setup()
def setup(self) -> None:
if isinstance(self.parent, Instance):
self.parent.newinstance()
self.obj = self._getobj()
fixtures.fillfixtures(self)
def _prunetraceback(self, excinfo: ExceptionInfo) -> None:
if hasattr(self, "_obj") and not self.config.getoption("fulltrace", False):
code = _pytest._code.Code(get_real_func(self.obj))
path, firstlineno = code.path, code.firstlineno
traceback = excinfo.traceback
ntraceback = traceback.cut(path=path, firstlineno=firstlineno)
if ntraceback == traceback:
ntraceback = ntraceback.cut(path=path)
if ntraceback == traceback:
ntraceback = ntraceback.filter(filter_traceback)
if not ntraceback:
ntraceback = traceback
excinfo.traceback = ntraceback.filter()
# issue364: mark all but first and last frames to
# only show a single-line message for each frame
if self.config.getoption("tbstyle", "auto") == "auto":
if len(excinfo.traceback) > 2:
for entry in excinfo.traceback[1:-1]:
entry.set_repr_style("short")
def repr_failure(self, excinfo, outerr=None):
assert outerr is None, "XXX outerr usage is deprecated"
style = self.config.getoption("tbstyle", "auto")
if style == "auto":
style = "long"
return self._repr_failure_py(excinfo, style=style)
class FunctionDefinition(Function):
"""

View File

@ -39,7 +39,7 @@ def pytest_addoption(parser):
default=None,
metavar="N",
help="show N slowest setup/test durations (N=0 for all).",
),
)
def pytest_terminal_summary(terminalreporter):

View File

@ -26,13 +26,15 @@ from more_itertools import collapse
import pytest
from _pytest import nodes
from _pytest.config import Config
from _pytest.main import ExitCode
from _pytest.config import ExitCode
from _pytest.main import Session
from _pytest.reports import CollectReport
from _pytest.reports import TestReport
REPORT_COLLECTING_RESOLUTION = 0.5
_REPORTCHARS_DEFAULT = "fE"
class MoreQuietAction(argparse.Action):
"""
@ -68,7 +70,7 @@ def pytest_addoption(parser):
default=0,
dest="verbose",
help="increase verbosity.",
),
)
group._addoption(
"-q",
"--quiet",
@ -76,7 +78,7 @@ def pytest_addoption(parser):
default=0,
dest="verbose",
help="decrease verbosity.",
),
)
group._addoption(
"--verbosity",
dest="verbose",
@ -88,12 +90,13 @@ def pytest_addoption(parser):
"-r",
action="store",
dest="reportchars",
default="",
default=_REPORTCHARS_DEFAULT,
metavar="chars",
help="show extra test summary info as specified by chars: (f)ailed, "
"(E)rror, (s)kipped, (x)failed, (X)passed, "
"(p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. "
"(w)arnings are enabled by default (see --disable-warnings).",
"(w)arnings are enabled by default (see --disable-warnings), "
"'N' can be used to reset the list. (default: 'fE').",
)
group._addoption(
"--disable-warnings",
@ -166,38 +169,42 @@ def pytest_configure(config: Config) -> None:
def getreportopt(config: Config) -> str:
reportopts = ""
reportchars = config.option.reportchars
if not config.option.disable_warnings and "w" not in reportchars:
reportchars += "w"
elif config.option.disable_warnings and "w" in reportchars:
reportchars = reportchars.replace("w", "")
old_aliases = {"F", "S"}
reportopts = ""
for char in reportchars:
if char in old_aliases:
char = char.lower()
if char == "a":
reportopts = "sxXwEf"
reportopts = "sxXEf"
elif char == "A":
reportopts = "PpsxXwEf"
break
reportopts = "PpsxXEf"
elif char == "N":
reportopts = ""
elif char not in reportopts:
reportopts += char
if not config.option.disable_warnings and "w" not in reportopts:
reportopts = "w" + reportopts
elif config.option.disable_warnings and "w" in reportopts:
reportopts = reportopts.replace("w", "")
return reportopts
@pytest.hookimpl(trylast=True) # after _pytest.runner
def pytest_report_teststatus(report: TestReport) -> Tuple[str, str, str]:
letter = "F"
if report.passed:
letter = "."
elif report.skipped:
letter = "s"
elif report.failed:
letter = "F"
if report.when != "call":
letter = "f"
# Report failed CollectReports as "error" (in line with pytest_collectreport).
outcome = report.outcome
if report.when == "collect" and outcome == "failed":
if report.when in ("collect", "setup", "teardown") and outcome == "failed":
outcome = "error"
letter = "E"
return outcome, letter, outcome.upper()
@ -994,9 +1001,7 @@ class TerminalReporter:
"x": show_xfailed,
"X": show_xpassed,
"f": partial(show_simple, "failed"),
"F": partial(show_simple, "failed"),
"s": show_skipped,
"S": show_skipped,
"p": partial(show_simple, "passed"),
"E": partial(show_simple, "error"),
} # type: Mapping[str, Callable[[List[str]], None]]
@ -1114,7 +1119,7 @@ def _get_main_color(stats) -> Tuple[str, List[str]]:
# main color
if "failed" in stats or "error" in stats:
main_color = "red"
elif "warnings" in stats or unknown_type_seen:
elif "warnings" in stats or "xpassed" in stats or unknown_type_seen:
main_color = "yellow"
elif "passed" in stats:
main_color = "green"

View File

@ -45,8 +45,30 @@ class TempPathFactory:
given_basetemp=config.option.basetemp, trace=config.trace.get("tmpdir")
)
def _ensure_relative_to_basetemp(self, basename: str):
basename = os.path.normpath(basename)
if (self.getbasetemp() / basename).resolve().parent != self.getbasetemp():
raise ValueError(
"{} is not a normalized and relative path".format(basename)
)
return basename
def mktemp(self, basename: str, numbered: bool = True) -> Path:
"""makes a temporary directory managed by the factory"""
"""Creates a new temporary directory managed by the factory.
:param basename:
Directory base name, must be a relative path.
:param numbered:
If True, ensure the directory is unique by adding a number
prefix greater than any existing one: ``basename="foo"`` and ``numbered=True``
means that this function will create directories named ``"foo-0"``,
``"foo-1"``, ``"foo-2"`` and so on.
:return:
The path to the new directory.
"""
basename = self._ensure_relative_to_basetemp(basename)
if not numbered:
p = self.getbasetemp().joinpath(basename)
p.mkdir()
@ -90,10 +112,9 @@ class TempdirFactory:
_tmppath_factory = attr.ib(type=TempPathFactory)
def mktemp(self, basename: str, numbered: bool = True):
"""Create a subdirectory of the base temporary directory and return it.
If ``numbered``, ensure the directory is unique by adding a number
prefix greater than any existing one.
def mktemp(self, basename: str, numbered: bool = True) -> py.path.local:
"""
Same as :meth:`TempPathFactory.mktemp`, but returns a ``py.path.local`` object.
"""
return py.path.local(self._tmppath_factory.mktemp(basename, numbered).resolve())
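A small sketch of the stricter ``mktemp`` behaviour using the ``tmp_path_factory`` fixture; the directory names are arbitrary.
import pytest

def test_mktemp_requires_relative_normalized(tmp_path_factory):
    # relative, normalized names are fine; numbered=True appends a counter
    created = tmp_path_factory.mktemp("data")
    assert created.name.startswith("data")

    # absolute or non-normalized names are rejected by
    # _ensure_relative_to_basetemp() with a ValueError
    with pytest.raises(ValueError):
        tmp_path_factory.mktemp("/abs/data")
    with pytest.raises(ValueError):
        tmp_path_factory.mktemp("../escape")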

View File

@ -1,4 +1,5 @@
""" discovery and running of std-library "unittest" style tests. """
import functools
import sys
import traceback
@ -23,7 +24,7 @@ def pytest_pycollect_makeitem(collector, name, obj):
except Exception:
return
# yes, so let's collect it
return UnitTestCase(name, parent=collector)
return UnitTestCase.from_parent(collector, name=name, obj=obj)
class UnitTestCase(Class):
@ -51,7 +52,7 @@ class UnitTestCase(Class):
if not getattr(x, "__test__", True):
continue
funcobj = getimfunc(x)
yield TestCaseFunction(name, parent=self, callobj=funcobj)
yield TestCaseFunction.from_parent(self, name=name, callobj=funcobj)
foundsomething = True
if not foundsomething:
@ -59,7 +60,8 @@ class UnitTestCase(Class):
if runtest is not None:
ut = sys.modules.get("twisted.trial.unittest", None)
if ut is None or runtest != ut.TestCase.runTest:
yield TestCaseFunction("runTest", parent=self)
# TODO: callobj consistency
yield TestCaseFunction.from_parent(self, name="runTest")
def _inject_setup_teardown_fixtures(self, cls):
"""Injects a hidden auto-use fixture to invoke setUpClass/setup_method and corresponding
@ -109,12 +111,15 @@ class TestCaseFunction(Function):
_testcase = None
def setup(self):
self._needs_explicit_tearDown = False
self._testcase = self.parent.obj(self.name)
self._obj = getattr(self._testcase, self.name)
if hasattr(self, "_request"):
self._request._fillfixtures()
def teardown(self):
if self._needs_explicit_tearDown:
self._testcase.tearDown()
self._testcase = None
self._obj = None
@ -187,29 +192,46 @@ class TestCaseFunction(Function):
def stopTest(self, testcase):
pass
def _handle_skip(self):
# implements the skipping machinery (see #2137)
# analogous to Python's Lib/unittest/case.py:run
testMethod = getattr(self._testcase, self._testcase._testMethodName)
if getattr(self._testcase.__class__, "__unittest_skip__", False) or getattr(
testMethod, "__unittest_skip__", False
):
# If the class or method was skipped.
skip_why = getattr(
self._testcase.__class__, "__unittest_skip_why__", ""
) or getattr(testMethod, "__unittest_skip_why__", "")
self._testcase._addSkip(self, self._testcase, skip_why)
return True
return False
def _expecting_failure(self, test_method) -> bool:
"""Return True if the given unittest method (or the entire class) is marked
with @expectedFailure"""
expecting_failure_method = getattr(
test_method, "__unittest_expecting_failure__", False
)
expecting_failure_class = getattr(self, "__unittest_expecting_failure__", False)
return bool(expecting_failure_class or expecting_failure_method)
def runtest(self):
if self.config.pluginmanager.get_plugin("pdbinvoke") is None:
# TODO: move testcase reporter into separate class, this shouldn't be on item
import unittest
testMethod = getattr(self._testcase, self._testcase._testMethodName)
class _GetOutOf_testPartExecutor(KeyboardInterrupt):
"""Helper exception to get out of unittests's testPartExecutor (see TestCase.run)."""
@functools.wraps(testMethod)
def wrapped_testMethod(*args, **kwargs):
"""Wrap the original method to call into pytest's machinery, so other pytest
features can have a chance to kick in (notably --pdb)"""
try:
self.ihook.pytest_pyfunc_call(pyfuncitem=self)
except unittest.SkipTest:
raise
except Exception as exc:
expecting_failure = self._expecting_failure(testMethod)
if expecting_failure:
raise
self._needs_explicit_tearDown = True
raise _GetOutOf_testPartExecutor(exc)
setattr(self._testcase, self._testcase._testMethodName, wrapped_testMethod)
try:
self._testcase(result=self)
else:
# disables tearDown and cleanups for post mortem debugging (see #1890)
if self._handle_skip():
return
self._testcase.debug()
except _GetOutOf_testPartExecutor as exc:
raise exc.args[0] from exc.args[0]
finally:
delattr(self._testcase, self._testcase._testMethodName)
def _prunetraceback(self, excinfo):
Function._prunetraceback(self, excinfo)
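
The wrapping above is the heart of this change: the bound unittest method is replaced with a wrapper that routes the call through the ``pytest_pyfunc_call`` hook, so features like ``--pdb`` can participate, and escapes unittest's result machinery via a ``KeyboardInterrupt`` subclass. A stripped-down sketch of that pattern (the names here are illustrative, not pytest's):

import functools

class _Escape(KeyboardInterrupt):
    # Sentinel raised to break out of the framework's own outcome handling.
    pass

def wrap_bound_method(case, method_name, call_through):
    original = getattr(case, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        try:
            call_through()  # e.g. invoke pytest's call machinery instead
        except Exception as exc:
            raise _Escape(exc)

    setattr(case, method_name, wrapper)
    return original  # so the caller can restore it in a finally block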

View File

@ -151,6 +151,10 @@ def _issue_warning_captured(warning, hook, stacklevel):
warnings.warn(warning, stacklevel=stacklevel)
# Mypy can't infer that record=True means records is not None; help it.
assert records is not None
frame = sys._getframe(stacklevel - 1)
location = frame.f_code.co_filename, frame.f_lineno, frame.f_code.co_name
hook.pytest_warning_captured.call_historic(
kwargs=dict(warning_message=records[0], when="config", item=None)
kwargs=dict(
warning_message=records[0], when="config", item=None, location=location
)
)
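
The new ``location`` triple is obtained with ``sys._getframe``: the frame ``stacklevel - 1`` levels above the helper is treated as the origin of the warning. A standalone sketch of the same lookup (the function name is illustrative):

import sys

def caller_location(stacklevel: int = 1):
    # Walk `stacklevel` frames up from this call and report where we landed.
    frame = sys._getframe(stacklevel)
    return frame.f_code.co_filename, frame.f_lineno, frame.f_code.co_name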

View File

@ -6,6 +6,7 @@ from _pytest import __version__
from _pytest.assertion import register_assert_rewrite
from _pytest.compat import _setup_collect_fakemodule
from _pytest.config import cmdline
from _pytest.config import ExitCode
from _pytest.config import hookimpl
from _pytest.config import hookspec
from _pytest.config import main
@ -15,7 +16,6 @@ from _pytest.fixtures import fillfixtures as _fillfuncargs
from _pytest.fixtures import fixture
from _pytest.fixtures import yield_fixture
from _pytest.freeze_support import freeze_includes
from _pytest.main import ExitCode
from _pytest.main import Session
from _pytest.mark import MARK_GEN as mark
from _pytest.mark import param
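
``ExitCode`` now lives in ``_pytest.config`` and is re-exported from the ``pytest`` namespace via the import added above. It is an ``IntEnum``, so it compares equal to the raw process exit statuses:

from _pytest.config import ExitCode  # new canonical location

assert ExitCode.OK == 0
assert ExitCode.TESTS_FAILED == 1
assert ExitCode.USAGE_ERROR == 4
assert ExitCode.NO_TESTS_COLLECTED == 5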

View File

@ -8,7 +8,7 @@ import py
import pytest
from _pytest.compat import importlib_metadata
from _pytest.main import ExitCode
from _pytest.config import ExitCode
def prepend_pythonpath(*dirs):
@ -414,7 +414,7 @@ class TestGeneralUsage:
def test_report_all_failed_collections_initargs(self, testdir):
testdir.makeconftest(
"""
from _pytest.main import ExitCode
from _pytest.config import ExitCode
def pytest_sessionfinish(exitstatus):
assert exitstatus == ExitCode.USAGE_ERROR
@ -1287,3 +1287,31 @@ def test_pdb_can_be_rewritten(testdir):
]
)
assert result.ret == 1
def test_tee_stdio_captures_and_live_prints(testdir):
testpath = testdir.makepyfile(
"""
import sys
def test_simple():
print ("@this is stdout@")
print ("@this is stderr@", file=sys.stderr)
"""
)
result = testdir.runpytest_subprocess(
testpath,
"--capture=tee-sys",
"--junitxml=output.xml",
"-o",
"junit_logging=all",
)
# ensure stdout/stderr were 'live printed'
result.stdout.fnmatch_lines(["*@this is stdout@*"])
result.stderr.fnmatch_lines(["*@this is stderr@*"])
# now ensure the output is in the junitxml
with open(os.path.join(testdir.tmpdir.strpath, "output.xml"), "r") as f:
fullXml = f.read()
assert "@this is stdout@\n" in fullXml
assert "@this is stderr@\n" in fullXml

View File

@ -0,0 +1,28 @@
import re
from io import StringIO
import pytest
from _pytest._io import TerminalWriter
@pytest.mark.parametrize(
"has_markup, expected",
[
pytest.param(
True, "{kw}assert{hl-reset} {number}0{hl-reset}\n", id="with markup"
),
pytest.param(False, "assert 0\n", id="no markup"),
],
)
def test_code_highlight(has_markup, expected, color_mapping):
f = StringIO()
tw = TerminalWriter(f)
tw.hasmarkup = has_markup
tw._write_source(["assert 0"])
assert f.getvalue().splitlines(keepends=True) == color_mapping.format([expected])
with pytest.raises(
ValueError,
match=re.escape("indents size (2) should have same size as lines (1)"),
):
tw._write_source(["assert 0"], [" ", " "])

View File

@ -1,6 +1,9 @@
import re
import sys
from typing import List
import pytest
from _pytest.pytester import RunResult
from _pytest.pytester import Testdir
if sys.gettrace():
@ -78,6 +81,12 @@ def tw_mock():
def write(self, msg, **kw):
self.lines.append((TWMock.WRITE, msg))
def _write_source(self, lines, indents=()):
if not indents:
indents = [""] * len(lines)
for indent, line in zip(indents, lines):
self.line(indent + line)
def line(self, line, **kw):
self.lines.append(line)
@ -125,3 +134,64 @@ def dummy_yaml_custom_test(testdir):
def testdir(testdir: Testdir) -> Testdir:
testdir.monkeypatch.setenv("PYTEST_DISABLE_PLUGIN_AUTOLOAD", "1")
return testdir
@pytest.fixture(scope="session")
def color_mapping():
"""Returns a utility class which can replace keys in strings in the form "{NAME}"
by their equivalent ANSI escape codes in the terminal.
Used by tests which check the actual colors output by pytest.
"""
class ColorMapping:
COLORS = {
"red": "\x1b[31m",
"green": "\x1b[32m",
"yellow": "\x1b[33m",
"bold": "\x1b[1m",
"reset": "\x1b[0m",
"kw": "\x1b[94m",
"hl-reset": "\x1b[39;49;00m",
"function": "\x1b[92m",
"number": "\x1b[94m",
"str": "\x1b[33m",
"print": "\x1b[96m",
}
RE_COLORS = {k: re.escape(v) for k, v in COLORS.items()}
@classmethod
def format(cls, lines: List[str]) -> List[str]:
"""Straightforward replacement of color names to their ASCII codes."""
return [line.format(**cls.COLORS) for line in lines]
@classmethod
def format_for_fnmatch(cls, lines: List[str]) -> List[str]:
"""Replace color names for use with LineMatcher.fnmatch_lines"""
return [line.format(**cls.COLORS).replace("[", "[[]") for line in lines]
@classmethod
def format_for_rematch(cls, lines: List[str]) -> List[str]:
"""Replace color names for use with LineMatcher.re_match_lines"""
return [line.format(**cls.RE_COLORS) for line in lines]
@classmethod
def requires_ordered_markup(cls, result: RunResult):
"""Should be called if a test expects markup to appear in the output
in the order they were passed, for example:
tw.write(line, bold=True, red=True)
In Python 3.5 there's no guarantee that the generated markup will appear
in the order called, so we do some limited color testing and skip the rest of
the test.
"""
if sys.version_info < (3, 6):
# TerminalWriter.write applies markup via keyword arguments, whose order
# is only guaranteed on py36+, so that's required for the markup to appear in the expected order
output = result.stdout.str()
assert "test session starts" in output
assert "\x1b[1m" in output
pytest.skip("doing limited testing because lacking ordered markup")
return ColorMapping
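
A sketch of how a test might consume this fixture (the test body is illustrative; ``format_for_fnmatch`` escapes ``[`` so the escape codes survive ``fnmatch_lines``):

def test_failure_summary_is_red(testdir, color_mapping):
    testdir.makepyfile("def test(): assert 0")
    result = testdir.runpytest("--color=yes")
    result.stdout.fnmatch_lines(
        color_mapping.format_for_fnmatch(["*{red}{bold}1 failed*"])
    )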

View File

@ -1,5 +1,8 @@
import inspect
import pytest
from _pytest import deprecated
from _pytest import nodes
@pytest.mark.filterwarnings("default")
@ -73,3 +76,58 @@ def test_warn_about_imminent_junit_family_default_change(testdir, junit_family):
result.stdout.no_fnmatch_line(warning_msg)
else:
result.stdout.fnmatch_lines([warning_msg])
def test_node_direct_ctor_warning():
class MockConfig:
pass
ms = MockConfig()
with pytest.warns(
DeprecationWarning,
match="direct construction of .* has been deprecated, please use .*.from_parent",
) as w:
nodes.Node(name="test", config=ms, session=ms, nodeid="None")
assert w[0].lineno == inspect.currentframe().f_lineno - 1
assert w[0].filename == __file__
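
The companion change throughout this commit is that collection nodes are now built via ``from_parent`` instead of direct construction. In a third-party plugin the new style looks roughly like this (``YamlFile`` is a hypothetical collector):

import pytest

class YamlFile(pytest.File):
    def collect(self):
        return []  # a real plugin would yield items parsed from the file

def pytest_collect_file(path, parent):
    if path.ext == ".yaml":
        # old style, which now triggers the deprecation warning: YamlFile(path, parent=parent)
        return YamlFile.from_parent(parent, fspath=path)
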
def assert_no_print_logs(testdir, args):
result = testdir.runpytest(*args)
result.stdout.fnmatch_lines(
[
"*--no-print-logs is deprecated and scheduled for removal in pytest 6.0*",
"*Please use --show-capture instead.*",
]
)
@pytest.mark.filterwarnings("default")
def test_noprintlogs_is_deprecated_cmdline(testdir):
testdir.makepyfile(
"""
def test_foo():
pass
"""
)
assert_no_print_logs(testdir, ("--no-print-logs",))
@pytest.mark.filterwarnings("default")
def test_noprintlogs_is_deprecated_ini(testdir):
testdir.makeini(
"""
[pytest]
log_print=False
"""
)
testdir.makepyfile(
"""
def test_foo():
pass
"""
)
assert_no_print_logs(testdir, ())

View File

@ -4,7 +4,7 @@ import textwrap
import _pytest._code
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
from _pytest.nodes import Collector
@ -281,10 +281,10 @@ class TestFunction:
from _pytest.fixtures import FixtureManager
config = testdir.parseconfigure()
session = testdir.Session(config)
session = testdir.Session.from_config(config)
session._fixturemanager = FixtureManager(session)
return pytest.Function(config=config, parent=session, **kwargs)
return pytest.Function.from_parent(parent=session, **kwargs)
def test_function_equality(self, testdir):
def func1():
@ -463,7 +463,7 @@ class TestFunction:
return '3'
@pytest.mark.parametrize('fix2', ['2'])
def test_it(fix1):
def test_it(fix1, fix2):
assert fix1 == '21'
assert not fix3_instantiated
"""
@ -492,6 +492,19 @@ class TestFunction:
)
assert "foo" in keywords[1] and "bar" in keywords[1] and "baz" in keywords[1]
def test_parametrize_with_empty_string_arguments(self, testdir):
items = testdir.getitems(
"""\
import pytest
@pytest.mark.parametrize('v', ('', ' '))
@pytest.mark.parametrize('w', ('', ' '))
def test(v, w): ...
"""
)
names = {item.name for item in items}
assert names == {"test[-]", "test[ -]", "test[- ]", "test[ - ]"}
def test_function_equality_with_callspec(self, testdir):
items = testdir.getitems(
"""
@ -1024,7 +1037,7 @@ class TestReportInfo:
return "ABCDE", 42, "custom"
def pytest_pycollect_makeitem(collector, name, obj):
if name == "test_func":
return MyFunction(name, parent=collector)
return MyFunction.from_parent(name=name, parent=collector)
"""
)
item = testdir.getitem("def test_func(): pass")
@ -1210,6 +1223,28 @@ def test_syntax_error_with_non_ascii_chars(testdir):
result.stdout.fnmatch_lines(["*ERROR collecting*", "*SyntaxError*", "*1 error in*"])
def test_collecterror_with_fulltrace(testdir):
testdir.makepyfile("assert 0")
result = testdir.runpytest("--fulltrace")
result.stdout.fnmatch_lines(
[
"collected 0 items / 1 error",
"",
"*= ERRORS =*",
"*_ ERROR collecting test_collecterror_with_fulltrace.py _*",
"",
"*/_pytest/python.py:*: ",
"_ _ _ _ _ _ _ _ *",
"",
"> assert 0",
"E assert 0",
"",
"test_collecterror_with_fulltrace.py:1: AssertionError",
"*! Interrupted: 1 error during collection !*",
]
)
def test_skip_duplicates_by_default(testdir):
"""Test for issue https://github.com/pytest-dev/pytest/issues/1609 (#1609)

View File

@ -10,7 +10,7 @@ class TestOEJSKITSpecials:
import pytest
def pytest_pycollect_makeitem(collector, name, obj):
if name == "MyClass":
return MyCollector(name, parent=collector)
return MyCollector.from_parent(collector, name=name)
class MyCollector(pytest.Collector):
def reportinfo(self):
return self.fspath, 3, "xyz"
@ -40,7 +40,7 @@ class TestOEJSKITSpecials:
import pytest
def pytest_pycollect_makeitem(collector, name, obj):
if name == "MyClass":
return MyCollector(name, parent=collector)
return MyCollector.from_parent(collector, name=name)
class MyCollector(pytest.Collector):
def reportinfo(self):
return self.fspath, 3, "xyz"

View File

@ -9,6 +9,8 @@ from hypothesis import strategies
import pytest
from _pytest import fixtures
from _pytest import python
from _pytest.outcomes import fail
from _pytest.python import _idval
class TestMetafunc:
@ -26,9 +28,12 @@ class TestMetafunc:
class DefinitionMock(python.FunctionDefinition):
obj = attr.ib()
def listchain(self):
return []
names = fixtures.getfuncargnames(func)
fixtureinfo = FixtureInfo(names)
definition = DefinitionMock(func)
definition = DefinitionMock._create(func)
return python.Metafunc(definition, fixtureinfo, config)
def test_no_funcargs(self):
@ -61,6 +66,39 @@ class TestMetafunc:
pytest.raises(ValueError, lambda: metafunc.parametrize("y", [5, 6]))
pytest.raises(ValueError, lambda: metafunc.parametrize("y", [5, 6]))
with pytest.raises(
TypeError, match="^ids must be a callable, sequence or generator$"
):
metafunc.parametrize("y", [5, 6], ids=42)
def test_parametrize_error_iterator(self):
def func(x):
raise NotImplementedError()
class Exc(Exception):
def __repr__(self):
return "Exc(from_gen)"
def gen():
yield 0
yield None
yield Exc()
metafunc = self.Metafunc(func)
metafunc.parametrize("x", [1, 2], ids=gen())
assert [(x.funcargs, x.id) for x in metafunc._calls] == [
({"x": 1}, "0"),
({"x": 2}, "2"),
]
with pytest.raises(
fail.Exception,
match=(
r"In func: ids must be list of string/float/int/bool, found:"
r" Exc\(from_gen\) \(type: <class .*Exc'>\) at index 2"
),
):
metafunc.parametrize("x", [1, 2, 3], ids=gen())
def test_parametrize_bad_scope(self):
def func(x):
pass
@ -167,6 +205,26 @@ class TestMetafunc:
("x", "y"), [("abc", "def"), ("ghi", "jkl")], ids=["one"]
)
def test_parametrize_ids_iterator_without_mark(self):
import itertools
def func(x, y):
pass
it = itertools.count()
metafunc = self.Metafunc(func)
metafunc.parametrize("x", [1, 2], ids=it)
metafunc.parametrize("y", [3, 4], ids=it)
ids = [x.id for x in metafunc._calls]
assert ids == ["0-2", "0-3", "1-2", "1-3"]
metafunc = self.Metafunc(func)
metafunc.parametrize("x", [1, 2], ids=it)
metafunc.parametrize("y", [3, 4], ids=it)
ids = [x.id for x in metafunc._calls]
assert ids == ["4-6", "4-7", "5-6", "5-7"]
def test_parametrize_empty_list(self):
"""#510"""
@ -209,8 +267,6 @@ class TestMetafunc:
deadline=400.0
) # very close to std deadline and CI boxes are not reliable in CPU power
def test_idval_hypothesis(self, value):
from _pytest.python import _idval
escaped = _idval(value, "a", 6, None, item=None, config=None)
assert isinstance(escaped, str)
escaped.encode("ascii")
@ -221,8 +277,6 @@ class TestMetafunc:
escapes if they're not.
"""
from _pytest.python import _idval
values = [
("", ""),
("ascii", "ascii"),
@ -242,7 +296,6 @@ class TestMetafunc:
disable_test_id_escaping_and_forfeit_all_rights_to_community_support
option. (#5294)
"""
from _pytest.python import _idval
class MockConfig:
def __init__(self, config):
@ -274,8 +327,6 @@ class TestMetafunc:
"binary escape", where any byte < 127 is escaped into its hex form.
- python3: bytes objects are always escaped using "binary escape".
"""
from _pytest.python import _idval
values = [
(b"", ""),
(b"\xc3\xb4\xff\xe4", "\\xc3\\xb4\\xff\\xe4"),
@ -289,7 +340,6 @@ class TestMetafunc:
"""unittest for the expected behavior to obtain ids for parametrized
values that are classes or functions: their __name__.
"""
from _pytest.python import _idval
class TestClass:
pass
@ -534,9 +584,22 @@ class TestMetafunc:
@pytest.mark.parametrize("arg", ({1: 2}, {3, 4}), ids=ids)
def test(arg):
assert arg
@pytest.mark.parametrize("arg", (1, 2.0, True), ids=ids)
def test_int(arg):
assert arg
"""
)
assert testdir.runpytest().ret == 0
result = testdir.runpytest("-vv", "-s")
result.stdout.fnmatch_lines(
[
"test_parametrize_ids_returns_non_string.py::test[arg0] PASSED",
"test_parametrize_ids_returns_non_string.py::test[arg1] PASSED",
"test_parametrize_ids_returns_non_string.py::test_int[1] PASSED",
"test_parametrize_ids_returns_non_string.py::test_int[2.0] PASSED",
"test_parametrize_ids_returns_non_string.py::test_int[True] PASSED",
]
)
def test_idmaker_with_ids(self):
from _pytest.python import idmaker
@ -1186,12 +1249,12 @@ class TestMetafuncFunctional:
result.stdout.fnmatch_lines(["* 1 skipped *"])
def test_parametrized_ids_invalid_type(self, testdir):
"""Tests parametrized with ids as non-strings (#1857)."""
"""Test error with non-strings/non-ints, without generator (#1857)."""
testdir.makepyfile(
"""
import pytest
@pytest.mark.parametrize("x, expected", [(10, 20), (40, 80)], ids=(None, 2))
@pytest.mark.parametrize("x, expected", [(1, 2), (3, 4), (5, 6)], ids=(None, 2, type))
def test_ids_numbers(x,expected):
assert x * 2 == expected
"""
@ -1199,7 +1262,8 @@ class TestMetafuncFunctional:
result = testdir.runpytest()
result.stdout.fnmatch_lines(
[
"*In test_ids_numbers: ids must be list of strings, found: 2 (type: *'int'>)*"
"In test_ids_numbers: ids must be list of string/float/int/bool,"
" found: <class 'type'> (type: <class 'type'>) at index 2"
]
)
@ -1780,3 +1844,87 @@ class TestMarkersWithParametrization:
)
result = testdir.runpytest()
result.assert_outcomes(passed=1)
def test_parametrize_iterator(self, testdir):
testdir.makepyfile(
"""
import itertools
import pytest
id_parametrize = pytest.mark.parametrize(
ids=("param%d" % i for i in itertools.count())
)
@id_parametrize('y', ['a', 'b'])
def test1(y):
pass
@id_parametrize('y', ['a', 'b'])
def test2(y):
pass
@pytest.mark.parametrize("a, b", [(1, 2), (3, 4)], ids=itertools.count())
def test_converted_to_str(a, b):
pass
"""
)
result = testdir.runpytest("-vv", "-s")
result.stdout.fnmatch_lines(
[
"test_parametrize_iterator.py::test1[param0] PASSED",
"test_parametrize_iterator.py::test1[param1] PASSED",
"test_parametrize_iterator.py::test2[param0] PASSED",
"test_parametrize_iterator.py::test2[param1] PASSED",
"test_parametrize_iterator.py::test_converted_to_str[0] PASSED",
"test_parametrize_iterator.py::test_converted_to_str[1] PASSED",
"*= 6 passed in *",
]
)
def test_parametrize_explicit_parameters_func(self, testdir):
testdir.makepyfile(
"""
import pytest
@pytest.fixture
def fixture(arg):
return arg
@pytest.mark.parametrize("arg", ["baz"])
def test_without_arg(fixture):
assert "baz" == fixture
"""
)
result = testdir.runpytest()
result.assert_outcomes(error=1)
result.stdout.fnmatch_lines(
[
'*In function "test_without_arg"*',
'*Parameter "arg" should be declared explicitly via indirect or in function itself*',
]
)
def test_parametrize_explicit_parameters_method(self, testdir):
testdir.makepyfile(
"""
import pytest
class Test:
@pytest.fixture
def test_fixture(self, argument):
return argument
@pytest.mark.parametrize("argument", ["foobar"])
def test_without_argument(self, test_fixture):
assert "foobar" == test_fixture
"""
)
result = testdir.runpytest()
result.assert_outcomes(error=1)
result.stdout.fnmatch_lines(
[
'*In function "test_without_argument"*',
'*Parameter "argument" should be declared explicitly via indirect or in function itself*',
]
)
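
For code that relied on the old accidental behaviour, the error message points at the two supported routes: declare the name in the test signature, or parametrize a fixture of the same name via ``indirect``. A sketch of the indirect route (fixture and test names are illustrative):

import pytest

@pytest.fixture
def arg(request):
    # receives each parametrized value via request.param
    return request.param * 2

@pytest.mark.parametrize("arg", ["baz"], indirect=True)
def test_indirect(arg):
    assert arg == "bazbaz"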

View File

@ -21,7 +21,7 @@ from _pytest.assertion.rewrite import get_cache_dir
from _pytest.assertion.rewrite import PYC_TAIL
from _pytest.assertion.rewrite import PYTEST_TAG
from _pytest.assertion.rewrite import rewrite_asserts
from _pytest.main import ExitCode
from _pytest.config import ExitCode
from _pytest.pathlib import Path

View File

@ -6,7 +6,7 @@ import sys
import py
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
pytest_plugins = ("pytester",)
@ -683,11 +683,28 @@ class TestLastFailed:
result.stdout.fnmatch_lines(["*2 passed*"])
result = testdir.runpytest("--lf", "--lfnf", "all")
result.stdout.fnmatch_lines(["*2 passed*"])
# Ensure the list passed to pytest_deselected is a copy,
# and not a reference which is cleared right after.
testdir.makeconftest(
"""
deselected = []
def pytest_deselected(items):
global deselected
deselected = items
def pytest_sessionfinish():
print("\\ndeselected={}".format(len(deselected)))
"""
)
result = testdir.runpytest("--lf", "--lfnf", "none")
result.stdout.fnmatch_lines(
[
"collected 2 items / 2 deselected",
"run-last-failure: no previously failed tests, deselecting all items.",
"deselected=2",
"* 2 deselected in *",
]
)

View File

@ -15,7 +15,7 @@ from typing import TextIO
import pytest
from _pytest import capture
from _pytest.capture import CaptureManager
from _pytest.main import ExitCode
from _pytest.config import ExitCode
# note: py.io capture tests were copied from
# pylib 1.4.20.dev2 (rev 13d9af95547e)
@ -34,6 +34,10 @@ def StdCapture(out=True, err=True, in_=True):
return capture.MultiCapture(out, err, in_, Capture=capture.SysCapture)
def TeeStdCapture(out=True, err=True, in_=True):
return capture.MultiCapture(out, err, in_, Capture=capture.TeeSysCapture)
class TestCaptureManager:
def test_getmethod_default_no_fd(self, monkeypatch):
from _pytest.capture import pytest_addoption
@ -473,9 +477,9 @@ class TestCaptureFixture:
result.stdout.fnmatch_lines(
[
"*test_one*",
"*capsys*capfd*same*time*",
"E * cannot use capfd and capsys at the same time",
"*test_two*",
"*capfd*capsys*same*time*",
"E * cannot use capsys and capfd at the same time",
"*2 failed in*",
]
)
@ -818,6 +822,25 @@ class TestCaptureIO:
assert f.getvalue() == "foo\r\n"
class TestCaptureAndPassthroughIO(TestCaptureIO):
def test_text(self):
sio = io.StringIO()
f = capture.CaptureAndPassthroughIO(sio)
f.write("hello")
s1 = f.getvalue()
assert s1 == "hello"
s2 = sio.getvalue()
assert s2 == s1
f.close()
sio.close()
def test_unicode_and_str_mixture(self):
sio = io.StringIO()
f = capture.CaptureAndPassthroughIO(sio)
f.write("\u00f6")
pytest.raises(TypeError, f.write, b"hello")
def test_dontreadfrominput():
from _pytest.capture import DontReadFromInput
@ -1114,6 +1137,23 @@ class TestStdCapture:
pytest.raises(IOError, sys.stdin.read)
class TestTeeStdCapture(TestStdCapture):
captureclass = staticmethod(TeeStdCapture)
def test_capturing_error_recursive(self):
""" for TeeStdCapture since we passthrough stderr/stdout, cap1
should get all output, while cap2 should only get "cap2\n" """
with self.getcapture() as cap1:
print("cap1")
with self.getcapture() as cap2:
print("cap2")
out2, err2 = cap2.readouterr()
out1, err1 = cap1.readouterr()
assert out1 == "cap1\ncap2\n"
assert out2 == "cap2\n"
class TestStdCaptureFD(TestStdCapture):
pytestmark = needsosdup
captureclass = staticmethod(StdCaptureFD)
@ -1254,7 +1294,7 @@ def test_close_and_capture_again(testdir):
)
@pytest.mark.parametrize("method", ["SysCapture", "FDCapture"])
@pytest.mark.parametrize("method", ["SysCapture", "FDCapture", "TeeSysCapture"])
def test_capturing_and_logging_fundamentals(testdir, method):
if method == "StdCaptureFD" and not hasattr(os, "dup"):
pytest.skip("need os.dup")

View File

@ -6,8 +6,8 @@ import textwrap
import py
import pytest
from _pytest.config import ExitCode
from _pytest.main import _in_venv
from _pytest.main import ExitCode
from _pytest.main import Session
from _pytest.pytester import Testdir
@ -79,7 +79,7 @@ class TestCollector:
pass
def pytest_collect_file(path, parent):
if path.ext == ".xxx":
return CustomFile(path, parent=parent)
return CustomFile.from_parent(fspath=path, parent=parent)
"""
)
node = testdir.getpathnode(hello)
@ -442,7 +442,7 @@ class TestCustomConftests:
class TestSession:
def test_parsearg(self, testdir):
def test_parsearg(self, testdir) -> None:
p = testdir.makepyfile("def test_func(): pass")
subdir = testdir.mkdir("sub")
subdir.ensure("__init__.py")
@ -450,16 +450,16 @@ class TestSession:
p.move(target)
subdir.chdir()
config = testdir.parseconfig(p.basename)
rcol = Session(config=config)
rcol = Session.from_config(config)
assert rcol.fspath == subdir
parts = rcol._parsearg(p.basename)
fspath, parts = rcol._parsearg(p.basename)
assert parts[0] == target
assert fspath == target
assert len(parts) == 0
fspath, parts = rcol._parsearg(p.basename + "::test_func")
assert fspath == target
assert parts[0] == "test_func"
assert len(parts) == 1
parts = rcol._parsearg(p.basename + "::test_func")
assert parts[0] == target
assert parts[1] == "test_func"
assert len(parts) == 2
def test_collect_topdir(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
@ -467,7 +467,7 @@ class TestSession:
# XXX migrate to collectonly? (see below)
config = testdir.parseconfig(id)
topdir = testdir.tmpdir
rcol = Session(config)
rcol = Session.from_config(config)
assert topdir == rcol.fspath
# rootid = rcol.nodeid
# root2 = rcol.perform_collect([rcol.nodeid], genitems=False)[0]
@ -813,6 +813,43 @@ class TestNodekeywords:
reprec = testdir.inline_run("-k repr")
reprec.assertoutcome(passed=1, failed=0)
def test_keyword_matching_is_case_insensitive_by_default(self, testdir):
"""Check that selection via -k EXPRESSION is case-insensitive.
Since markers are also added to the node keywords, they too can
be matched without having to think about case sensitivity.
"""
testdir.makepyfile(
"""
import pytest
def test_sPeCiFiCToPiC_1():
assert True
class TestSpecificTopic_2:
def test(self):
assert True
@pytest.mark.sPeCiFiCToPic_3
def test():
assert True
@pytest.mark.sPeCiFiCToPic_4
class Test:
def test(self):
assert True
def test_failing_5():
assert False, "This should not match"
"""
)
num_matching_tests = 4
for expression in ("specifictopic", "SPECIFICTOPIC", "SpecificTopic"):
reprec = testdir.inline_run("-k " + expression)
reprec.assertoutcome(passed=num_matching_tests, failed=0)
COLLECTION_ERROR_PY_FILES = dict(
test_01_failure="""

View File

@ -7,11 +7,11 @@ import pytest
from _pytest.compat import importlib_metadata
from _pytest.config import _iter_rewritable_modules
from _pytest.config import Config
from _pytest.config import ExitCode
from _pytest.config.exceptions import UsageError
from _pytest.config.findpaths import determine_setup
from _pytest.config.findpaths import get_common_ancestor
from _pytest.config.findpaths import getcfg
from _pytest.main import ExitCode
from _pytest.pathlib import Path
@ -659,6 +659,13 @@ def test_disable_plugin_autoload(testdir, monkeypatch, parse_args, should_load):
class PseudoPlugin:
x = 42
attrs_used = []
def __getattr__(self, name):
assert name == "__loader__"
self.attrs_used.append(name)
return object()
def distributions():
return (Distribution(),)
@ -668,6 +675,10 @@ def test_disable_plugin_autoload(testdir, monkeypatch, parse_args, should_load):
config = testdir.parseconfig(*parse_args)
has_loaded = config.pluginmanager.get_plugin("mytestplugin") is not None
assert has_loaded == should_load
if should_load:
assert PseudoPlugin.attrs_used == ["__loader__"]
else:
assert PseudoPlugin.attrs_used == []
def test_plugin_loading_order(testdir):
@ -676,7 +687,7 @@ def test_plugin_loading_order(testdir):
"""
def test_terminal_plugin(request):
import myplugin
assert myplugin.terminal_plugin == [True, True]
assert myplugin.terminal_plugin == [False, True]
""",
**{
"myplugin": """
@ -1123,7 +1134,7 @@ class TestOverrideIniArgs:
% (testdir.request.config._parser.optparser.prog,)
]
)
assert result.ret == _pytest.main.ExitCode.USAGE_ERROR
assert result.ret == _pytest.config.ExitCode.USAGE_ERROR
def test_override_ini_does_not_contain_paths(self, _config_for_test, _sys_snapshot):
"""Check that -o no longer swallows all options after it (#3103)"""

View File

@ -4,8 +4,8 @@ import textwrap
import py
import pytest
from _pytest.config import ExitCode
from _pytest.config import PytestPluginManager
from _pytest.main import ExitCode
from _pytest.pathlib import Path

View File

@ -22,7 +22,7 @@ def pdb_env(request):
if "testdir" in request.fixturenames:
# Disable pdb++ with inner tests.
testdir = request.getfixturevalue("testdir")
testdir._env_run_update["PDBPP_HIJACK_PDB"] = "0"
testdir.monkeypatch.setenv("PDBPP_HIJACK_PDB", "0")
def runpdb_and_get_report(testdir, source):
@ -193,7 +193,7 @@ class TestPDB:
)
child = testdir.spawn_pytest("-rs --pdb %s" % p1)
child.expect("Skipping also with pdb active")
child.expect_exact("= \x1b[33m\x1b[1m1 skipped\x1b[0m\x1b[33m in")
child.expect_exact("= 1 skipped in")
child.sendeof()
self.flush(child)
@ -221,7 +221,7 @@ class TestPDB:
child.sendeof()
rest = child.read().decode("utf8")
assert "Exit: Quitting debugger" in rest
assert "= \x1b[31m\x1b[1m1 failed\x1b[0m\x1b[31m in" in rest
assert "= 1 failed in" in rest
assert "def test_1" not in rest
assert "get rekt" not in rest
self.flush(child)
@ -506,7 +506,7 @@ class TestPDB:
rest = child.read().decode("utf8")
assert "! _pytest.outcomes.Exit: Quitting debugger !" in rest
assert "= \x1b[33mno tests ran\x1b[0m\x1b[33m in" in rest
assert "= no tests ran in" in rest
assert "BdbQuit" not in rest
assert "UNEXPECTED EXCEPTION" not in rest
@ -725,7 +725,7 @@ class TestPDB:
assert "> PDB continue (IO-capturing resumed) >" in rest
else:
assert "> PDB continue >" in rest
assert "= \x1b[32m\x1b[1m1 passed\x1b[0m\x1b[32m in" in rest
assert "= 1 passed in" in rest
def test_pdb_used_outside_test(self, testdir):
p1 = testdir.makepyfile(
@ -1041,7 +1041,7 @@ class TestTraceOption:
child.sendline("q")
child.expect_exact("Exit: Quitting debugger")
rest = child.read().decode("utf8")
assert "= \x1b[32m\x1b[1m2 passed\x1b[0m\x1b[32m in" in rest
assert "= 2 passed in" in rest
assert "reading from stdin while output" not in rest
# Only printed once - not on stderr.
assert "Exit: Quitting debugger" not in child.before.decode("utf8")
@ -1086,7 +1086,7 @@ class TestTraceOption:
child.sendline("c")
child.expect_exact("> PDB continue (IO-capturing resumed) >")
rest = child.read().decode("utf8")
assert "= \x1b[32m\x1b[1m6 passed\x1b[0m\x1b[32m in" in rest
assert "= 6 passed in" in rest
assert "reading from stdin while output" not in rest
# Only printed once - not on stderr.
assert "Exit: Quitting debugger" not in child.before.decode("utf8")
@ -1197,7 +1197,7 @@ def test_pdb_suspends_fixture_capturing(testdir, fixture):
TestPDB.flush(child)
assert child.exitstatus == 0
assert "= \x1b[32m\x1b[1m1 passed\x1b[0m\x1b[32m in" in rest
assert "= 1 passed in" in rest
assert "> PDB continue (IO-capturing resumed for fixture %s) >" % (fixture) in rest

View File

@ -1,5 +1,5 @@
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
def test_version(testdir, pytestconfig):

View File

@ -445,7 +445,9 @@ class TestPython:
fnode.assert_attr(message="internal error")
assert "Division" in fnode.toxml()
@pytest.mark.parametrize("junit_logging", ["no", "system-out", "system-err"])
@pytest.mark.parametrize(
"junit_logging", ["no", "log", "system-out", "system-err", "out-err", "all"]
)
@parametrize_families
def test_failure_function(
self, testdir, junit_logging, run_and_parse, xunit_family
@ -467,35 +469,48 @@ class TestPython:
result, dom = run_and_parse(
"-o", "junit_logging=%s" % junit_logging, family=xunit_family
)
assert result.ret
assert result.ret, "Expected ret > 0"
node = dom.find_first_by_tag("testsuite")
node.assert_attr(failures=1, tests=1)
tnode = node.find_first_by_tag("testcase")
tnode.assert_attr(classname="test_failure_function", name="test_fail")
fnode = tnode.find_first_by_tag("failure")
fnode.assert_attr(message="ValueError: 42")
assert "ValueError" in fnode.toxml()
systemout = fnode.next_sibling
assert systemout.tag == "system-out"
systemout_xml = systemout.toxml()
assert "hello-stdout" in systemout_xml
assert "info msg" not in systemout_xml
systemerr = systemout.next_sibling
assert systemerr.tag == "system-err"
systemerr_xml = systemerr.toxml()
assert "hello-stderr" in systemerr_xml
assert "info msg" not in systemerr_xml
assert "ValueError" in fnode.toxml(), "ValueError not included"
if junit_logging == "system-out":
assert "warning msg" in systemout_xml
assert "warning msg" not in systemerr_xml
elif junit_logging == "system-err":
assert "warning msg" not in systemout_xml
assert "warning msg" in systemerr_xml
else:
assert junit_logging == "no"
assert "warning msg" not in systemout_xml
assert "warning msg" not in systemerr_xml
if junit_logging in ["log", "all"]:
logdata = tnode.find_first_by_tag("system-out")
log_xml = logdata.toxml()
assert logdata.tag == "system-out", "Expected tag: system-out"
assert "info msg" not in log_xml, "Unexpected INFO message"
assert "warning msg" in log_xml, "Missing WARN message"
if junit_logging in ["system-out", "out-err", "all"]:
systemout = tnode.find_first_by_tag("system-out")
systemout_xml = systemout.toxml()
assert systemout.tag == "system-out", "Expected tag: system-out"
assert "info msg" not in systemout_xml, "INFO message found in system-out"
assert (
"hello-stdout" in systemout_xml
), "Missing 'hello-stdout' in system-out"
if junit_logging in ["system-err", "out-err", "all"]:
systemerr = tnode.find_first_by_tag("system-err")
systemerr_xml = systemerr.toxml()
assert systemerr.tag == "system-err", "Expected tag: system-err"
assert "info msg" not in systemerr_xml, "INFO message found in system-err"
assert (
"hello-stderr" in systemerr_xml
), "Missing 'hello-stderr' in system-err"
assert (
"warning msg" not in systemerr_xml
), "WARN message found in system-err"
if junit_logging == "no":
assert not tnode.find_by_tag("log"), "Found unexpected content: log"
assert not tnode.find_by_tag(
"system-out"
), "Found unexpected content: system-out"
assert not tnode.find_by_tag(
"system-err"
), "Found unexpected content: system-err"
@parametrize_families
def test_failure_verbose_message(self, testdir, run_and_parse, xunit_family):
@ -523,7 +538,9 @@ class TestPython:
assert 0
"""
)
result, dom = run_and_parse(family=xunit_family)
result, dom = run_and_parse(
"-o", "junit_logging=system-out", family=xunit_family
)
assert result.ret
node = dom.find_first_by_tag("testsuite")
node.assert_attr(failures=3, tests=3)
@ -536,7 +553,7 @@ class TestPython:
)
sysout = tnode.find_first_by_tag("system-out")
text = sysout.text
assert text == "%s\n" % char
assert "%s\n" % char in text
@parametrize_families
def test_junit_prefixing(self, testdir, run_and_parse, xunit_family):
@ -597,7 +614,10 @@ class TestPython:
fnode = tnode.find_first_by_tag("skipped")
fnode.assert_attr(type="pytest.xfail", message="42")
def test_xfail_captures_output_once(self, testdir, run_and_parse):
@pytest.mark.parametrize(
"junit_logging", ["no", "log", "system-out", "system-err", "out-err", "all"]
)
def test_xfail_captures_output_once(self, testdir, junit_logging, run_and_parse):
testdir.makepyfile(
"""
import sys
@ -610,11 +630,18 @@ class TestPython:
assert 0
"""
)
result, dom = run_and_parse()
result, dom = run_and_parse("-o", "junit_logging=%s" % junit_logging)
node = dom.find_first_by_tag("testsuite")
tnode = node.find_first_by_tag("testcase")
assert len(tnode.find_by_tag("system-err")) == 1
assert len(tnode.find_by_tag("system-out")) == 1
if junit_logging in ["system-err", "out-err", "all"]:
assert len(tnode.find_by_tag("system-err")) == 1
else:
assert len(tnode.find_by_tag("system-err")) == 0
if junit_logging in ["log", "system-out", "out-err", "all"]:
assert len(tnode.find_by_tag("system-out")) == 1
else:
assert len(tnode.find_by_tag("system-out")) == 0
@parametrize_families
def test_xfailure_xpass(self, testdir, run_and_parse, xunit_family):
@ -696,20 +723,29 @@ class TestPython:
result, dom = run_and_parse()
print(dom.toxml())
def test_pass_captures_stdout(self, testdir, run_and_parse):
@pytest.mark.parametrize("junit_logging", ["no", "system-out"])
def test_pass_captures_stdout(self, testdir, run_and_parse, junit_logging):
testdir.makepyfile(
"""
def test_pass():
print('hello-stdout')
"""
)
result, dom = run_and_parse()
result, dom = run_and_parse("-o", "junit_logging=%s" % junit_logging)
node = dom.find_first_by_tag("testsuite")
pnode = node.find_first_by_tag("testcase")
systemout = pnode.find_first_by_tag("system-out")
assert "hello-stdout" in systemout.toxml()
if junit_logging == "no":
assert not node.find_by_tag(
"system-out"
), "system-out should not be generated"
if junit_logging == "system-out":
systemout = pnode.find_first_by_tag("system-out")
assert (
"hello-stdout" in systemout.toxml()
), "'hello-stdout' should be in system-out"
def test_pass_captures_stderr(self, testdir, run_and_parse):
@pytest.mark.parametrize("junit_logging", ["no", "system-err"])
def test_pass_captures_stderr(self, testdir, run_and_parse, junit_logging):
testdir.makepyfile(
"""
import sys
@ -717,13 +753,21 @@ class TestPython:
sys.stderr.write('hello-stderr')
"""
)
result, dom = run_and_parse()
result, dom = run_and_parse("-o", "junit_logging=%s" % junit_logging)
node = dom.find_first_by_tag("testsuite")
pnode = node.find_first_by_tag("testcase")
systemout = pnode.find_first_by_tag("system-err")
assert "hello-stderr" in systemout.toxml()
if junit_logging == "no":
assert not node.find_by_tag(
"system-err"
), "system-err should not be generated"
if junit_logging == "system-err":
systemerr = pnode.find_first_by_tag("system-err")
assert (
"hello-stderr" in systemerr.toxml()
), "'hello-stderr' should be in system-err"
def test_setup_error_captures_stdout(self, testdir, run_and_parse):
@pytest.mark.parametrize("junit_logging", ["no", "system-out"])
def test_setup_error_captures_stdout(self, testdir, run_and_parse, junit_logging):
testdir.makepyfile(
"""
import pytest
@ -736,13 +780,21 @@ class TestPython:
pass
"""
)
result, dom = run_and_parse()
result, dom = run_and_parse("-o", "junit_logging=%s" % junit_logging)
node = dom.find_first_by_tag("testsuite")
pnode = node.find_first_by_tag("testcase")
systemout = pnode.find_first_by_tag("system-out")
assert "hello-stdout" in systemout.toxml()
if junit_logging == "no":
assert not node.find_by_tag(
"system-out"
), "system-out should not be generated"
if junit_logging == "system-out":
systemout = pnode.find_first_by_tag("system-out")
assert (
"hello-stdout" in systemout.toxml()
), "'hello-stdout' should be in system-out"
def test_setup_error_captures_stderr(self, testdir, run_and_parse):
@pytest.mark.parametrize("junit_logging", ["no", "system-err"])
def test_setup_error_captures_stderr(self, testdir, run_and_parse, junit_logging):
testdir.makepyfile(
"""
import sys
@ -756,13 +808,21 @@ class TestPython:
pass
"""
)
result, dom = run_and_parse()
result, dom = run_and_parse("-o", "junit_logging=%s" % junit_logging)
node = dom.find_first_by_tag("testsuite")
pnode = node.find_first_by_tag("testcase")
systemout = pnode.find_first_by_tag("system-err")
assert "hello-stderr" in systemout.toxml()
if junit_logging == "no":
assert not node.find_by_tag(
"system-err"
), "system-err should not be generated"
if junit_logging == "system-err":
systemerr = pnode.find_first_by_tag("system-err")
assert (
"hello-stderr" in systemerr.toxml()
), "'hello-stderr' should be in system-err"
def test_avoid_double_stdout(self, testdir, run_and_parse):
@pytest.mark.parametrize("junit_logging", ["no", "system-out"])
def test_avoid_double_stdout(self, testdir, run_and_parse, junit_logging):
testdir.makepyfile(
"""
import sys
@ -777,12 +837,17 @@ class TestPython:
sys.stdout.write('hello-stdout call')
"""
)
result, dom = run_and_parse()
result, dom = run_and_parse("-o", "junit_logging=%s" % junit_logging)
node = dom.find_first_by_tag("testsuite")
pnode = node.find_first_by_tag("testcase")
systemout = pnode.find_first_by_tag("system-out")
assert "hello-stdout call" in systemout.toxml()
assert "hello-stdout teardown" in systemout.toxml()
if junit_logging == "no":
assert not node.find_by_tag(
"system-out"
), "system-out should not be generated"
if junit_logging == "system-out":
systemout = pnode.find_first_by_tag("system-out")
assert "hello-stdout call" in systemout.toxml()
assert "hello-stdout teardown" in systemout.toxml()
def test_mangle_test_address():
@ -850,7 +915,8 @@ class TestNonPython:
assert "custom item runtest failed" in fnode.toxml()
def test_nullbyte(testdir):
@pytest.mark.parametrize("junit_logging", ["no", "system-out"])
def test_nullbyte(testdir, junit_logging):
# A null byte can not occur in XML (see section 2.2 of the spec)
testdir.makepyfile(
"""
@ -862,13 +928,17 @@ def test_nullbyte(testdir):
"""
)
xmlf = testdir.tmpdir.join("junit.xml")
testdir.runpytest("--junitxml=%s" % xmlf)
testdir.runpytest("--junitxml=%s" % xmlf, "-o", "junit_logging=%s" % junit_logging)
text = xmlf.read()
assert "\x00" not in text
assert "#x00" in text
if junit_logging == "system-out":
assert "#x00" in text
if junit_logging == "no":
assert "#x00" not in text
def test_nullbyte_replace(testdir):
@pytest.mark.parametrize("junit_logging", ["no", "system-out"])
def test_nullbyte_replace(testdir, junit_logging):
# Check if the null byte gets replaced
testdir.makepyfile(
"""
@ -880,9 +950,12 @@ def test_nullbyte_replace(testdir):
"""
)
xmlf = testdir.tmpdir.join("junit.xml")
testdir.runpytest("--junitxml=%s" % xmlf)
testdir.runpytest("--junitxml=%s" % xmlf, "-o", "junit_logging=%s" % junit_logging)
text = xmlf.read()
assert "#x0" in text
if junit_logging == "system-out":
assert "#x0" in text
if junit_logging == "no":
assert "#x0" not in text
def test_invalid_xml_escape():

View File

@ -1,7 +1,7 @@
from typing import Optional
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
from _pytest.pytester import Testdir

View File

@ -3,7 +3,7 @@ import sys
from unittest import mock
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
from _pytest.mark import EMPTY_PARAMETERSET_OPTION
from _pytest.mark import MarkGenerator as Mark
from _pytest.nodes import Collector
@ -962,7 +962,11 @@ def test_mark_expressions_no_smear(testdir):
def test_addmarker_order():
node = Node("Test", config=mock.Mock(), session=mock.Mock(), nodeid="Test")
session = mock.Mock()
session.own_markers = []
session.parent = None
session.nodeid = ""
node = Node.from_parent(session, name="Test")
node.add_marker("foo")
node.add_marker("bar")
node.add_marker("baz", append=False)

View File

@ -22,6 +22,13 @@ def test_ischildnode(baseid, nodeid, expected):
assert result is expected
def test_node_from_parent_disallowed_arguments():
with pytest.raises(TypeError, match="session is"):
nodes.Node.from_parent(None, session=None)
with pytest.raises(TypeError, match="config is"):
nodes.Node.from_parent(None, config=None)
def test_std_warn_not_pytestwarning(testdir):
items = testdir.getitems(
"""

View File

@ -12,22 +12,22 @@ from _pytest.config.exceptions import UsageError
@pytest.fixture
def parser():
def parser() -> parseopt.Parser:
return parseopt.Parser()
class TestParser:
def test_no_help_by_default(self):
def test_no_help_by_default(self) -> None:
parser = parseopt.Parser(usage="xyz")
pytest.raises(UsageError, lambda: parser.parse(["-h"]))
def test_custom_prog(self, parser):
def test_custom_prog(self, parser: parseopt.Parser) -> None:
"""Custom prog can be set for `argparse.ArgumentParser`."""
assert parser._getparser().prog == os.path.basename(sys.argv[0])
parser.prog = "custom-prog"
assert parser._getparser().prog == "custom-prog"
def test_argument(self):
def test_argument(self) -> None:
with pytest.raises(parseopt.ArgumentError):
# need a short or long option
argument = parseopt.Argument()
@ -45,7 +45,7 @@ class TestParser:
"Argument(_short_opts: ['-t'], _long_opts: ['--test'], dest: 'abc')"
)
def test_argument_type(self):
def test_argument_type(self) -> None:
argument = parseopt.Argument("-t", dest="abc", type=int)
assert argument.type is int
argument = parseopt.Argument("-t", dest="abc", type=str)
@ -60,7 +60,7 @@ class TestParser:
)
assert argument.type is str
def test_argument_processopt(self):
def test_argument_processopt(self) -> None:
argument = parseopt.Argument("-t", type=int)
argument.default = 42
argument.dest = "abc"
@ -68,19 +68,19 @@ class TestParser:
assert res["default"] == 42
assert res["dest"] == "abc"
def test_group_add_and_get(self, parser):
def test_group_add_and_get(self, parser: parseopt.Parser) -> None:
group = parser.getgroup("hello", description="desc")
assert group.name == "hello"
assert group.description == "desc"
def test_getgroup_simple(self, parser):
def test_getgroup_simple(self, parser: parseopt.Parser) -> None:
group = parser.getgroup("hello", description="desc")
assert group.name == "hello"
assert group.description == "desc"
group2 = parser.getgroup("hello")
assert group2 is group
def test_group_ordering(self, parser):
def test_group_ordering(self, parser: parseopt.Parser) -> None:
parser.getgroup("1")
parser.getgroup("2")
parser.getgroup("3", after="1")
@ -88,20 +88,20 @@ class TestParser:
groups_names = [x.name for x in groups]
assert groups_names == list("132")
def test_group_addoption(self):
def test_group_addoption(self) -> None:
group = parseopt.OptionGroup("hello")
group.addoption("--option1", action="store_true")
assert len(group.options) == 1
assert isinstance(group.options[0], parseopt.Argument)
def test_group_addoption_conflict(self):
def test_group_addoption_conflict(self) -> None:
group = parseopt.OptionGroup("hello again")
group.addoption("--option1", "--option-1", action="store_true")
with pytest.raises(ValueError) as err:
group.addoption("--option1", "--option-one", action="store_true")
assert str({"--option1"}) in str(err.value)
def test_group_shortopt_lowercase(self, parser):
def test_group_shortopt_lowercase(self, parser: parseopt.Parser) -> None:
group = parser.getgroup("hello")
with pytest.raises(ValueError):
group.addoption("-x", action="store_true")
@ -109,30 +109,30 @@ class TestParser:
group._addoption("-x", action="store_true")
assert len(group.options) == 1
def test_parser_addoption(self, parser):
def test_parser_addoption(self, parser: parseopt.Parser) -> None:
group = parser.getgroup("custom options")
assert len(group.options) == 0
group.addoption("--option1", action="store_true")
assert len(group.options) == 1
def test_parse(self, parser):
def test_parse(self, parser: parseopt.Parser) -> None:
parser.addoption("--hello", dest="hello", action="store")
args = parser.parse(["--hello", "world"])
assert args.hello == "world"
assert not getattr(args, parseopt.FILE_OR_DIR)
def test_parse2(self, parser):
def test_parse2(self, parser: parseopt.Parser) -> None:
args = parser.parse([py.path.local()])
assert getattr(args, parseopt.FILE_OR_DIR)[0] == py.path.local()
def test_parse_known_args(self, parser):
def test_parse_known_args(self, parser: parseopt.Parser) -> None:
parser.parse_known_args([py.path.local()])
parser.addoption("--hello", action="store_true")
ns = parser.parse_known_args(["x", "--y", "--hello", "this"])
assert ns.hello
assert ns.file_or_dir == ["x"]
def test_parse_known_and_unknown_args(self, parser):
def test_parse_known_and_unknown_args(self, parser: parseopt.Parser) -> None:
parser.addoption("--hello", action="store_true")
ns, unknown = parser.parse_known_and_unknown_args(
["x", "--y", "--hello", "this"]
@ -141,7 +141,7 @@ class TestParser:
assert ns.file_or_dir == ["x"]
assert unknown == ["--y", "this"]
def test_parse_will_set_default(self, parser):
def test_parse_will_set_default(self, parser: parseopt.Parser) -> None:
parser.addoption("--hello", dest="hello", default="x", action="store")
option = parser.parse([])
assert option.hello == "x"
@ -149,25 +149,22 @@ class TestParser:
parser.parse_setoption([], option)
assert option.hello == "x"
def test_parse_setoption(self, parser):
def test_parse_setoption(self, parser: parseopt.Parser) -> None:
parser.addoption("--hello", dest="hello", action="store")
parser.addoption("--world", dest="world", default=42)
class A:
pass
option = A()
option = argparse.Namespace()
args = parser.parse_setoption(["--hello", "world"], option)
assert option.hello == "world"
assert option.world == 42
assert not args
def test_parse_special_destination(self, parser):
def test_parse_special_destination(self, parser: parseopt.Parser) -> None:
parser.addoption("--ultimate-answer", type=int)
args = parser.parse(["--ultimate-answer", "42"])
assert args.ultimate_answer == 42
def test_parse_split_positional_arguments(self, parser):
def test_parse_split_positional_arguments(self, parser: parseopt.Parser) -> None:
parser.addoption("-R", action="store_true")
parser.addoption("-S", action="store_false")
args = parser.parse(["-R", "4", "2", "-S"])
@ -181,7 +178,7 @@ class TestParser:
assert args.R is True
assert args.S is False
def test_parse_defaultgetter(self):
def test_parse_defaultgetter(self) -> None:
def defaultget(option):
if not hasattr(option, "type"):
return
@ -199,17 +196,17 @@ class TestParser:
assert option.this == 42
assert option.no is False
def test_drop_short_helper(self):
def test_drop_short_helper(self) -> None:
parser = argparse.ArgumentParser(
formatter_class=parseopt.DropShorterLongHelpFormatter, allow_abbrev=False
)
parser.add_argument(
"-t", "--twoword", "--duo", "--two-word", "--two", help="foo"
).map_long_option = {"two": "two-word"}
)
# throws error on --deux only!
parser.add_argument(
"-d", "--deuxmots", "--deux-mots", action="store_true", help="foo"
).map_long_option = {"deux": "deux-mots"}
)
parser.add_argument("-s", action="store_true", help="single short")
parser.add_argument("--abc", "-a", action="store_true", help="bar")
parser.add_argument("--klm", "-k", "--kl-m", action="store_true", help="bar")
@ -221,7 +218,7 @@ class TestParser:
)
parser.add_argument(
"-x", "--exit-on-first", "--exitfirst", action="store_true", help="spam"
).map_long_option = {"exitfirst": "exit-on-first"}
)
parser.add_argument("files_and_dirs", nargs="*")
args = parser.parse_args(["-k", "--duo", "hallo", "--exitfirst"])
assert args.twoword == "hallo"
@ -236,32 +233,32 @@ class TestParser:
args = parser.parse_args(["file", "dir"])
assert "|".join(args.files_and_dirs) == "file|dir"
def test_drop_short_0(self, parser):
def test_drop_short_0(self, parser: parseopt.Parser) -> None:
parser.addoption("--funcarg", "--func-arg", action="store_true")
parser.addoption("--abc-def", "--abc-def", action="store_true")
parser.addoption("--klm-hij", action="store_true")
with pytest.raises(UsageError):
parser.parse(["--funcarg", "--k"])
def test_drop_short_2(self, parser):
def test_drop_short_2(self, parser: parseopt.Parser) -> None:
parser.addoption("--func-arg", "--doit", action="store_true")
args = parser.parse(["--doit"])
assert args.func_arg is True
def test_drop_short_3(self, parser):
def test_drop_short_3(self, parser: parseopt.Parser) -> None:
parser.addoption("--func-arg", "--funcarg", "--doit", action="store_true")
args = parser.parse(["abcd"])
assert args.func_arg is False
assert args.file_or_dir == ["abcd"]
def test_drop_short_help0(self, parser):
def test_drop_short_help0(self, parser: parseopt.Parser) -> None:
parser.addoption("--func-args", "--doit", help="foo", action="store_true")
parser.parse([])
help = parser.optparser.format_help()
assert "--func-args, --doit foo" in help
# testing would be more helpful with all help generated
def test_drop_short_help1(self, parser):
def test_drop_short_help1(self, parser: parseopt.Parser) -> None:
group = parser.getgroup("general")
group.addoption("--doit", "--func-args", action="store_true", help="foo")
group._addoption(
@ -275,7 +272,7 @@ class TestParser:
help = parser.optparser.format_help()
assert "-doit, --func-args foo" in help
def test_multiple_metavar_help(self, parser):
def test_multiple_metavar_help(self, parser: parseopt.Parser) -> None:
"""
Help text for options with a metavar tuple should display help
in the form "--preferences=value1 value2 value3" (#2004).
@ -290,7 +287,7 @@ class TestParser:
assert "--preferences=value1 value2 value3" in help
def test_argcomplete(testdir, monkeypatch):
def test_argcomplete(testdir, monkeypatch) -> None:
if not shutil.which("bash"):
pytest.skip("bash not available")
script = str(testdir.tmpdir.join("test_argcomplete"))

View File

@ -3,9 +3,9 @@ import sys
import types
import pytest
from _pytest.config import ExitCode
from _pytest.config import PytestPluginManager
from _pytest.config.exceptions import UsageError
from _pytest.main import ExitCode
from _pytest.main import Session
@ -122,7 +122,7 @@ class TestPytestPluginInteractions:
def test_hook_proxy(self, testdir):
"""Test the gethookproxy function(#2016)"""
config = testdir.parseconfig()
session = Session(config)
session = Session.from_config(config)
testdir.makepyfile(**{"tests/conftest.py": "", "tests/subdir/conftest.py": ""})
conftest1 = testdir.tmpdir.join("tests/conftest.py")

View File

@ -8,8 +8,8 @@ import py.path
import _pytest.pytester as pytester
import pytest
from _pytest.config import ExitCode
from _pytest.config import PytestPluginManager
from _pytest.main import ExitCode
from _pytest.outcomes import Failed
from _pytest.pytester import CwdSnapshot
from _pytest.pytester import HookRecorder
@ -458,17 +458,26 @@ def test_testdir_run_timeout_expires(testdir) -> None:
def test_linematcher_with_nonlist() -> None:
"""Test LineMatcher with regard to passing in a set (accidentally)."""
lm = LineMatcher([])
from _pytest._code.source import Source
with pytest.raises(AssertionError):
lm.fnmatch_lines(set())
with pytest.raises(AssertionError):
lm.fnmatch_lines({})
lm = LineMatcher([])
with pytest.raises(TypeError, match="invalid type for lines2: set"):
lm.fnmatch_lines(set()) # type: ignore[arg-type] # noqa: F821
with pytest.raises(TypeError, match="invalid type for lines2: dict"):
lm.fnmatch_lines({}) # type: ignore[arg-type] # noqa: F821
with pytest.raises(TypeError, match="invalid type for lines2: set"):
lm.re_match_lines(set()) # type: ignore[arg-type] # noqa: F821
with pytest.raises(TypeError, match="invalid type for lines2: dict"):
lm.re_match_lines({}) # type: ignore[arg-type] # noqa: F821
with pytest.raises(TypeError, match="invalid type for lines2: Source"):
lm.fnmatch_lines(Source()) # type: ignore[arg-type] # noqa: F821
lm.fnmatch_lines([])
lm.fnmatch_lines(())
assert lm._getlines({}) == {}
assert lm._getlines(set()) == set()
lm.fnmatch_lines("")
assert lm._getlines({}) == {} # type: ignore[arg-type,comparison-overlap] # noqa: F821
assert lm._getlines(set()) == set() # type: ignore[arg-type,comparison-overlap] # noqa: F821
assert lm._getlines(Source()) == []
assert lm._getlines(Source("pass\npass")) == ["pass", "pass"]
def test_linematcher_match_failure() -> None:
@ -499,8 +508,28 @@ def test_linematcher_match_failure() -> None:
]
def test_linematcher_consecutive():
lm = LineMatcher(["1", "", "2"])
with pytest.raises(pytest.fail.Exception) as excinfo:
lm.fnmatch_lines(["1", "2"], consecutive=True)
assert str(excinfo.value).splitlines() == [
"exact match: '1'",
"no consecutive match: '2'",
" with: ''",
]
lm.re_match_lines(["1", r"\d?", "2"], consecutive=True)
with pytest.raises(pytest.fail.Exception) as excinfo:
lm.re_match_lines(["1", r"\d", "2"], consecutive=True)
assert str(excinfo.value).splitlines() == [
"exact match: '1'",
r"no consecutive match: '\\d'",
" with: ''",
]
@pytest.mark.parametrize("function", ["no_fnmatch_line", "no_re_match_line"])
def test_no_matching(function) -> None:
def test_linematcher_no_matching(function) -> None:
if function == "no_fnmatch_line":
good_pattern = "*.py OK*"
bad_pattern = "*X.py OK*"
@ -548,7 +577,7 @@ def test_no_matching(function) -> None:
func(bad_pattern) # bad pattern does not match any line: passes
def test_no_matching_after_match() -> None:
def test_linematcher_no_matching_after_match() -> None:
lm = LineMatcher(["1", "2", "3"])
lm.fnmatch_lines(["1", "3"])
with pytest.raises(Failed) as e:
@ -556,17 +585,15 @@ def test_no_matching_after_match() -> None:
assert str(e.value).splitlines() == ["fnmatch: '*'", " with: '1'"]
def test_pytester_addopts(request, monkeypatch) -> None:
def test_pytester_addopts_before_testdir(request, monkeypatch) -> None:
orig = os.environ.get("PYTEST_ADDOPTS", None)
monkeypatch.setenv("PYTEST_ADDOPTS", "--orig-unused")
testdir = request.getfixturevalue("testdir")
try:
assert "PYTEST_ADDOPTS" not in os.environ
finally:
testdir.finalize()
assert os.environ["PYTEST_ADDOPTS"] == "--orig-unused"
assert "PYTEST_ADDOPTS" not in os.environ
testdir.finalize()
assert os.environ.get("PYTEST_ADDOPTS") == "--orig-unused"
monkeypatch.undo()
assert os.environ.get("PYTEST_ADDOPTS") == orig
def test_run_stdin(testdir) -> None:
@ -646,14 +673,10 @@ def test_popen_default_stdin_stderr_and_stdin_None(testdir) -> None:
def test_spawn_uses_tmphome(testdir) -> None:
import os
tmphome = str(testdir.tmpdir)
assert os.environ.get("HOME") == tmphome
# HOME is only set to the tmpdir during (sub)runs, not at this point.
assert os.environ.get("HOME") != tmphome
testdir._env_run_update["CUSTOMENV"] = "42"
testdir.monkeypatch.setenv("CUSTOMENV", "42")
p1 = testdir.makepyfile(
"""

View File

@ -10,10 +10,10 @@ import py
import _pytest._code
import pytest
from _pytest import main
from _pytest import outcomes
from _pytest import reports
from _pytest import runner
from _pytest.config import ExitCode
from _pytest.outcomes import Exit
from _pytest.outcomes import Failed
from _pytest.outcomes import OutcomeException
@ -681,7 +681,7 @@ def test_pytest_fail_notrace_non_ascii(testdir) -> None:
def test_pytest_no_tests_collected_exit_status(testdir) -> None:
result = testdir.runpytest()
result.stdout.fnmatch_lines(["*collected 0 items*"])
assert result.ret == main.ExitCode.NO_TESTS_COLLECTED
assert result.ret == ExitCode.NO_TESTS_COLLECTED
testdir.makepyfile(
test_foo="""
@ -692,12 +692,12 @@ def test_pytest_no_tests_collected_exit_status(testdir) -> None:
result = testdir.runpytest()
result.stdout.fnmatch_lines(["*collected 1 item*"])
result.stdout.fnmatch_lines(["*1 passed*"])
assert result.ret == main.ExitCode.OK
assert result.ret == ExitCode.OK
result = testdir.runpytest("-k nonmatch")
result.stdout.fnmatch_lines(["*collected 1 item*"])
result.stdout.fnmatch_lines(["*1 deselected*"])
assert result.ret == main.ExitCode.NO_TESTS_COLLECTED
assert result.ret == ExitCode.NO_TESTS_COLLECTED
def test_exception_printing_skip() -> None:

View File

@ -1,5 +1,5 @@
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
class SessionTests:

View File

@ -1,5 +1,5 @@
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
@pytest.fixture(params=["--setup-only", "--setup-plan", "--setup-show"], scope="module")

View File

@ -3,7 +3,6 @@ terminal reporting of the full testing process.
"""
import collections
import os
import re
import sys
import textwrap
from io import StringIO
@ -12,7 +11,7 @@ import pluggy
import py
import pytest
from _pytest.main import ExitCode
from _pytest.config import ExitCode
from _pytest.pytester import Testdir
from _pytest.reports import BaseReport
from _pytest.terminal import _folded_skips
@ -24,14 +23,6 @@ from _pytest.terminal import TerminalReporter
DistInfo = collections.namedtuple("DistInfo", ["project_name", "version"])
COLORS = {
"red": "\x1b[31m",
"green": "\x1b[32m",
"yellow": "\x1b[33m",
"bold": "\x1b[1m",
"reset": "\x1b[0m",
}
RE_COLORS = {k: re.escape(v) for k, v in COLORS.items()}
TRANS_FNMATCH = str.maketrans({"[": "[[]", "]": "[]]"})
@ -163,6 +154,8 @@ class TestTerminal:
"test2.py": "def test_2(): pass",
}
)
# Explicitly test colored output.
testdir.monkeypatch.setenv("PY_COLORS", "1")
child = testdir.spawn_pytest("-v test1.py test2.py")
child.expect(r"collecting \.\.\.")
@ -678,6 +671,26 @@ class TestTerminalFunctional:
]
)
def test_showlocals_short(self, testdir):
p1 = testdir.makepyfile(
"""
def test_showlocals_short():
x = 3
y = "xxxx"
assert 0
"""
)
result = testdir.runpytest(p1, "-l", "--tb=short")
result.stdout.fnmatch_lines(
[
"test_showlocals_short.py:*",
" assert 0",
"E assert 0",
" x = 3",
" y = 'xxxx'",
]
)
@pytest.fixture
def verbose_testfile(self, testdir):
return testdir.makepyfile(
@ -763,13 +776,42 @@ class TestTerminalFunctional:
result = testdir.runpytest(*params)
result.stdout.fnmatch_lines(["collected 3 items", "hello from hook: 3 items"])
def test_summary_f_alias(self, testdir):
"""Test that 'f' and 'F' report chars are aliases and don't show up twice in the summary (#6334)"""
testdir.makepyfile(
"""
def test():
assert False
"""
)
result = testdir.runpytest("-rfF")
expected = "FAILED test_summary_f_alias.py::test - assert False"
result.stdout.fnmatch_lines([expected])
assert result.stdout.lines.count(expected) == 1
def test_summary_s_alias(self, testdir):
"""Test that 's' and 'S' report chars are aliases and don't show up twice in the summary"""
testdir.makepyfile(
"""
import pytest
@pytest.mark.skip
def test():
pass
"""
)
result = testdir.runpytest("-rsS")
expected = "SKIPPED [1] test_summary_s_alias.py:3: unconditional skip"
result.stdout.fnmatch_lines([expected])
assert result.stdout.lines.count(expected) == 1
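
Both tests above boil down to the same user-facing rule: the upper- and lower-case spellings of a report character select the same summary section. A throwaway sketch of an equivalent invocation (the module name is hypothetical):

    import pytest

    if __name__ == "__main__":
        # "-rfFsS" mixes both spellings; per the tests above, each FAILED/SKIPPED
        # line should still appear only once in the short test summary.
        raise SystemExit(pytest.main(["-rfFsS", "test_example.py"]))
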
def test_fail_extra_reporting(testdir, monkeypatch):
monkeypatch.setenv("COLUMNS", "80")
testdir.makepyfile("def test_this(): assert 0, 'this_failed' * 100")
result = testdir.runpytest()
result = testdir.runpytest("-rN")
result.stdout.no_fnmatch_line("*short test summary*")
result = testdir.runpytest("-rf")
result = testdir.runpytest()
result.stdout.fnmatch_lines(
[
"*test summary*",
@ -838,7 +880,7 @@ def test_pass_output_reporting(testdir):
)
def test_color_yes(testdir):
def test_color_yes(testdir, color_mapping):
p1 = testdir.makepyfile(
"""
def fail():
@ -849,16 +891,10 @@ def test_color_yes(testdir):
"""
)
result = testdir.runpytest("--color=yes", str(p1))
if sys.version_info < (3, 6):
# py36 required for ordered markup
output = result.stdout.str()
assert "test session starts" in output
assert "\x1b[1m" in output
return
color_mapping.requires_ordered_markup(result)
result.stdout.fnmatch_lines(
[
line.format(**COLORS).replace("[", "[[]")
for line in [
color_mapping.format_for_fnmatch(
[
"{bold}=*= test session starts =*={reset}",
"collected 1 item",
"",
@ -867,26 +903,25 @@ def test_color_yes(testdir):
"=*= FAILURES =*=",
"{red}{bold}_*_ test_this _*_{reset}",
"",
"{bold} def test_this():{reset}",
"{bold}> fail(){reset}",
" {kw}def{hl-reset} {function}test_this{hl-reset}():",
"> fail()",
"",
"{bold}{red}test_color_yes.py{reset}:5: ",
"_ _ * _ _*",
"",
"{bold} def fail():{reset}",
"{bold}> assert 0{reset}",
" {kw}def{hl-reset} {function}fail{hl-reset}():",
"> {kw}assert{hl-reset} {number}0{hl-reset}",
"{bold}{red}E assert 0{reset}",
"",
"{bold}{red}test_color_yes.py{reset}:2: AssertionError",
"{red}=*= {red}{bold}1 failed{reset}{red} in *s{reset}{red} =*={reset}",
]
]
)
)
result = testdir.runpytest("--color=yes", "--tb=short", str(p1))
result.stdout.fnmatch_lines(
[
line.format(**COLORS).replace("[", "[[]")
for line in [
color_mapping.format_for_fnmatch(
[
"{bold}=*= test session starts =*={reset}",
"collected 1 item",
"",
@ -895,13 +930,13 @@ def test_color_yes(testdir):
"=*= FAILURES =*=",
"{red}{bold}_*_ test_this _*_{reset}",
"{bold}{red}test_color_yes.py{reset}:5: in test_this",
"{bold} fail(){reset}",
" fail()",
"{bold}{red}test_color_yes.py{reset}:2: in fail",
"{bold} assert 0{reset}",
" {kw}assert{hl-reset} {number}0{hl-reset}",
"{bold}{red}E assert 0{reset}",
"{red}=*= {red}{bold}1 failed{reset}{red} in *s{reset}{red} =*={reset}",
]
]
)
)
@ -938,37 +973,62 @@ def test_color_yes_collection_on_non_atty(testdir, verbose):
def test_getreportopt():
from _pytest.terminal import _REPORTCHARS_DEFAULT
class Config:
class Option:
reportchars = ""
disable_warnings = True
reportchars = _REPORTCHARS_DEFAULT
disable_warnings = False
option = Option()
config = Config()
assert _REPORTCHARS_DEFAULT == "fE"
# Default.
assert getreportopt(config) == "wfE"
config.option.reportchars = "sf"
assert getreportopt(config) == "sf"
assert getreportopt(config) == "wsf"
config.option.reportchars = "sfxw"
assert getreportopt(config) == "sfxw"
config.option.reportchars = "a"
assert getreportopt(config) == "wsxXEf"
config.option.reportchars = "N"
assert getreportopt(config) == "w"
config.option.reportchars = "NwfE"
assert getreportopt(config) == "wfE"
config.option.reportchars = "NfENx"
assert getreportopt(config) == "wx"
# Now with --disable-warnings.
config.option.disable_warnings = True
config.option.reportchars = "a"
assert getreportopt(config) == "sxXEf"
config.option.reportchars = "sfx"
assert getreportopt(config) == "sfx"
config.option.reportchars = "sfxw"
assert getreportopt(config) == "sfx"
# Now with --disable-warnings.
config.option.disable_warnings = False
config.option.reportchars = "a"
assert getreportopt(config) == "sxXwEf" # NOTE: "w" included!
config.option.reportchars = "sfx"
assert getreportopt(config) == "sfxw"
config.option.reportchars = "sfxw"
assert getreportopt(config) == "sfxw"
config.option.reportchars = "a"
assert getreportopt(config) == "sxXwEf" # NOTE: "w" included!
assert getreportopt(config) == "sxXEf"
config.option.reportchars = "A"
assert getreportopt(config) == "PpsxXwEf"
assert getreportopt(config) == "PpsxXEf"
config.option.reportchars = "AN"
assert getreportopt(config) == ""
config.option.reportchars = "NwfE"
assert getreportopt(config) == "fE"
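
The stub Config above suggests that getreportopt only reads option.reportchars and option.disable_warnings, so a quick interactive check can be done with plain namespaces. A sketch under that assumption, with expected values taken from the new assertions above:

    from types import SimpleNamespace

    from _pytest.terminal import getreportopt

    config = SimpleNamespace(
        option=SimpleNamespace(reportchars="a", disable_warnings=False)
    )
    print(getreportopt(config))  # "wsxXEf" -- "a" expands, "w" prepended for warnings

    config.option.disable_warnings = True
    print(getreportopt(config))  # "sxXEf" -- same expansion, no warnings summary
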
def test_terminalreporter_reportopt_addopts(testdir):
@ -1085,7 +1145,7 @@ class TestGenericReporting:
)
for tbopt in ["long", "short", "no"]:
print("testing --tb=%s..." % tbopt)
result = testdir.runpytest("--tb=%s" % tbopt)
result = testdir.runpytest("-rN", "--tb=%s" % tbopt)
s = result.stdout.str()
if tbopt == "long":
assert "print(6*7)" in s
@ -1436,10 +1496,10 @@ def test_terminal_summary_warnings_header_once(testdir):
),
("yellow", [("1 xpassed", {"bold": True, "yellow": True})], {"xpassed": (1,)}),
(
"green",
"yellow",
[
("1 passed", {"bold": True, "green": True}),
("1 xpassed", {"bold": False, "yellow": True}),
("1 passed", {"bold": False, "green": True}),
("1 xpassed", {"bold": True, "yellow": True}),
],
{"xpassed": (1,), "passed": (1,)},
),
@ -1597,7 +1657,7 @@ class TestProgressOutputStyle:
]
)
def test_colored_progress(self, testdir, monkeypatch):
def test_colored_progress(self, testdir, monkeypatch, color_mapping):
monkeypatch.setenv("PY_COLORS", "1")
testdir.makepyfile(
test_bar="""
@ -1621,14 +1681,13 @@ class TestProgressOutputStyle:
)
result = testdir.runpytest()
result.stdout.re_match_lines(
[
line.format(**RE_COLORS)
for line in [
color_mapping.format_for_rematch(
[
r"test_bar.py ({green}\.{reset}){{10}}{green} \s+ \[ 50%\]{reset}",
r"test_foo.py ({green}\.{reset}){{5}}{yellow} \s+ \[ 75%\]{reset}",
r"test_foobar.py ({red}F{reset}){{5}}{red} \s+ \[100%\]{reset}",
]
]
)
)
def test_count(self, many_tests_files, testdir):
@ -1762,12 +1821,16 @@ class TestProgressWithTeardown:
testdir.makepyfile(
"""
def test_foo(fail_teardown):
assert False
assert 0
"""
)
output = testdir.runpytest()
output = testdir.runpytest("-rfE")
output.stdout.re_match_lines(
[r"test_teardown_with_test_also_failing.py FE\s+\[100%\]"]
[
r"test_teardown_with_test_also_failing.py FE\s+\[100%\]",
"FAILED test_teardown_with_test_also_failing.py::test_foo - assert 0",
"ERROR test_teardown_with_test_also_failing.py::test_foo - assert False",
]
)
def test_teardown_many(self, testdir, many_files):
@ -1776,12 +1839,13 @@ class TestProgressWithTeardown:
[r"test_bar.py (\.E){5}\s+\[ 25%\]", r"test_foo.py (\.E){15}\s+\[100%\]"]
)
def test_teardown_many_verbose(self, testdir: Testdir, many_files) -> None:
def test_teardown_many_verbose(
self, testdir: Testdir, many_files, color_mapping
) -> None:
result = testdir.runpytest("-v")
result.stdout.fnmatch_lines(
[
line.translate(TRANS_FNMATCH)
for line in [
color_mapping.format_for_fnmatch(
[
"test_bar.py::test_bar[0] PASSED * [ 5%]",
"test_bar.py::test_bar[0] ERROR * [ 5%]",
"test_bar.py::test_bar[4] PASSED * [ 25%]",
@ -1789,7 +1853,7 @@ class TestProgressWithTeardown:
"test_foo.py::test_foo[14] ERROR * [100%]",
"=* 20 passed, 20 errors in *",
]
]
)
)
def test_xdist_normal(self, many_files, testdir, monkeypatch):
@ -1941,3 +2005,46 @@ def test_via_exec(testdir: Testdir) -> None:
result.stdout.fnmatch_lines(
["test_via_exec.py::test_via_exec <- <string> PASSED*", "*= 1 passed in *"]
)
class TestCodeHighlight:
def test_code_highlight_simple(self, testdir: Testdir, color_mapping) -> None:
testdir.makepyfile(
"""
def test_foo():
assert 1 == 10
"""
)
result = testdir.runpytest("--color=yes")
color_mapping.requires_ordered_markup(result)
result.stdout.fnmatch_lines(
color_mapping.format_for_fnmatch(
[
" {kw}def{hl-reset} {function}test_foo{hl-reset}():",
"> {kw}assert{hl-reset} {number}1{hl-reset} == {number}10{hl-reset}",
"{bold}{red}E assert 1 == 10{reset}",
]
)
)
def test_code_highlight_continuation(self, testdir: Testdir, color_mapping) -> None:
testdir.makepyfile(
"""
def test_foo():
print('''
'''); assert 0
"""
)
result = testdir.runpytest("--color=yes")
color_mapping.requires_ordered_markup(result)
result.stdout.fnmatch_lines(
color_mapping.format_for_fnmatch(
[
" {kw}def{hl-reset} {function}test_foo{hl-reset}():",
" {print}print{hl-reset}({str}'''{hl-reset}{str}{hl-reset}",
"> {str} {hl-reset}{str}'''{hl-reset}); {kw}assert{hl-reset} {number}0{hl-reset}",
"{bold}{red}E assert 0{reset}",
]
)
)
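
The color_mapping fixture used throughout these tests is added to the test suite's conftest elsewhere in this PR and is not shown in this hunk. Below is a rough stand-in, reconstructed from the module-level COLORS/RE_COLORS helpers that this diff removes; in the real fixture the highlight keys such as {kw} and {hl-reset} map to additional ANSI escapes, and a requires_ordered_markup helper is also provided (both omitted here):

    import re

    import pytest

    _COLORS = {
        "red": "\x1b[31m",
        "green": "\x1b[32m",
        "yellow": "\x1b[33m",
        "bold": "\x1b[1m",
        "reset": "\x1b[0m",
    }
    _RE_COLORS = {k: re.escape(v) for k, v in _COLORS.items()}


    @pytest.fixture
    def color_mapping():
        class ColorMapping:
            @staticmethod
            def format_for_fnmatch(lines):
                # Same trick as the removed helper: fill in the escapes, then
                # make "[" literal so fnmatch does not treat it as a char class.
                return [line.format(**_COLORS).replace("[", "[[]") for line in lines]

            @staticmethod
            def format_for_rematch(lines):
                return [line.format(**_RE_COLORS) for line in lines]

        return ColorMapping
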

View File

@ -74,19 +74,38 @@ class TestConfigTmpdir:
assert not mytemp.join("hello").check()
def test_basetemp(testdir):
testdata = [
("mypath", True),
("/mypath1", False),
("./mypath1", True),
("../mypath3", False),
("../../mypath4", False),
("mypath5/..", False),
("mypath6/../mypath6", True),
("mypath7/../mypath7/..", False),
]
@pytest.mark.parametrize("basename, is_ok", testdata)
def test_mktemp(testdir, basename, is_ok):
mytemp = testdir.tmpdir.mkdir("mytemp")
p = testdir.makepyfile(
"""
import pytest
def test_1(tmpdir_factory):
tmpdir_factory.mktemp('hello', numbered=False)
"""
def test_abs_path(tmpdir_factory):
tmpdir_factory.mktemp('{}', numbered=False)
""".format(
basename
)
)
result = testdir.runpytest(p, "--basetemp=%s" % mytemp)
assert result.ret == 0
print(mytemp)
assert mytemp.join("hello").check()
if is_ok:
assert result.ret == 0
assert mytemp.join(basename).check()
else:
assert result.ret == 1
result.stdout.fnmatch_lines("*ValueError*")
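
A more direct sketch of the new mktemp restriction, without going through a subprocess run (the test name is made up; ValueError is the failure mode the parametrized cases above assert on via the captured output):

    import pytest


    def test_mktemp_rejects_escaping_names(tmpdir_factory):
        # Plain relative names are still fine.
        created = tmpdir_factory.mktemp("mypath", numbered=False)
        assert created.check(dir=1)
        # Absolute or non-normalized names now fail instead of escaping basetemp.
        for bad in ("/mypath1", "../mypath3", "mypath5/.."):
            with pytest.raises(ValueError):
                tmpdir_factory.mktemp(bad, numbered=False)
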
def test_tmpdir_always_is_realpath(testdir):

Some files were not shown because too many files have changed in this diff.