Merge features into master after 5.3 (#6236)

commit 7e5ad31428
.pre-commit-config.yaml

@@ -1,7 +1,7 @@
 exclude: doc/en/example/py2py3/test_py2.py
 repos:
 -   repo: https://github.com/psf/black
-    rev: 19.3b0
+    rev: 19.10b0
     hooks:
     -   id: black
         args: [--safe, --quiet]

@@ -42,7 +42,7 @@ repos:
     hooks:
     -   id: rst-backticks
 -   repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v0.720
+    rev: v0.740
     hooks:
     -   id: mypy
         files: ^(src/|testing/)
.travis.yml (27 lines changed)

@@ -23,10 +23,13 @@ install:
 jobs:
   include:
     # OSX tests - first (in test stage), since they are the slower ones.
+    # Coverage for:
+    # - osx
+    # - verbose=1
     - os: osx
       osx_image: xcode10.1
       language: generic
-      env: TOXENV=py37-xdist PYTEST_COVERAGE=1
+      env: TOXENV=py37-xdist PYTEST_COVERAGE=1 PYTEST_ADDOPTS=-v
       before_install:
         - which python3
         - python3 -V

@@ -36,8 +39,13 @@ jobs:

     # Full run of latest supported version, without xdist.
+    # Coverage for:
+    # - pytester's LsofFdLeakChecker
+    # - TestArgComplete (linux only)
+    # - numpy
+    # - old attrs
     # - verbose=0
     # - test_sys_breakpoint_interception (via pexpect).
-    - env: TOXENV=py37-pexpect PYTEST_COVERAGE=1
+    - env: TOXENV=py37-lsof-numpy-oldattrs-pexpect-twisted PYTEST_COVERAGE=1 PYTEST_ADDOPTS=
       python: '3.7'

     # Coverage tracking is slow with pypy, skip it.

@@ -47,14 +55,6 @@ jobs:
     - env: TOXENV=py35-xdist
       python: '3.5'

-    # Coverage for:
-    # - pytester's LsofFdLeakChecker
-    # - TestArgComplete (linux only)
-    # - numpy
-    # - old attrs
-    # Empty PYTEST_ADDOPTS to run this non-verbose.
-    - env: TOXENV=py37-lsof-oldattrs-numpy-twisted-xdist PYTEST_COVERAGE=1 PYTEST_ADDOPTS=

     # Specialized factors for py37.
     - env: TOXENV=py37-pluggymaster-xdist
     - env: TOXENV=py37-freeze

@@ -125,3 +125,10 @@ notifications:
     skip_join: true
   email:
     - pytest-commit@python.org
+
+branches:
+  only:
+    - master
+    - features
+    - 4.6-maintenance
+    - /^\d+(\.\d+)+$/
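The `/^\d+(\.\d+)+$/` entry in the `branches` list above is a regular expression: Travis CI treats slash-delimited entries as patterns, here matching version-like release branches. A quick sketch of what the pattern body accepts (Python's `re` uses the same syntax, minus the surrounding slashes):

```python
import re

# Matches version-like branch names such as "5.3" or "4.6.1";
# named branches (master, features, 4.6-maintenance) are listed explicitly instead.
RELEASE_BRANCH = re.compile(r"^\d+(\.\d+)+$")

for name, expected in [
    ("5.3", True),        # major.minor release branch
    ("4.6.1", True),      # patch release branch
    ("features", False),  # no digits/dots shape
    ("5", False),         # a bare major version does not match (needs at least one dot)
]:
    assert bool(RELEASE_BRANCH.match(name)) is expected
```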
AUTHORS (3 lines changed)

@@ -104,6 +104,7 @@ George Kussumoto
 Georgy Dyuldin
 Graham Horler
 Greg Price
+Gregory Lee
 Grig Gheorghiu
 Grigorii Eremeev (budulianin)
 Guido Wesdorp

@@ -162,6 +163,7 @@ Manuel Krebber
 Marc Schlaich
 Marcelo Duarte Trevisani
 Marcin Bachry
+Marco Gorelli
 Mark Abramowitz
 Markus Unterwaditzer
 Martijn Faassen

@@ -179,6 +181,7 @@ Michael Aquilina
 Michael Birtwell
 Michael Droettboom
 Michael Goerz
+Michael Krebs
 Michael Seifert
 Michal Wajszczuk
 Mihai Capotă
CHANGELOG.rst (202 lines changed)

@@ -18,6 +18,208 @@ with advance notice in the **Deprecations** section of releases.

.. towncrier release notes start

pytest 5.3.0 (2019-11-19)
=========================

Deprecations
------------

- `#6179 <https://github.com/pytest-dev/pytest/issues/6179>`_: The default value of the ``junit_family`` option will change to ``xunit2`` in pytest 6.0, given
  that this is the version supported by default in modern tools that manipulate this type of file.

  In order to smooth the transition, pytest will issue a warning in case the ``--junitxml`` option
  is given in the command line but ``junit_family`` is not explicitly configured in ``pytest.ini``.

  For more information, `see the docs <https://docs.pytest.org/en/latest/deprecations.html#junit-family-default-value-change-to-xunit2>`__.


Features
--------

- `#4488 <https://github.com/pytest-dev/pytest/issues/4488>`_: The pytest team has created the `pytest-reportlog <https://github.com/pytest-dev/pytest-reportlog>`__
  plugin, which provides a new ``--report-log=FILE`` option that writes *report logs* into a file as the test session executes.

  Each line of the report log contains a self-contained JSON object corresponding to a testing event,
  such as a collection or a test result report. The file is guaranteed to be flushed after writing
  each line, so systems can read and process events in real-time.

  The plugin is meant to replace the ``--resultlog`` option, which is deprecated and meant to be removed
  in a future release. If you use ``--resultlog``, please try out ``pytest-reportlog`` and
  provide feedback.


- `#4730 <https://github.com/pytest-dev/pytest/issues/4730>`_: When ``sys.pycache_prefix`` (Python 3.8+) is set, it will be used by pytest to cache test files changed by the assertion rewriting mechanism.

  This makes it easier to benefit from cached ``.pyc`` files even on file systems without permissions.


- `#5515 <https://github.com/pytest-dev/pytest/issues/5515>`_: Allow selective auto-indentation of multiline log messages.

  Adds the command line option ``--log-auto-indent``, the config option
  ``log_auto_indent``, and support for per-entry configuration of
  indentation behavior on calls to ``logging.log()``.

  Alters the default for auto-indentation from ``on`` to ``off``. This
  restores the older behavior that existed prior to v4.6.0. This
  reversion to earlier behavior was done because it is better to
  activate new features that may lead to broken tests explicitly
  rather than implicitly.


- `#5914 <https://github.com/pytest-dev/pytest/issues/5914>`_: ``pytester`` learned two new functions, `no_fnmatch_line <https://docs.pytest.org/en/latest/reference.html#_pytest.pytester.LineMatcher.no_fnmatch_line>`_ and
  `no_re_match_line <https://docs.pytest.org/en/latest/reference.html#_pytest.pytester.LineMatcher.no_re_match_line>`_.

  The functions are used to ensure the captured text *does not* match the given
  pattern.

  The previous idiom was to use ``re.match``:

  .. code-block:: python

      assert re.match(pat, result.stdout.str()) is None

  Or the ``in`` operator:

  .. code-block:: python

      assert text not in result.stdout.str()

  But the new functions produce better output on failure.


- `#6057 <https://github.com/pytest-dev/pytest/issues/6057>`_: Added tolerances to complex values when printing ``pytest.approx``.

  For example, ``repr(pytest.approx(3+4j))`` returns ``(3+4j) ± 5e-06 ∠ ±180°``. This is polar notation indicating a circle around the expected value, with a radius of 5e-06. For ``approx`` comparisons to return ``True``, the actual value should fall within this circle.


- `#6061 <https://github.com/pytest-dev/pytest/issues/6061>`_: Added the pluginmanager as an argument to ``pytest_addoption``
  so that hooks can be invoked when setting up command line options. This is
  useful for having one plugin communicate things to another plugin,
  such as default values or which set of command line options to add.


Improvements
------------

- `#5061 <https://github.com/pytest-dev/pytest/issues/5061>`_: Use multiple colors with terminal summary statistics.

- `#5630 <https://github.com/pytest-dev/pytest/issues/5630>`_: Quitting from debuggers is now properly handled in ``doctest`` items.

- `#5924 <https://github.com/pytest-dev/pytest/issues/5924>`_: Improved verbose diff output with sequences.

  Before:

  .. code-block::

      E   AssertionError: assert ['version', '...version_info'] == ['version', '...version', ...]
      E     Right contains 3 more items, first extra item: ' '
      E     Full diff:
      E     - ['version', 'version_info', 'sys.version', 'sys.version_info']
      E     + ['version',
      E     +  'version_info',
      E     +  'sys.version',
      E     +  'sys.version_info',
      E     +  ' ',
      E     +  'sys.version',
      E     +  'sys.version_info']

  After:

  .. code-block::

      E   AssertionError: assert ['version', '...version_info'] == ['version', '...version', ...]
      E     Right contains 3 more items, first extra item: ' '
      E     Full diff:
      E       [
      E        'version',
      E        'version_info',
      E        'sys.version',
      E        'sys.version_info',
      E     +  ' ',
      E     +  'sys.version',
      E     +  'sys.version_info',
      E       ]

- `#5936 <https://github.com/pytest-dev/pytest/issues/5936>`_: Display untruncated assertion message with ``-vv``.

- `#5990 <https://github.com/pytest-dev/pytest/issues/5990>`_: Fixed plurality mismatch in test summary (e.g. display "1 error" instead of "1 errors").

- `#6008 <https://github.com/pytest-dev/pytest/issues/6008>`_: ``Config.InvocationParams.args`` is now always a ``tuple`` to better convey that it should be
  immutable and avoid accidental modifications.

- `#6023 <https://github.com/pytest-dev/pytest/issues/6023>`_: ``pytest.main`` now returns a ``pytest.ExitCode`` instance, except when custom exit codes are used (in which case it still returns an ``int``).

- `#6026 <https://github.com/pytest-dev/pytest/issues/6026>`_: Align prefixes in output of pytester's ``LineMatcher``.

- `#6059 <https://github.com/pytest-dev/pytest/issues/6059>`_: Collection errors are reported as errors (and not failures like before) in the terminal's short test summary.

- `#6069 <https://github.com/pytest-dev/pytest/issues/6069>`_: ``pytester.spawn`` no longer unconditionally skips/xfails tests on FreeBSD.

- `#6097 <https://github.com/pytest-dev/pytest/issues/6097>`_: The "[XXX%]" indicator in the test summary is now colored according to the final (new) multi-colored line's main color.

- `#6116 <https://github.com/pytest-dev/pytest/issues/6116>`_: Added ``--co`` as a synonym to ``--collect-only``.

- `#6148 <https://github.com/pytest-dev/pytest/issues/6148>`_: ``atomicwrites`` is now only used on Windows, fixing a performance regression with assertion rewriting on Unix.

- `#6152 <https://github.com/pytest-dev/pytest/issues/6152>`_: Parametrization now uses the ``__name__`` attribute of any object for the id, if present. Previously it would only use ``__name__`` for functions and classes.

- `#6176 <https://github.com/pytest-dev/pytest/issues/6176>`_: Improved failure reporting with pytester's ``Hookrecorder.assertoutcome``.

- `#6181 <https://github.com/pytest-dev/pytest/issues/6181>`_: The reason for a stopped session, e.g. with ``--maxfail`` / ``-x``, now gets reported in the test summary.

- `#6206 <https://github.com/pytest-dev/pytest/issues/6206>`_: Improved ``cache.set`` robustness and performance.


Bug Fixes
---------

- `#2049 <https://github.com/pytest-dev/pytest/issues/2049>`_: Fixed ``--setup-plan`` showing inaccurate information about fixture lifetimes.

- `#2548 <https://github.com/pytest-dev/pytest/issues/2548>`_: Fixed line offset mismatch of skipped tests in terminal summary.

- `#6039 <https://github.com/pytest-dev/pytest/issues/6039>`_: The ``PytestDoctestRunner`` is now properly invalidated when unconfiguring the doctest plugin.

  This is important when used with ``pytester``'s ``runpytest_inprocess``.

- `#6047 <https://github.com/pytest-dev/pytest/issues/6047>`_: BaseExceptions are now handled in ``saferepr``, which includes ``pytest.fail.Exception`` etc.

- `#6074 <https://github.com/pytest-dev/pytest/issues/6074>`_: pytester: fixed order of arguments in ``rm_rf`` warning when cleaning up temporary directories, and do not emit warnings for errors with ``os.open``.

- `#6189 <https://github.com/pytest-dev/pytest/issues/6189>`_: Fixed result of ``getmodpath`` method.


Trivial/Internal Changes
------------------------

- `#4901 <https://github.com/pytest-dev/pytest/issues/4901>`_: ``RunResult`` from ``pytester`` now displays the mnemonic of the ``ret`` attribute when it is a
  valid ``pytest.ExitCode`` value.


pytest 5.2.4 (2019-11-15)
=========================
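The polar tolerance described in the #6057 entry can be sketched without pytest itself: an approximate complex comparison passes when the actual value lies inside a circle of radius ``rel * abs(expected)`` around the expected value. The function below is an illustration of that geometry, not pytest's internal implementation:

```python
def approx_complex(actual: complex, expected: complex, rel: float = 1e-6) -> bool:
    """Illustrative check: is `actual` inside the tolerance circle
    of radius rel * abs(expected) centred on `expected`?"""
    radius = rel * abs(expected)
    return abs(actual - expected) <= radius

expected = 3 + 4j  # abs(expected) == 5.0, so the default radius is 5e-06
assert approx_complex(3 + 4j + 2e-6, expected)       # inside the circle
assert not approx_complex(3 + 4j + 1e-5, expected)   # outside the circle
```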
@@ -1 +0,0 @@
-Fix ``--setup-plan`` showing inaccurate information about fixture lifetimes.

@@ -1 +0,0 @@
-Fix incorrect result of ``getmodpath`` method.

@@ -0,0 +1 @@
+Improve check for misspelling of ``pytest.mark.parametrize``.

@@ -0,0 +1 @@
+``repr`` of ``ExceptionInfo`` objects has been improved to honor the ``__repr__`` method of the underlying exception.
@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2

+   release-5.3.0
    release-5.2.4
    release-5.2.3
    release-5.2.2

@@ -0,0 +1,45 @@
pytest-5.3.0
=======================================

The pytest team is proud to announce the 5.3.0 release!

pytest is a mature Python testing tool with more than 2000 tests
against itself, passing on many different interpreters and platforms.

This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:

    https://docs.pytest.org/en/latest/changelog.html

For complete documentation, please visit:

    https://docs.pytest.org/en/latest/

As usual, you can upgrade from PyPI via:

    pip install -U pytest

Thanks to all who contributed to this release, among them:

* AnjoMan
* Anthony Sottile
* Anton Lodder
* Bruno Oliveira
* Daniel Hahler
* Gregory Lee
* Josh Karpel
* JoshKarpel
* Joshua Storck
* Kale Kundert
* MarcoGorelli
* Michael Krebs
* NNRepos
* Ran Benita
* TH3CHARLie
* Tibor Arpas
* Zac Hatfield-Dodds
* 林玮

Happy testing,
The Pytest Development Team
@@ -19,6 +19,27 @@ Below is a complete list of all pytest features which are considered deprecated.
:class:`_pytest.warning_types.PytestWarning` or subclasses, which can be filtered using
:ref:`standard warning filters <warnings>`.

``junit_family`` default value change to "xunit2"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. deprecated:: 5.2

The default value of the ``junit_family`` option will change to ``xunit2`` in pytest 6.0, given
that this is the version supported by default in modern tools that manipulate this type of file.

In order to smooth the transition, pytest will issue a warning in case the ``--junitxml`` option
is given in the command line but ``junit_family`` is not explicitly configured in ``pytest.ini``::

    PytestDeprecationWarning: The 'junit_family' default value will change to 'xunit2' in pytest 6.0.
    Add 'junit_family=legacy' to your pytest.ini file to silence this warning and make your suite compatible.

In order to silence this warning, users just need to configure the ``junit_family`` option explicitly:

.. code-block:: ini

    [pytest]
    junit_family=legacy

``funcargnames`` alias for ``fixturenames``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -40,15 +61,15 @@ Result log (``--result-log``)

.. deprecated:: 4.0

 The ``--result-log`` option produces a stream of test reports which can be
-analysed at runtime. It uses a custom format which requires users to implement their own
-parser, but the team believes using a line-based format that can be parsed using standard
-tools would provide a suitable and better alternative.
+analysed at runtime, but it uses a custom format which requires users to implement their own
+parser.

-The current plan is to provide an alternative in the pytest 5.0 series and remove the ``--result-log``
-option in pytest 6.0 after the new implementation proves satisfactory to all users and is deemed
-stable.
+The `pytest-reportlog <https://github.com/pytest-dev/pytest-reportlog>`__ plugin provides a ``--report-log`` option, a more standard and extensible alternative, producing
+one JSON object per line, and should cover the same use cases. Please try it out and provide feedback.

-The actual alternative is still being discussed in issue `#4488 <https://github.com/pytest-dev/pytest/issues/4488>`__.
+The plan is to remove the ``--result-log`` option in pytest 6.0 if ``pytest-reportlog`` proves satisfactory
+to all users and is deemed stable. The ``pytest-reportlog`` plugin might even be merged into the core
+at some point, depending on the plans for the plugins and number of users using it.


Removed Features
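The line-based format that ``pytest-reportlog`` produces can be sketched with plain JSON: one self-contained object per line, flushed after every write so a consumer can follow the file live. The event fields below are illustrative, not the plugin's exact schema:

```python
import io
import json

def write_report_line(stream, event: dict) -> None:
    # One self-contained JSON object per line, flushed immediately
    # so readers can process events in real time.
    stream.write(json.dumps(event) + "\n")
    stream.flush()

buf = io.StringIO()
write_report_line(buf, {"event": "collection", "nodeid": "test_a.py::test_ok"})
write_report_line(buf, {"event": "test_report", "nodeid": "test_a.py::test_ok", "outcome": "passed"})

# Each line parses back independently -- no custom parser needed.
events = [json.loads(line) for line in buf.getvalue().splitlines()]
assert events[1]["outcome"] == "passed"
```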
@@ -622,7 +622,7 @@ then you will see two tests skipped and two executed tests as expected:

    test_plat.py s.s. [100%]

    ========================= short test summary info ==========================
-   SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
+   SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
    ======================= 2 passed, 2 skipped in 0.12s =======================

Note that if you specify a platform via the marker-command line option like this:
@@ -475,10 +475,11 @@ Running it results in some skips if we don't have all the python interpreters in

.. code-block:: pytest

    $ pytest -rs -q multipython.py
-   ssssssssssss......sss...... [100%]
+   ssssssssssss...ssssssssssss [100%]
    ========================= short test summary info ==========================
-   SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
-   12 passed, 15 skipped in 0.12s
+   SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.5' not found
+   SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.7' not found
+   3 passed, 24 skipped in 0.12s

Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------

@@ -546,7 +547,7 @@ If you run this with reporting for skips enabled:

    test_module.py .s [100%]

    ========================= short test summary info ==========================
-   SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:13: could not import 'opt2': No module named 'opt2'
+   SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:12: could not import 'opt2': No module named 'opt2'
    ======================= 1 passed, 1 skipped in 0.12s =======================

You'll see that we don't have an ``opt2`` module and thus the second test run
@@ -443,7 +443,7 @@ Now we can profile which test functions execute the slowest:

    ========================= slowest 3 test durations =========================
    0.30s call test_some_are_slow.py::test_funcslow2
    0.20s call test_some_are_slow.py::test_funcslow1
-   0.10s call test_some_are_slow.py::test_funcfast
+   0.11s call test_some_are_slow.py::test_funcfast
    ============================ 3 passed in 0.12s =============================

incremental testing - test steps
@@ -1192,6 +1192,29 @@ passed multiple times. The expected format is ``name=value``. For example::

     [pytest]
     junit_suite_name = my_suite

+.. confval:: log_auto_indent
+
+    Allow selective auto-indentation of multiline log messages.
+
+    Supports the command line option ``--log-auto-indent [value]``
+    and the config option ``log_auto_indent = [value]`` to set the
+    auto-indentation behavior for all logging.
+
+    ``[value]`` can be:
+
+    * True or "On" - Dynamically auto-indent multiline log messages
+    * False or "Off" or 0 - Do not auto-indent multiline log messages (the default behavior)
+    * [positive integer] - auto-indent multiline log messages by [value] spaces
+
+    .. code-block:: ini
+
+        [pytest]
+        log_auto_indent = False
+
+    Supports passing the kwarg ``extra={"auto_indent": [value]}`` to
+    calls to ``logging.log()`` to specify auto-indentation behavior for
+    a specific entry in the log. The ``extra`` kwarg overrides the value specified
+    on the command line or in the config.

 .. confval:: log_cli_date_format
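What auto-indentation does to a multiline log message can be sketched in plain Python: continuation lines are padded so they line up under the first line. This is a simplified illustration of the behavior, not pytest's actual implementation:

```python
def auto_indent(message: str, indent: int) -> str:
    """Pad every continuation line of a multiline log message
    by `indent` spaces, as a rough sketch of log_auto_indent."""
    first, *rest = message.splitlines()
    return "\n".join([first] + [" " * indent + line for line in rest])

msg = "fetching resource\nurl=https://example.com\nretries=3"
indented = auto_indent(msg, 4)
assert indented.splitlines()[1] == "    url=https://example.com"
assert indented.splitlines()[0] == "fetching resource"  # first line untouched
```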
@@ -241,7 +241,7 @@ Example:

    test_example.py:14: AssertionError
    ========================= short test summary info ==========================
-   SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
+   SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:22: skipping this test
    XFAIL test_example.py::test_xfail
      reason: xfailing this test
    XPASS test_example.py::test_xpass always xfail

@@ -296,7 +296,7 @@ More than one character can be used, so for example to only see failed and skipped tests:

    test_example.py:14: AssertionError
    ========================= short test summary info ==========================
    FAILED test_example.py::test_fail - assert 0
-   SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
+   SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:22: skipping this test
    == 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===

Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had

@@ -679,12 +679,6 @@ Creating resultlog format files
----------------------------------------------------

-.. warning::
-
-    This option is rarely used and is scheduled for removal in 5.0.
-
-    See `the deprecation docs <https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log>`__
-    for more information.

To create plain-text machine-readable result files you can issue:

.. code-block:: bash

@@ -694,6 +688,16 @@ To create plain-text machine-readable result files you can issue:

and look at the content at the ``path`` location. Such files are used e.g.
by the `PyPy-test`_ web page to show test results over several revisions.

+.. warning::
+
+    This option is rarely used and is scheduled for removal in pytest 6.0.
+
+    If you use this option, consider using the new `pytest-reportlog <https://github.com/pytest-dev/pytest-reportlog>`__ plugin instead.
+
+    See `the deprecation docs <https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log>`__
+    for more information.

.. _`PyPy-test`: http://buildbot.pypy.org/summary
@@ -41,7 +41,7 @@ Running pytest now produces this output:

    warnings.warn(UserWarning("api v1, should use functions from v2"))

    -- Docs: https://docs.pytest.org/en/latest/warnings.html
-   ====================== 1 passed, 1 warnings in 0.12s =======================
+   ======================= 1 passed, 1 warning in 0.12s =======================

The ``-W`` flag can be passed to control which warnings will be displayed or even turn
them into errors:

@@ -407,7 +407,7 @@ defines an ``__init__`` constructor, as this prevents the class from being instantiated:

    class Test:

    -- Docs: https://docs.pytest.org/en/latest/warnings.html
-   1 warnings in 0.12s
+   1 warning in 0.12s

These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.
@@ -442,7 +442,7 @@ additionally it is possible to copy examples for an example folder before running their tests:

    testdir.copy_example("test_example.py")

    -- Docs: https://docs.pytest.org/en/latest/warnings.html
-   ====================== 2 passed, 1 warnings in 0.12s =======================
+   ======================= 2 passed, 1 warning in 0.12s =======================

For more information about the result object that ``runpytest()`` returns, and
the methods that it provides please check out the :py:class:`RunResult

@@ -677,6 +677,56 @@ Example:

    print(config.hook)

+.. _`addoptionhooks`:
+
+
+Using hooks in pytest_addoption
+-------------------------------
+
+Occasionally, it is necessary to change the way in which command line options
+are defined by one plugin based on hooks in another plugin. For example,
+a plugin may expose a command line option for which another plugin needs
+to define the default value. The pluginmanager can be used to install and
+use hooks to accomplish this. The plugin would define and add the hooks
+and use pytest_addoption as follows:
+
+.. code-block:: python
+
+   # contents of hooks.py
+
+   # Use firstresult=True because we only want one plugin to define this
+   # default value
+   @hookspec(firstresult=True)
+   def pytest_config_file_default_value():
+       """ Return the default value for the config file command line option. """
+
+
+   # contents of myplugin.py
+
+
+   def pytest_addhooks(pluginmanager):
+       """ This example assumes the hooks are grouped in the 'hooks' module. """
+       from . import hooks
+
+       pluginmanager.add_hookspecs(hooks)
+
+
+   def pytest_addoption(parser, pluginmanager):
+       default_value = pluginmanager.hook.pytest_config_file_default_value()
+       parser.addoption(
+           "--config-file",
+           help="Config file to use, defaults to %(default)s",
+           default=default_value,
+       )
+
+The conftest.py that is using myplugin would simply define the hook as follows:
+
+.. code-block:: python
+
+   def pytest_config_file_default_value():
+       return "config.yaml"

Optionally using hooks from 3rd party plugins
---------------------------------------------
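The ``firstresult=True`` behaviour the example above relies on can be sketched without pluggy: call registered implementations in order and stop at the first non-``None`` result. The names below are illustrative, not pluggy's API:

```python
def call_firstresult(impls, *args):
    """Sketch of a firstresult hook: the first implementation
    returning a non-None value wins; later ones are not consulted."""
    for impl in impls:
        result = impl(*args)
        if result is not None:
            return result
    return None

def plugin_a():
    return None  # declines to answer

def plugin_b():
    return "config.yaml"

def plugin_c():
    return "ignored.yaml"  # never reached once plugin_b answered

assert call_firstresult([plugin_a, plugin_b, plugin_c]) == "config.yaml"
```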
@@ -63,6 +63,7 @@ ignore =
 formats = sdist.tgz,bdist_wheel

 [mypy]
 mypy_path = src
 ignore_missing_imports = True
 no_implicit_optional = True
 strict_equality = True
setup.py (2 lines changed)

@@ -7,7 +7,7 @@ INSTALL_REQUIRES = [
     "packaging",
     "attrs>=17.4.0",  # should match oldattrs tox env.
     "more-itertools>=4.0.0",
-    "atomicwrites>=1.0",
+    'atomicwrites>=1.0;sys_platform=="win32"',
     'pathlib2>=2.2.0;python_version<"3.6"',
     'colorama;sys_platform=="win32"',
     "pluggy>=0.12,<1.0",
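The ``sys_platform=="win32"`` suffix added above is a PEP 508 environment marker: pip installs ``atomicwrites`` only when the marker evaluates true on the target machine. A toy evaluator for this one marker shape (real resolution is done by pip via the ``packaging`` library's full marker grammar, not code like this):

```python
import sys

def requirement_applies(requirement: str) -> bool:
    """Toy check for requirements of the form name>=X;sys_platform=="win32".
    Only this single marker shape is handled by the sketch."""
    if ";" not in requirement:
        return True  # unconditional requirements always apply
    _, marker = requirement.split(";", 1)
    key, _, value = marker.partition("==")
    assert key.strip() == "sys_platform"  # the only key this sketch understands
    return sys.platform == value.strip().strip('"')

# Unconditional pins always apply; the win32-only pin applies on Windows only.
assert requirement_applies("pluggy>=0.12,<1.0")
assert requirement_applies('atomicwrites>=1.0;sys_platform=="win32"') == (sys.platform == "win32")
```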
@@ -7,13 +7,17 @@ from inspect import CO_VARKEYWORDS
 from io import StringIO
 from traceback import format_exception_only
 from types import CodeType
 from types import FrameType
 from types import TracebackType
 from typing import Any
 from typing import Callable
 from typing import Dict
 from typing import Generic
 from typing import Iterable
 from typing import List
 from typing import Optional
 from typing import Pattern
 from typing import Sequence
 from typing import Set
 from typing import Tuple
 from typing import TypeVar

@@ -27,9 +31,16 @@ import py

 import _pytest
 from _pytest._io.saferepr import safeformat
 from _pytest._io.saferepr import saferepr
 from _pytest.compat import overload

 if False:  # TYPE_CHECKING
     from typing import Type
     from typing_extensions import Literal
     from weakref import ReferenceType  # noqa: F401

     from _pytest._code import Source

     _TracebackStyle = Literal["long", "short", "no", "native"]


 class Code:

@@ -38,13 +49,12 @@ class Code:
     def __init__(self, rawcode) -> None:
         if not hasattr(rawcode, "co_filename"):
             rawcode = getrawcode(rawcode)
-        try:
-            self.filename = rawcode.co_filename
-            self.firstlineno = rawcode.co_firstlineno - 1
-            self.name = rawcode.co_name
-        except AttributeError:
-            raise TypeError("not a code object: {!r}".format(rawcode))
-        self.raw = rawcode  # type: CodeType
+        if not isinstance(rawcode, CodeType):
+            raise TypeError("not a code object: {!r}".format(rawcode))
+        self.filename = rawcode.co_filename
+        self.firstlineno = rawcode.co_firstlineno - 1
+        self.name = rawcode.co_name
+        self.raw = rawcode

     def __eq__(self, other):
         return self.raw == other.raw

@@ -72,7 +82,7 @@ class Code:
         return p

     @property
-    def fullsource(self):
+    def fullsource(self) -> Optional["Source"]:
         """ return a _pytest._code.Source object for the full source file of the code
         """
         from _pytest._code import source

@@ -80,7 +90,7 @@ class Code:
         full, _ = source.findsource(self.raw)
         return full

-    def source(self):
+    def source(self) -> "Source":
         """ return a _pytest._code.Source object for the code object's source only
         """
         # return source only for that part of code

@@ -88,7 +98,7 @@ class Code:

         return _pytest._code.Source(self.raw)

-    def getargs(self, var=False):
+    def getargs(self, var: bool = False) -> Tuple[str, ...]:
         """ return a tuple with the argument names for the code object

         if 'var' is set True also return the names of the variable and

@@ -107,7 +117,7 @@ class Frame:
     """Wrapper around a Python frame holding f_locals and f_globals
     in which expressions can be evaluated."""

-    def __init__(self, frame):
+    def __init__(self, frame: FrameType) -> None:
         self.lineno = frame.f_lineno - 1
         self.f_globals = frame.f_globals
         self.f_locals = frame.f_locals

@@ -115,7 +125,7 @@ class Frame:
         self.code = Code(frame.f_code)

     @property
-    def statement(self):
+    def statement(self) -> "Source":
         """ statement this frame is at """
         import _pytest._code

@@ -134,7 +144,7 @@ class Frame:
         f_locals.update(vars)
         return eval(code, self.f_globals, f_locals)

-    def exec_(self, code, **vars):
+    def exec_(self, code, **vars) -> None:
         """ exec 'code' in the frame

         'vars' are optional; additional local variables

@@ -143,7 +153,7 @@ class Frame:
         f_locals.update(vars)
         exec(code, self.f_globals, f_locals)

-    def repr(self, object):
+    def repr(self, object: object) -> str:
         """ return a 'safe' (non-recursive, one-line) string repr for 'object'
         """
         return saferepr(object)

@@ -151,7 +161,7 @@ class Frame:
     def is_true(self, object):
         return object

-    def getargs(self, var=False):
+    def getargs(self, var: bool = False):
         """ return a list of tuples (name, value) for all arguments

         if 'var' is set True also include the variable and keyword

@@ -169,35 +179,34 @@ class Frame:
 class TracebackEntry:
     """ a single entry in a traceback """

-    _repr_style = None
+    _repr_style = None  # type: Optional[Literal["short", "long"]]
     exprinfo = None

-    def __init__(self, rawentry, excinfo=None):
+    def __init__(self, rawentry: TracebackType, excinfo=None) -> None:
         self._excinfo = excinfo
         self._rawentry = rawentry
         self.lineno = rawentry.tb_lineno - 1

-    def set_repr_style(self, mode):
+    def set_repr_style(self, mode: "Literal['short', 'long']") -> None:
         assert mode in ("short", "long")
         self._repr_style = mode

     @property
-    def frame(self):
-        import _pytest._code
-
-        return _pytest._code.Frame(self._rawentry.tb_frame)
+    def frame(self) -> Frame:
+        return Frame(self._rawentry.tb_frame)

     @property
-    def relline(self):
+    def relline(self) -> int:
         return self.lineno - self.frame.code.firstlineno

-    def __repr__(self):
+    def __repr__(self) -> str:
         return "<TracebackEntry %s:%d>" % (self.frame.code.path, self.lineno + 1)

     @property
-    def statement(self):
+    def statement(self) -> "Source":
         """ _pytest._code.Source object for the current statement """
         source = self.frame.code.fullsource
         assert source is not None
         return source.getstatement(self.lineno)

     @property

@@ -206,14 +215,14 @@ class TracebackEntry:
         return self.frame.code.path

     @property
-    def locals(self):
+    def locals(self) -> Dict[str, Any]:
         """ locals of underlying frame """
         return self.frame.f_locals

-    def getfirstlinesource(self):
+    def getfirstlinesource(self) -> int:
         return self.frame.code.firstlineno

-    def getsource(self, astcache=None):
+    def getsource(self, astcache=None) -> Optional["Source"]:
         """ return failing source code. """
|
||||
# we use the passed in astcache to not reparse asttrees
|
||||
# within exception info printing
|
||||
|
@ -258,7 +267,7 @@ class TracebackEntry:
|
|||
return tbh(None if self._excinfo is None else self._excinfo())
|
||||
return tbh
|
||||
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
try:
|
||||
fn = str(self.path)
|
||||
except py.error.Error:
|
||||
|
@ -273,33 +282,42 @@ class TracebackEntry:
|
|||
return " File %r:%d in %s\n %s\n" % (fn, self.lineno + 1, name, line)
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
def name(self) -> str:
|
||||
""" co_name of underlying code """
|
||||
return self.frame.code.raw.co_name
|
||||
|
||||
|
||||
class Traceback(list):
|
||||
class Traceback(List[TracebackEntry]):
|
||||
""" Traceback objects encapsulate and offer higher level
|
||||
access to Traceback entries.
|
||||
"""
|
||||
|
||||
Entry = TracebackEntry
|
||||
|
||||
def __init__(self, tb, excinfo=None):
|
||||
def __init__(
|
||||
self,
|
||||
tb: Union[TracebackType, Iterable[TracebackEntry]],
|
||||
excinfo: Optional["ReferenceType[ExceptionInfo]"] = None,
|
||||
) -> None:
|
||||
""" initialize from given python traceback object and ExceptionInfo """
|
||||
self._excinfo = excinfo
|
||||
if hasattr(tb, "tb_next"):
|
||||
if isinstance(tb, TracebackType):
|
||||
|
||||
def f(cur):
|
||||
while cur is not None:
|
||||
yield self.Entry(cur, excinfo=excinfo)
|
||||
cur = cur.tb_next
|
||||
def f(cur: TracebackType) -> Iterable[TracebackEntry]:
|
||||
cur_ = cur # type: Optional[TracebackType]
|
||||
while cur_ is not None:
|
||||
yield TracebackEntry(cur_, excinfo=excinfo)
|
||||
cur_ = cur_.tb_next
|
||||
|
||||
list.__init__(self, f(tb))
|
||||
super().__init__(f(tb))
|
||||
else:
|
||||
list.__init__(self, tb)
|
||||
super().__init__(tb)
|
||||
|
||||
def cut(self, path=None, lineno=None, firstlineno=None, excludepath=None):
|
||||
def cut(
|
||||
self,
|
||||
path=None,
|
||||
lineno: Optional[int] = None,
|
||||
firstlineno: Optional[int] = None,
|
||||
excludepath=None,
|
||||
) -> "Traceback":
|
||||
""" return a Traceback instance wrapping part of this Traceback
|
||||
|
||||
by providing any combination of path, lineno and firstlineno, the
|
||||
|
@ -325,13 +343,25 @@ class Traceback(list):
|
|||
return Traceback(x._rawentry, self._excinfo)
|
||||
return self
|
||||
|
||||
def __getitem__(self, key):
|
||||
val = super().__getitem__(key)
|
||||
if isinstance(key, type(slice(0))):
|
||||
val = self.__class__(val)
|
||||
return val
|
||||
@overload
|
||||
def __getitem__(self, key: int) -> TracebackEntry:
|
||||
raise NotImplementedError()
|
||||
|
||||
def filter(self, fn=lambda x: not x.ishidden()):
|
||||
@overload # noqa: F811
|
||||
def __getitem__(self, key: slice) -> "Traceback": # noqa: F811
|
||||
raise NotImplementedError()
|
||||
|
||||
def __getitem__( # noqa: F811
|
||||
self, key: Union[int, slice]
|
||||
) -> Union[TracebackEntry, "Traceback"]:
|
||||
if isinstance(key, slice):
|
||||
return self.__class__(super().__getitem__(key))
|
||||
else:
|
||||
return super().__getitem__(key)
|
||||
|
||||
def filter(
|
||||
self, fn: Callable[[TracebackEntry], bool] = lambda x: not x.ishidden()
|
||||
) -> "Traceback":
|
||||
""" return a Traceback instance with certain items removed
|
||||
|
||||
fn is a function that gets a single argument, a TracebackEntry
|
||||
|
@ -343,7 +373,7 @@ class Traceback(list):
|
|||
"""
|
||||
return Traceback(filter(fn, self), self._excinfo)
|
||||
|
||||
def getcrashentry(self):
|
||||
def getcrashentry(self) -> TracebackEntry:
|
||||
""" return last non-hidden traceback entry that lead
|
||||
to the exception of a traceback.
|
||||
"""
|
||||
|
@ -353,7 +383,7 @@ class Traceback(list):
|
|||
return entry
|
||||
return self[-1]
|
||||
|
||||
def recursionindex(self):
|
||||
def recursionindex(self) -> Optional[int]:
|
||||
""" return the index of the frame/TracebackEntry where recursion
|
||||
originates if appropriate, None if no recursion occurred
|
||||
"""
|
||||
|
@ -449,7 +479,7 @@ class ExceptionInfo(Generic[_E]):
|
|||
assert tup[1] is not None, "no current exception"
|
||||
assert tup[2] is not None, "no current exception"
|
||||
exc_info = (tup[0], tup[1], tup[2])
|
||||
return cls.from_exc_info(exc_info)
|
||||
return cls.from_exc_info(exc_info, exprinfo)
|
||||
|
||||
@classmethod
|
||||
def for_later(cls) -> "ExceptionInfo[_E]":
|
||||
|
@ -508,7 +538,9 @@ class ExceptionInfo(Generic[_E]):
|
|||
def __repr__(self) -> str:
|
||||
if self._excinfo is None:
|
||||
return "<ExceptionInfo for raises contextmanager>"
|
||||
return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback))
|
||||
return "<{} {} tblen={}>".format(
|
||||
self.__class__.__name__, saferepr(self._excinfo[1]), len(self.traceback)
|
||||
)
|
||||
|
||||
def exconly(self, tryshort: bool = False) -> str:
|
||||
""" return the exception as a string
|
||||
|
@ -541,13 +573,13 @@ class ExceptionInfo(Generic[_E]):
|
|||
def getrepr(
|
||||
self,
|
||||
showlocals: bool = False,
|
||||
style: str = "long",
|
||||
style: "_TracebackStyle" = "long",
|
||||
abspath: bool = False,
|
||||
tbfilter: bool = True,
|
||||
funcargs: bool = False,
|
||||
truncate_locals: bool = True,
|
||||
chain: bool = True,
|
||||
):
|
||||
) -> Union["ReprExceptionInfo", "ExceptionChainRepr"]:
|
||||
"""
|
||||
Return str()able representation of this exception info.
|
||||
|
||||
|
@ -619,16 +651,16 @@ class FormattedExcinfo:
|
|||
flow_marker = ">"
|
||||
fail_marker = "E"
|
||||
|
||||
showlocals = attr.ib(default=False)
|
||||
style = attr.ib(default="long")
|
||||
abspath = attr.ib(default=True)
|
||||
tbfilter = attr.ib(default=True)
|
||||
funcargs = attr.ib(default=False)
|
||||
truncate_locals = attr.ib(default=True)
|
||||
chain = attr.ib(default=True)
|
||||
showlocals = attr.ib(type=bool, default=False)
|
||||
style = attr.ib(type="_TracebackStyle", default="long")
|
||||
abspath = attr.ib(type=bool, default=True)
|
||||
tbfilter = attr.ib(type=bool, default=True)
|
||||
funcargs = attr.ib(type=bool, default=False)
|
||||
truncate_locals = attr.ib(type=bool, default=True)
|
||||
chain = attr.ib(type=bool, default=True)
|
||||
astcache = attr.ib(default=attr.Factory(dict), init=False, repr=False)
|
||||
|
||||
def _getindent(self, source):
|
||||
def _getindent(self, source: "Source") -> int:
|
||||
# figure out indent for given source
|
||||
try:
|
||||
s = str(source.getstatement(len(source) - 1))
|
||||
|
@ -643,20 +675,27 @@ class FormattedExcinfo:
|
|||
return 0
|
||||
return 4 + (len(s) - len(s.lstrip()))
|
||||
|
||||
def _getentrysource(self, entry):
|
||||
def _getentrysource(self, entry: TracebackEntry) -> Optional["Source"]:
|
||||
source = entry.getsource(self.astcache)
|
||||
if source is not None:
|
||||
source = source.deindent()
|
||||
return source
|
||||
|
||||
def repr_args(self, entry):
|
||||
def repr_args(self, entry: TracebackEntry) -> Optional["ReprFuncArgs"]:
|
||||
if self.funcargs:
|
||||
args = []
|
||||
for argname, argvalue in entry.frame.getargs(var=True):
|
||||
args.append((argname, saferepr(argvalue)))
|
||||
return ReprFuncArgs(args)
|
||||
return None
|
||||
|
||||
def get_source(self, source, line_index=-1, excinfo=None, short=False) -> List[str]:
|
||||
def get_source(
|
||||
self,
|
||||
source: "Source",
|
||||
line_index: int = -1,
|
||||
excinfo: Optional[ExceptionInfo] = None,
|
||||
short: bool = False,
|
||||
) -> List[str]:
|
||||
""" return formatted and marked up source lines. """
|
||||
import _pytest._code
|
||||
|
||||
|
@ -680,19 +719,21 @@ class FormattedExcinfo:
|
|||
lines.extend(self.get_exconly(excinfo, indent=indent, markall=True))
|
||||
return lines
|
||||
|
||||
def get_exconly(self, excinfo, indent=4, markall=False):
|
||||
def get_exconly(
|
||||
self, excinfo: ExceptionInfo, indent: int = 4, markall: bool = False
|
||||
) -> List[str]:
|
||||
lines = []
|
||||
indent = " " * indent
|
||||
indentstr = " " * indent
|
||||
# get the real exception information out
|
||||
exlines = excinfo.exconly(tryshort=True).split("\n")
|
||||
failindent = self.fail_marker + indent[1:]
|
||||
failindent = self.fail_marker + indentstr[1:]
|
||||
for line in exlines:
|
||||
lines.append(failindent + line)
|
||||
if not markall:
|
||||
failindent = indent
|
||||
failindent = indentstr
|
||||
return lines
|
||||
|
||||
def repr_locals(self, locals):
|
||||
def repr_locals(self, locals: Dict[str, object]) -> Optional["ReprLocals"]:
|
||||
if self.showlocals:
|
||||
lines = []
|
||||
keys = [loc for loc in locals if loc[0] != "@"]
|
||||
|
@ -717,8 +758,11 @@ class FormattedExcinfo:
|
|||
# # XXX
|
||||
# pprint.pprint(value, stream=self.excinfowriter)
|
||||
return ReprLocals(lines)
|
||||
return None
|
||||
|
||||
def repr_traceback_entry(self, entry, excinfo=None):
|
||||
def repr_traceback_entry(
|
||||
self, entry: TracebackEntry, excinfo: Optional[ExceptionInfo] = None
|
||||
) -> "ReprEntry":
|
||||
import _pytest._code
|
||||
|
||||
source = self._getentrysource(entry)
|
||||
|
@ -729,9 +773,7 @@ class FormattedExcinfo:
|
|||
line_index = entry.lineno - entry.getfirstlinesource()
|
||||
|
||||
lines = [] # type: List[str]
|
||||
style = entry._repr_style
|
||||
if style is None:
|
||||
style = self.style
|
||||
style = entry._repr_style if entry._repr_style is not None else self.style
|
||||
if style in ("short", "long"):
|
||||
short = style == "short"
|
||||
reprargs = self.repr_args(entry) if not short else None
|
||||
|
@ -761,7 +803,7 @@ class FormattedExcinfo:
|
|||
path = np
|
||||
return path
|
||||
|
||||
def repr_traceback(self, excinfo):
|
||||
def repr_traceback(self, excinfo: ExceptionInfo) -> "ReprTraceback":
|
||||
traceback = excinfo.traceback
|
||||
if self.tbfilter:
|
||||
traceback = traceback.filter()
|
||||
|
@ -779,7 +821,9 @@ class FormattedExcinfo:
|
|||
entries.append(reprentry)
|
||||
return ReprTraceback(entries, extraline, style=self.style)
|
||||
|
||||
def _truncate_recursive_traceback(self, traceback):
|
||||
def _truncate_recursive_traceback(
|
||||
self, traceback: Traceback
|
||||
) -> Tuple[Traceback, Optional[str]]:
|
||||
"""
|
||||
Truncate the given recursive traceback trying to find the starting point
|
||||
of the recursion.
|
||||
|
@ -806,7 +850,9 @@ class FormattedExcinfo:
|
|||
max_frames=max_frames,
|
||||
total=len(traceback),
|
||||
) # type: Optional[str]
|
||||
traceback = traceback[:max_frames] + traceback[-max_frames:]
|
||||
# Type ignored because adding two instaces of a List subtype
|
||||
# currently incorrectly has type List instead of the subtype.
|
||||
traceback = traceback[:max_frames] + traceback[-max_frames:] # type: ignore
|
||||
else:
|
||||
if recursionindex is not None:
|
||||
extraline = "!!! Recursion detected (same locals & position)"
|
||||
|
@ -816,19 +862,19 @@ class FormattedExcinfo:
|
|||
|
||||
return traceback, extraline
|
||||
|
||||
def repr_excinfo(self, excinfo):
|
||||
|
||||
def repr_excinfo(self, excinfo: ExceptionInfo) -> "ExceptionChainRepr":
|
||||
repr_chain = (
|
||||
[]
|
||||
) # type: List[Tuple[ReprTraceback, Optional[ReprFileLocation], Optional[str]]]
|
||||
e = excinfo.value
|
||||
excinfo_ = excinfo # type: Optional[ExceptionInfo]
|
||||
descr = None
|
||||
seen = set() # type: Set[int]
|
||||
while e is not None and id(e) not in seen:
|
||||
seen.add(id(e))
|
||||
if excinfo:
|
||||
reprtraceback = self.repr_traceback(excinfo)
|
||||
reprcrash = excinfo._getreprcrash()
|
||||
if excinfo_:
|
||||
reprtraceback = self.repr_traceback(excinfo_)
|
||||
reprcrash = excinfo_._getreprcrash() # type: Optional[ReprFileLocation]
|
||||
else:
|
||||
# fallback to native repr if the exception doesn't have a traceback:
|
||||
# ExceptionInfo objects require a full traceback to work
|
||||
|
@ -840,7 +886,7 @@ class FormattedExcinfo:
|
|||
repr_chain += [(reprtraceback, reprcrash, descr)]
|
||||
if e.__cause__ is not None and self.chain:
|
||||
e = e.__cause__
|
||||
excinfo = (
|
||||
excinfo_ = (
|
||||
ExceptionInfo((type(e), e, e.__traceback__))
|
||||
if e.__traceback__
|
||||
else None
|
||||
|
@ -850,7 +896,7 @@ class FormattedExcinfo:
|
|||
e.__context__ is not None and not e.__suppress_context__ and self.chain
|
||||
):
|
||||
e = e.__context__
|
||||
excinfo = (
|
||||
excinfo_ = (
|
||||
ExceptionInfo((type(e), e, e.__traceback__))
|
||||
if e.__traceback__
|
||||
else None
|
||||
|
@ -863,7 +909,7 @@ class FormattedExcinfo:
|
|||
|
||||
|
||||
class TerminalRepr:
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
# FYI this is called from pytest-xdist's serialization of exception
|
||||
# information.
|
||||
io = StringIO()
|
||||
|
@ -871,25 +917,33 @@ class TerminalRepr:
|
|||
self.toterminal(tw)
|
||||
return io.getvalue().strip()
|
||||
|
||||
def __repr__(self):
|
||||
def __repr__(self) -> str:
|
||||
return "<{} instance at {:0x}>".format(self.__class__, id(self))
|
||||
|
||||
def toterminal(self, tw) -> None:
|
||||
raise NotImplementedError()
|
||||
|
||||
|
||||
class ExceptionRepr(TerminalRepr):
|
||||
def __init__(self) -> None:
|
||||
self.sections = [] # type: List[Tuple[str, str, str]]
|
||||
|
||||
def addsection(self, name, content, sep="-"):
|
||||
def addsection(self, name: str, content: str, sep: str = "-") -> None:
|
||||
self.sections.append((name, content, sep))
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
for name, content, sep in self.sections:
|
||||
tw.sep(sep, name)
|
||||
tw.line(content)
|
||||
|
||||
|
||||
class ExceptionChainRepr(ExceptionRepr):
|
||||
def __init__(self, chain):
|
||||
def __init__(
|
||||
self,
|
||||
chain: Sequence[
|
||||
Tuple["ReprTraceback", Optional["ReprFileLocation"], Optional[str]]
|
||||
],
|
||||
) -> None:
|
||||
super().__init__()
|
||||
self.chain = chain
|
||||
# reprcrash and reprtraceback of the outermost (the newest) exception
|
||||
|
@ -897,7 +951,7 @@ class ExceptionChainRepr(ExceptionRepr):
|
|||
self.reprtraceback = chain[-1][0]
|
||||
self.reprcrash = chain[-1][1]
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
for element in self.chain:
|
||||
element[0].toterminal(tw)
|
||||
if element[2] is not None:
|
||||
|
@ -907,12 +961,14 @@ class ExceptionChainRepr(ExceptionRepr):
|
|||
|
||||
|
||||
class ReprExceptionInfo(ExceptionRepr):
|
||||
def __init__(self, reprtraceback, reprcrash):
|
||||
def __init__(
|
||||
self, reprtraceback: "ReprTraceback", reprcrash: "ReprFileLocation"
|
||||
) -> None:
|
||||
super().__init__()
|
||||
self.reprtraceback = reprtraceback
|
||||
self.reprcrash = reprcrash
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
self.reprtraceback.toterminal(tw)
|
||||
super().toterminal(tw)
|
||||
|
||||
|
@ -920,12 +976,17 @@ class ReprExceptionInfo(ExceptionRepr):
|
|||
class ReprTraceback(TerminalRepr):
|
||||
entrysep = "_ "
|
||||
|
||||
def __init__(self, reprentries, extraline, style):
|
||||
def __init__(
|
||||
self,
|
||||
reprentries: Sequence[Union["ReprEntry", "ReprEntryNative"]],
|
||||
extraline: Optional[str],
|
||||
style: "_TracebackStyle",
|
||||
) -> None:
|
||||
self.reprentries = reprentries
|
||||
self.extraline = extraline
|
||||
self.style = style
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
# the entries might have different styles
|
||||
for i, entry in enumerate(self.reprentries):
|
||||
if entry.style == "long":
|
||||
|
@ -945,32 +1006,40 @@ class ReprTraceback(TerminalRepr):
|
|||
|
||||
|
||||
class ReprTracebackNative(ReprTraceback):
|
||||
def __init__(self, tblines):
|
||||
def __init__(self, tblines: Sequence[str]) -> None:
|
||||
self.style = "native"
|
||||
self.reprentries = [ReprEntryNative(tblines)]
|
||||
self.extraline = None
|
||||
|
||||
|
||||
class ReprEntryNative(TerminalRepr):
|
||||
style = "native"
|
||||
style = "native" # type: _TracebackStyle
|
||||
|
||||
def __init__(self, tblines):
|
||||
def __init__(self, tblines: Sequence[str]) -> None:
|
||||
self.lines = tblines
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
tw.write("".join(self.lines))
|
||||
|
||||
|
||||
class ReprEntry(TerminalRepr):
|
||||
def __init__(self, lines, reprfuncargs, reprlocals, filelocrepr, style):
|
||||
def __init__(
|
||||
self,
|
||||
lines: Sequence[str],
|
||||
reprfuncargs: Optional["ReprFuncArgs"],
|
||||
reprlocals: Optional["ReprLocals"],
|
||||
filelocrepr: Optional["ReprFileLocation"],
|
||||
style: "_TracebackStyle",
|
||||
) -> None:
|
||||
self.lines = lines
|
||||
self.reprfuncargs = reprfuncargs
|
||||
self.reprlocals = reprlocals
|
||||
self.reprfileloc = filelocrepr
|
||||
self.style = style
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
if self.style == "short":
|
||||
assert self.reprfileloc is not None
|
||||
self.reprfileloc.toterminal(tw)
|
||||
for line in self.lines:
|
||||
red = line.startswith("E ")
|
||||
|
@ -989,19 +1058,19 @@ class ReprEntry(TerminalRepr):
|
|||
tw.line("")
|
||||
self.reprfileloc.toterminal(tw)
|
||||
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
return "{}\n{}\n{}".format(
|
||||
"\n".join(self.lines), self.reprlocals, self.reprfileloc
|
||||
)
|
||||
|
||||
|
||||
class ReprFileLocation(TerminalRepr):
|
||||
def __init__(self, path, lineno, message):
|
||||
def __init__(self, path, lineno: int, message: str) -> None:
|
||||
self.path = str(path)
|
||||
self.lineno = lineno
|
||||
self.message = message
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
# filename and lineno output for each entry,
|
||||
# using an output format that most editors understand
|
||||
msg = self.message
|
||||
|
@ -1013,19 +1082,19 @@ class ReprFileLocation(TerminalRepr):
|
|||
|
||||
|
||||
class ReprLocals(TerminalRepr):
|
||||
def __init__(self, lines):
|
||||
def __init__(self, lines: Sequence[str]) -> None:
|
||||
self.lines = lines
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
for line in self.lines:
|
||||
tw.line(line)
|
||||
|
||||
|
||||
class ReprFuncArgs(TerminalRepr):
|
||||
def __init__(self, args):
|
||||
def __init__(self, args: Sequence[Tuple[str, object]]) -> None:
|
||||
self.args = args
|
||||
|
||||
def toterminal(self, tw):
|
||||
def toterminal(self, tw) -> None:
|
||||
if self.args:
|
||||
linesofar = ""
|
||||
for name, value in self.args:
|
||||
|
@ -1044,13 +1113,11 @@ class ReprFuncArgs(TerminalRepr):
|
|||
tw.line("")
|
||||
|
||||
|
||||
def getrawcode(obj, trycall=True):
|
||||
def getrawcode(obj, trycall: bool = True):
|
||||
""" return code object for given function. """
|
||||
try:
|
||||
return obj.__code__
|
||||
except AttributeError:
|
||||
obj = getattr(obj, "im_func", obj)
|
||||
obj = getattr(obj, "func_code", obj)
|
||||
obj = getattr(obj, "f_code", obj)
|
||||
obj = getattr(obj, "__code__", obj)
|
||||
if trycall and not hasattr(obj, "co_firstlineno"):
|
||||
|
@ -1074,7 +1141,7 @@ _PYTEST_DIR = py.path.local(_pytest.__file__).dirpath()
|
|||
_PY_DIR = py.path.local(py.__file__).dirpath()
|
||||
|
||||
|
||||
def filter_traceback(entry):
|
||||
def filter_traceback(entry: TracebackEntry) -> bool:
|
||||
"""Return True if a TracebackEntry instance should be removed from tracebacks:
|
||||
* dynamically generated code (no code to show up for it);
|
||||
* internal traceback from pytest or its internal libraries, py and pluggy.
|
||||
|
|
|
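The diff above replaces `Traceback.__getitem__`'s runtime `isinstance` branching with `typing.overload` stubs, so that indexing is typed as `TracebackEntry` while slicing is typed as `Traceback`. A minimal, self-contained sketch of the same pattern (`IntList` is an illustrative stand-in, not part of pytest):

```python
from typing import List, Union, overload


class IntList(List[int]):
    """A list subclass where slicing returns the subclass, as in Traceback."""

    @overload
    def __getitem__(self, key: int) -> int:
        ...

    @overload
    def __getitem__(self, key: slice) -> "IntList":
        ...

    def __getitem__(self, key: Union[int, slice]) -> Union[int, "IntList"]:
        # Slices are re-wrapped in the subclass; plain indexing defers to list.
        if isinstance(key, slice):
            return self.__class__(super().__getitem__(key))
        return super().__getitem__(key)


xs = IntList([1, 2, 3])
assert isinstance(xs[1:], IntList)  # slicing preserves the subclass
assert xs[0] == 1
```

The `# noqa: F811` comments in the diff silence flake8's redefinition warning, which stacked `@overload` definitions of the same name would otherwise trigger.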
@@ -7,10 +7,18 @@ import tokenize
import warnings
from ast import PyCF_ONLY_AST as _AST_FLAG
from bisect import bisect_right
from types import FrameType
from typing import Iterator
from typing import List
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Union

import py

from _pytest.compat import overload


class Source:
    """ an immutable object holding a source code fragment,
@@ -19,7 +27,7 @@ class Source:

    _compilecounter = 0

    def __init__(self, *parts, **kwargs):
    def __init__(self, *parts, **kwargs) -> None:
        self.lines = lines = []  # type: List[str]
        de = kwargs.get("deindent", True)
        for part in parts:
@@ -48,7 +56,15 @@ class Source:
    # Ignore type because of https://github.com/python/mypy/issues/4266.
    __hash__ = None  # type: ignore

    def __getitem__(self, key):
    @overload
    def __getitem__(self, key: int) -> str:
        raise NotImplementedError()

    @overload  # noqa: F811
    def __getitem__(self, key: slice) -> "Source":  # noqa: F811
        raise NotImplementedError()

    def __getitem__(self, key: Union[int, slice]) -> Union[str, "Source"]:  # noqa: F811
        if isinstance(key, int):
            return self.lines[key]
        else:
@@ -58,10 +74,13 @@ class Source:
            newsource.lines = self.lines[key.start : key.stop]
            return newsource

    def __len__(self):
    def __iter__(self) -> Iterator[str]:
        return iter(self.lines)

    def __len__(self) -> int:
        return len(self.lines)

    def strip(self):
    def strip(self) -> "Source":
        """ return new source object with trailing
        and leading blank lines removed.
        """
@@ -74,18 +93,20 @@ class Source:
        source.lines[:] = self.lines[start:end]
        return source

    def putaround(self, before="", after="", indent=" " * 4):
    def putaround(
        self, before: str = "", after: str = "", indent: str = " " * 4
    ) -> "Source":
        """ return a copy of the source object with
        'before' and 'after' wrapped around it.
        """
        before = Source(before)
        after = Source(after)
        beforesource = Source(before)
        aftersource = Source(after)
        newsource = Source()
        lines = [(indent + line) for line in self.lines]
        newsource.lines = before.lines + lines + after.lines
        newsource.lines = beforesource.lines + lines + aftersource.lines
        return newsource

    def indent(self, indent=" " * 4):
    def indent(self, indent: str = " " * 4) -> "Source":
        """ return a copy of the source object with
        all lines indented by the given indent-string.
        """
@@ -93,14 +114,14 @@ class Source:
        newsource.lines = [(indent + line) for line in self.lines]
        return newsource

    def getstatement(self, lineno):
    def getstatement(self, lineno: int) -> "Source":
        """ return Source statement which contains the
        given linenumber (counted from 0).
        """
        start, end = self.getstatementrange(lineno)
        return self[start:end]

    def getstatementrange(self, lineno):
    def getstatementrange(self, lineno: int):
        """ return (start, end) tuple which spans the minimal
        statement region which containing the given lineno.
        """
@@ -109,13 +130,13 @@ class Source:
        ast, start, end = getstatementrange_ast(lineno, self)
        return start, end

    def deindent(self):
    def deindent(self) -> "Source":
        """return a new source object deindented."""
        newsource = Source()
        newsource.lines[:] = deindent(self.lines)
        return newsource

    def isparseable(self, deindent=True):
    def isparseable(self, deindent: bool = True) -> bool:
        """ return True if source is parseable, heuristically
        deindenting it by default.
        """
@@ -135,11 +156,16 @@ class Source:
        else:
            return True

    def __str__(self):
    def __str__(self) -> str:
        return "\n".join(self.lines)

    def compile(
        self, filename=None, mode="exec", flag=0, dont_inherit=0, _genframe=None
        self,
        filename=None,
        mode="exec",
        flag: int = 0,
        dont_inherit: int = 0,
        _genframe: Optional[FrameType] = None,
    ):
        """ return compiled code object. if filename is None
        invent an artificial filename which displays
@@ -183,7 +209,7 @@ class Source:
#


def compile_(source, filename=None, mode="exec", flags=0, dont_inherit=0):
def compile_(source, filename=None, mode="exec", flags: int = 0, dont_inherit: int = 0):
    """ compile the given source to a raw code object,
    and maintain an internal cache which allows later
    retrieval of the source code for the code object
@@ -233,7 +259,7 @@ def getfslineno(obj):
#


def findsource(obj):
def findsource(obj) -> Tuple[Optional[Source], int]:
    try:
        sourcelines, lineno = inspect.findsource(obj)
    except Exception:
@@ -243,7 +269,7 @@ def findsource(obj):
    return source, lineno


def getsource(obj, **kwargs):
def getsource(obj, **kwargs) -> Source:
    from .code import getrawcode

    obj = getrawcode(obj)
@@ -255,21 +281,21 @@ def getsource(obj, **kwargs):
    return Source(strsrc, **kwargs)


def deindent(lines):
def deindent(lines: Sequence[str]) -> List[str]:
    return textwrap.dedent("\n".join(lines)).splitlines()


def get_statement_startend2(lineno, node):
def get_statement_startend2(lineno: int, node: ast.AST) -> Tuple[int, Optional[int]]:
    import ast

    # flatten all statements and except handlers into one lineno-list
    # AST's line numbers start indexing at 1
    values = []
    values = []  # type: List[int]
    for x in ast.walk(node):
        if isinstance(x, (ast.stmt, ast.ExceptHandler)):
            values.append(x.lineno - 1)
            for name in ("finalbody", "orelse"):
                val = getattr(x, name, None)
                val = getattr(x, name, None)  # type: Optional[List[ast.stmt]]
                if val:
                    # treat the finally/orelse part as its own statement
                    values.append(val[0].lineno - 1 - 1)
@@ -283,7 +309,12 @@ def get_statement_startend2(lineno, node):
    return start, end


def getstatementrange_ast(lineno, source: Source, assertion=False, astnode=None):
def getstatementrange_ast(
    lineno: int,
    source: Source,
    assertion: bool = False,
    astnode: Optional[ast.AST] = None,
) -> Tuple[ast.AST, int, int]:
    if astnode is None:
        content = str(source)
        # See #4260:

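The module-level `deindent` helper annotated in the diff above is a thin wrapper over `textwrap.dedent`. A self-contained sketch of its behavior, reproduced from the diff so it can be run standalone:

```python
import textwrap
from typing import List, Sequence


def deindent(lines: Sequence[str]) -> List[str]:
    # Join the lines, strip their common leading whitespace, split back.
    return textwrap.dedent("\n".join(lines)).splitlines()


assert deindent(["    def f():", "        return 1"]) == ["def f():", "    return 1"]
```

`Source.deindent` (above) delegates to this helper to normalize indentation before statements are sliced out of a source fragment.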
@@ -1,19 +1,30 @@
import pprint
import reprlib
from typing import Any


def _format_repr_exception(exc, obj):
    exc_name = type(exc).__name__
def _try_repr_or_str(obj):
    try:
        exc_info = str(exc)
    except Exception:
        exc_info = "unknown"
    return '<[{}("{}") raised in repr()] {} object at 0x{:x}>'.format(
        exc_name, exc_info, obj.__class__.__name__, id(obj)
        return repr(obj)
    except (KeyboardInterrupt, SystemExit):
        raise
    except BaseException:
        return '{}("{}")'.format(type(obj).__name__, obj)


def _format_repr_exception(exc: BaseException, obj: Any) -> str:
    try:
        exc_info = _try_repr_or_str(exc)
    except (KeyboardInterrupt, SystemExit):
        raise
    except BaseException as exc:
        exc_info = "unpresentable exception ({})".format(_try_repr_or_str(exc))
    return "<[{} raised in repr()] {} object at 0x{:x}>".format(
        exc_info, obj.__class__.__name__, id(obj)
    )


def _ellipsize(s, maxsize):
def _ellipsize(s: str, maxsize: int) -> str:
    if len(s) > maxsize:
        i = max(0, (maxsize - 3) // 2)
        j = max(0, maxsize - 3 - i)
@@ -26,27 +37,31 @@ class SafeRepr(reprlib.Repr):
    and includes information on exceptions raised during the call.
    """

    def __init__(self, maxsize):
    def __init__(self, maxsize: int) -> None:
        super().__init__()
        self.maxstring = maxsize
        self.maxsize = maxsize

    def repr(self, x):
    def repr(self, x: Any) -> str:
        try:
            s = super().repr(x)
        except Exception as exc:
        except (KeyboardInterrupt, SystemExit):
            raise
        except BaseException as exc:
            s = _format_repr_exception(exc, x)
        return _ellipsize(s, self.maxsize)

    def repr_instance(self, x, level):
    def repr_instance(self, x: Any, level: int) -> str:
        try:
            s = repr(x)
        except Exception as exc:
        except (KeyboardInterrupt, SystemExit):
            raise
        except BaseException as exc:
            s = _format_repr_exception(exc, x)
        return _ellipsize(s, self.maxsize)


def safeformat(obj):
def safeformat(obj: Any) -> str:
    """return a pretty printed string for the given object.
    Failing __repr__ functions of user instances will be represented
    with a short exception info.
@@ -57,7 +72,7 @@ def safeformat(obj):
        return _format_repr_exception(exc, obj)


def saferepr(obj, maxsize=240):
def saferepr(obj: Any, maxsize: int = 240) -> str:
    """return a size-limited safe repr-string for the given object.
    Failing __repr__ functions of user instances will be represented
    with a short exception info and 'saferepr' generally takes

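The key behavioral change in the `saferepr` diff above is that failures are now caught as `BaseException` rather than `Exception`, while `KeyboardInterrupt` and `SystemExit` are explicitly re-raised so Ctrl-C and interpreter shutdown still propagate. A minimal sketch of that pattern (`tiny_saferepr` and `Evil` are illustrative names, not the pytest API):

```python
class Evil:
    def __repr__(self) -> str:
        raise RuntimeError("boom")


def tiny_saferepr(obj: object) -> str:
    # Let Ctrl-C and interpreter shutdown propagate; swallow everything else
    # and fall back to a placeholder string, as in the diff above.
    try:
        return repr(obj)
    except (KeyboardInterrupt, SystemExit):
        raise
    except BaseException as exc:
        return "<[{} raised in repr()] {} object at 0x{:x}>".format(
            type(exc).__name__, obj.__class__.__name__, id(obj)
        )


assert tiny_saferepr(Evil()).startswith("<[RuntimeError raised in repr()] Evil object")
assert tiny_saferepr([1, 2]) == "[1, 2]"
```

Catching only `Exception` (the old code) would have let a `__repr__` raising a bare `BaseException` subclass crash the reporting machinery itself.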
@ -163,5 +163,5 @@ def pytest_sessionfinish(session):
|
|||
assertstate.hook.set_session(None)
|
||||
|
||||
|
||||
# Expose this plugin's implementation for the pytest_assertrepr_compare hook
|
||||
pytest_assertrepr_compare = util.assertrepr_compare
|
||||
def pytest_assertrepr_compare(config, op, left, right):
|
||||
return util.assertrepr_compare(config=config, op=op, left=left, right=right)
@ -19,18 +19,18 @@ from typing import Optional
from typing import Set
from typing import Tuple

import atomicwrites

from _pytest._io.saferepr import saferepr
from _pytest._version import version
from _pytest.assertion import util
from _pytest.assertion.util import (  # noqa: F401
    format_explanation as _format_explanation,
)
from _pytest.compat import fspath
from _pytest.pathlib import fnmatch_ex
from _pytest.pathlib import Path
from _pytest.pathlib import PurePath

# pytest caches rewritten pycs in __pycache__.
# pytest caches rewritten pycs in pycache dirs
PYTEST_TAG = "{}-pytest-{}".format(sys.implementation.cache_tag, version)
PYC_EXT = ".py" + (__debug__ and "c" or "o")
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT

@ -78,7 +78,8 @@ class AssertionRewritingHook(importlib.abc.MetaPathFinder):
            # there's nothing to rewrite there
            # python3.5 - python3.6: `namespace`
            # python3.7+: `None`
            or spec.origin in {None, "namespace"}
            or spec.origin == "namespace"
            or spec.origin is None
            # we can only rewrite source files
            or not isinstance(spec.loader, importlib.machinery.SourceFileLoader)
            # if the file doesn't exist, we can't rewrite it

@ -102,7 +103,7 @@ class AssertionRewritingHook(importlib.abc.MetaPathFinder):
        return None  # default behaviour is fine

    def exec_module(self, module):
        fn = module.__spec__.origin
        fn = Path(module.__spec__.origin)
        state = self.config._assertstate

        self._rewritten_names.add(module.__name__)

@ -116,15 +117,15 @@ class AssertionRewritingHook(importlib.abc.MetaPathFinder):
        # cached pyc is always a complete, valid pyc. Operations on it must be
        # atomic. POSIX's atomic rename comes in handy.
        write = not sys.dont_write_bytecode
        cache_dir = os.path.join(os.path.dirname(fn), "__pycache__")
        cache_dir = get_cache_dir(fn)
        if write:
            ok = try_mkdir(cache_dir)
            ok = try_makedirs(cache_dir)
            if not ok:
                write = False
                state.trace("read only directory: {}".format(os.path.dirname(fn)))
                state.trace("read only directory: {}".format(cache_dir))

        cache_name = os.path.basename(fn)[:-3] + PYC_TAIL
        pyc = os.path.join(cache_dir, cache_name)
        cache_name = fn.name[:-3] + PYC_TAIL
        pyc = cache_dir / cache_name
        # Notice that even if we're in a read-only directory, I'm going
        # to check for a cached pyc. This may not be optimal...
        co = _read_pyc(fn, pyc, state.trace)

@ -138,7 +139,7 @@ class AssertionRewritingHook(importlib.abc.MetaPathFinder):
            finally:
                self._writing_pyc = False
        else:
            state.trace("found cached rewritten pyc for {!r}".format(fn))
            state.trace("found cached rewritten pyc for {}".format(fn))
        exec(co, module.__dict__)

    def _early_rewrite_bailout(self, name, state):

@ -252,12 +253,10 @@ class AssertionRewritingHook(importlib.abc.MetaPathFinder):
            return f.read()


def _write_pyc(state, co, source_stat, pyc):
def _write_pyc_fp(fp, source_stat, co):
    # Technically, we don't have to have the same pyc format as
    # (C)Python, since these "pycs" should never be seen by builtin
    # import. However, there's little reason deviate.
    try:
        with atomicwrites.atomic_write(pyc, mode="wb", overwrite=True) as fp:
    fp.write(importlib.util.MAGIC_NUMBER)
    # as of now, bytecode header expects 32-bit numbers for size and mtime (#4903)
    mtime = int(source_stat.st_mtime) & 0xFFFFFFFF

@ -265,17 +264,53 @@ def _write_pyc(state, co, source_stat, pyc):
    # "<LL" stands for 2 unsigned longs, little-ending
    fp.write(struct.pack("<LL", mtime, size))
    fp.write(marshal.dumps(co))


if sys.platform == "win32":
    from atomicwrites import atomic_write

    def _write_pyc(state, co, source_stat, pyc):
        try:
            with atomic_write(fspath(pyc), mode="wb", overwrite=True) as fp:
                _write_pyc_fp(fp, source_stat, co)
        except EnvironmentError as e:
            state.trace("error writing pyc file at {}: errno={}".format(pyc, e.errno))
            # we ignore any failure to write the cache file
            # there are many reasons, permission-denied, __pycache__ being a
            # there are many reasons, permission-denied, pycache dir being a
            # file etc.
            return False
        return True


else:

    def _write_pyc(state, co, source_stat, pyc):
        proc_pyc = "{}.{}".format(pyc, os.getpid())
        try:
            fp = open(proc_pyc, "wb")
        except EnvironmentError as e:
            state.trace(
                "error writing pyc file at {}: errno={}".format(proc_pyc, e.errno)
            )
            return False

        try:
            _write_pyc_fp(fp, source_stat, co)
            os.rename(proc_pyc, fspath(pyc))
        except BaseException as e:
            state.trace("error writing pyc file at {}: errno={}".format(pyc, e.errno))
            # we ignore any failure to write the cache file
            # there are many reasons, permission-denied, pycache dir being a
            # file etc.
            return False
        finally:
            fp.close()
        return True
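The platform split above exists because atomic file replacement differs by OS: on POSIX, writing to a process-unique sibling name and then `os.rename()`-ing over the target is atomic, so a concurrent reader never observes a half-written pyc; on Windows, `rename` cannot replace an existing file, hence the `atomicwrites` dependency there. A reduced sketch of the POSIX write-then-rename pattern (the `atomic_write_bytes` helper name and generic payload are illustrative):

```python
import os
import tempfile


def atomic_write_bytes(path: str, data: bytes) -> None:
    # Write to a unique sibling file, then rename over the target.
    # On POSIX, rename() within one filesystem is atomic: other
    # processes see either the old file or the complete new one.
    tmp = "{}.{}".format(path, os.getpid())
    try:
        with open(tmp, "wb") as fp:
            fp.write(data)
        os.rename(tmp, path)
    except BaseException:
        # Best-effort cleanup of the partial temp file.
        if os.path.exists(tmp):
            os.remove(tmp)
        raise


target = os.path.join(tempfile.mkdtemp(), "demo.pyc")
atomic_write_bytes(target, b"magic+payload")
with open(target, "rb") as f:
    print(f.read())  # b'magic+payload'
```

Using the pid in the temp name keeps concurrent pytest processes from clobbering each other's in-progress writes, which is the same reason the diff formats `proc_pyc` with `os.getpid()`.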
def _rewrite_test(fn, config):
    """read and rewrite *fn* and return the code object."""
    fn = fspath(fn)
    stat = os.stat(fn)
    with open(fn, "rb") as f:
        source = f.read()

@ -291,12 +326,12 @@ def _read_pyc(source, pyc, trace=lambda x: None):
    Return rewritten code if successful or None if not.
    """
    try:
        fp = open(pyc, "rb")
        fp = open(fspath(pyc), "rb")
    except IOError:
        return None
    with fp:
        try:
            stat_result = os.stat(source)
            stat_result = os.stat(fspath(source))
            mtime = int(stat_result.st_mtime)
            size = stat_result.st_size
            data = fp.read(12)

@ -743,13 +778,12 @@ class AssertionRewriter(ast.NodeVisitor):
            from _pytest.warning_types import PytestAssertRewriteWarning
            import warnings

            # Ignore type: typeshed bug https://github.com/python/typeshed/pull/3121
            warnings.warn_explicit(  # type: ignore
            warnings.warn_explicit(
                PytestAssertRewriteWarning(
                    "assertion is always true, perhaps remove parentheses?"
                ),
                category=None,
                filename=self.module_path,
                filename=fspath(self.module_path),
                lineno=assert_.lineno,
            )

@ -773,8 +807,9 @@ class AssertionRewriter(ast.NodeVisitor):
            )
        )

        if self.enable_assertion_pass_hook:  # Experimental pytest_assertion_pass hook
        negation = ast.UnaryOp(ast.Not(), top_condition)

        if self.enable_assertion_pass_hook:  # Experimental pytest_assertion_pass hook
            msg = self.pop_format_context(ast.Str(explanation))

            # Failed

@ -826,7 +861,6 @@ class AssertionRewriter(ast.NodeVisitor):
        else:  # Original assertion rewriting
            # Create failure message.
            body = self.expl_stmts
            negation = ast.UnaryOp(ast.Not(), top_condition)
            self.statements.append(ast.If(negation, body, []))
            if assert_.msg:
                assertmsg = self.helper("_format_assertmsg", assert_.msg)

@ -872,7 +906,7 @@ warn_explicit(
    lineno={lineno},
)
""".format(
                filename=module_path, lineno=lineno
                filename=fspath(module_path), lineno=lineno
            )
        ).body
        return ast.If(val_is_none, send_warning, [])

@ -1018,18 +1052,15 @@ warn_explicit(
        return res, self.explanation_param(self.pop_format_context(expl_call))


def try_mkdir(cache_dir):
    """Attempts to create the given directory, returns True if successful"""
def try_makedirs(cache_dir) -> bool:
    """Attempts to create the given directory and sub-directories exist, returns True if
    successful or it already exists"""
    try:
        os.mkdir(cache_dir)
    except FileExistsError:
        # Either the __pycache__ directory already exists (the
        # common case) or it's blocked by a non-dir node. In the
        # latter case, we'll ignore it in _write_pyc.
        return True
    except (FileNotFoundError, NotADirectoryError):
        # One of the path components was not a directory, likely
        # because we're in a zip file.
        os.makedirs(fspath(cache_dir), exist_ok=True)
    except (FileNotFoundError, NotADirectoryError, FileExistsError):
        # One of the path components was not a directory:
        # - we're in a zip file
        # - it is a file
        return False
    except PermissionError:
        return False

@ -1039,3 +1070,18 @@ def try_mkdir(cache_dir):
            return False
        raise
    return True


def get_cache_dir(file_path: Path) -> Path:
    """Returns the cache directory to write .pyc files for the given .py file path"""
    # Type ignored until added in next mypy release.
    if sys.version_info >= (3, 8) and sys.pycache_prefix:  # type: ignore
        # given:
        #     prefix = '/tmp/pycs'
        #     path = '/home/user/proj/test_app.py'
        # we want:
        #     '/tmp/pycs/home/user/proj'
        return Path(sys.pycache_prefix) / Path(*file_path.parts[1:-1])  # type: ignore
    else:
        # classic pycache directory
        return file_path.parent / "__pycache__"
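The new `get_cache_dir` mirrors the `sys.pycache_prefix` behaviour CPython gained in 3.8: when a prefix is set, the source file's directory tree (minus the root anchor) is re-rooted under the prefix; otherwise the classic sibling `__pycache__` is used. A standalone sketch of the mapping, with the prefix passed as an argument so it is testable without touching `sys` (POSIX-style paths assumed):

```python
from pathlib import Path
from typing import Optional


def get_cache_dir(file_path: Path, pycache_prefix: Optional[str] = None) -> Path:
    # Sketch of the helper above; pycache_prefix stands in for
    # sys.pycache_prefix so the mapping is easy to exercise.
    if pycache_prefix:
        # '/tmp/pycs' + '/home/user/proj/test_app.py'
        #   -> '/tmp/pycs/home/user/proj'
        # parts[1:-1] drops the root anchor and the filename.
        return Path(pycache_prefix) / Path(*file_path.parts[1:-1])
    return file_path.parent / "__pycache__"


print(get_cache_dir(Path("/home/user/proj/test_app.py"), "/tmp/pycs"))
print(get_cache_dir(Path("/home/user/proj/test_app.py")))
```

Re-rooting under one prefix keeps all rewritten pycs out of the source tree, which is why the hook above also switched from `os.path.join(..., "__pycache__")` to this helper.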
@ -1,12 +1,19 @@
"""Utilities for assertion debugging"""
import collections.abc
import pprint
from collections.abc import Sequence
from typing import AbstractSet
from typing import Any
from typing import Callable
from typing import Iterable
from typing import List
from typing import Mapping
from typing import Optional
from typing import Sequence
from typing import Tuple

import _pytest._code
from _pytest import outcomes
from _pytest._io.saferepr import safeformat
from _pytest._io.saferepr import saferepr
from _pytest.compat import ATTRS_EQ_FIELD

@ -21,7 +28,28 @@ _reprcompare = None  # type: Optional[Callable[[str, object, object], Optional[s
_assertion_pass = None  # type: Optional[Callable[[int, str, str], None]]


def format_explanation(explanation):
class AlwaysDispatchingPrettyPrinter(pprint.PrettyPrinter):
    """PrettyPrinter that always dispatches (regardless of width)."""

    def _format(self, object, stream, indent, allowance, context, level):
        p = self._dispatch.get(type(object).__repr__, None)

        objid = id(object)
        if objid in context or p is None:
            return super()._format(object, stream, indent, allowance, context, level)

        context[objid] = 1
        p(self, object, stream, indent, allowance, context, level + 1)
        del context[objid]


def _pformat_dispatch(object, indent=1, width=80, depth=None, *, compact=False):
    return AlwaysDispatchingPrettyPrinter(
        indent=1, width=80, depth=None, compact=False
    ).pformat(object)


def format_explanation(explanation: str) -> str:
    """This formats an explanation

    Normally all embedded newlines are escaped, however there are

@ -36,7 +64,7 @@ def format_explanation(explanation):
    return "\n".join(result)


def _split_explanation(explanation):
def _split_explanation(explanation: str) -> List[str]:
    """Return a list of individual lines in the explanation

    This will return a list of lines split on '\n{', '\n}' and '\n~'.

@ -53,7 +81,7 @@ def _split_explanation(explanation):
    return lines


def _format_lines(lines):
def _format_lines(lines: Sequence[str]) -> List[str]:
    """Format the individual lines

    This will replace the '{', '}' and '~' characters of our mini

@ -62,7 +90,7 @@ def _format_lines(lines):

    Return a list of formatted lines.
    """
    result = lines[:1]
    result = list(lines[:1])
    stack = [0]
    stackcnt = [0]
    for line in lines[1:]:

@ -88,31 +116,31 @@ def _format_lines(lines):
    return result


def issequence(x):
    return isinstance(x, Sequence) and not isinstance(x, str)
def issequence(x: Any) -> bool:
    return isinstance(x, collections.abc.Sequence) and not isinstance(x, str)


def istext(x):
def istext(x: Any) -> bool:
    return isinstance(x, str)


def isdict(x):
def isdict(x: Any) -> bool:
    return isinstance(x, dict)


def isset(x):
def isset(x: Any) -> bool:
    return isinstance(x, (set, frozenset))


def isdatacls(obj):
def isdatacls(obj: Any) -> bool:
    return getattr(obj, "__dataclass_fields__", None) is not None


def isattrs(obj):
def isattrs(obj: Any) -> bool:
    return getattr(obj, "__attrs_attrs__", None) is not None


def isiterable(obj):
def isiterable(obj: Any) -> bool:
    try:
        iter(obj)
        return not istext(obj)

@ -120,15 +148,23 @@ def isiterable(obj):
        return False


def assertrepr_compare(config, op, left, right):
def assertrepr_compare(config, op: str, left: Any, right: Any) -> Optional[List[str]]:
    """Return specialised explanations for some operators/operands"""
    maxsize = (80 - 15 - len(op) - 2) // 2  # 15 chars indentation, 1 space around op
    verbose = config.getoption("verbose")
    if verbose > 1:
        left_repr = safeformat(left)
        right_repr = safeformat(right)
    else:
        # XXX: "15 chars indentation" is wrong
        #      ("E       AssertionError: assert "); should use term width.
        maxsize = (
            80 - 15 - len(op) - 2
        ) // 2  # 15 chars indentation, 1 space around op
        left_repr = saferepr(left, maxsize=maxsize)
        right_repr = saferepr(right, maxsize=maxsize)

    summary = "{} {} {}".format(left_repr, op, right_repr)

    verbose = config.getoption("verbose")
    explanation = None
    try:
        if op == "==":

@ -170,33 +206,16 @@ def assertrepr_compare(config, op, left, right):
    return [summary] + explanation


def _diff_text(left, right, verbose=0):
    """Return the explanation for the diff between text or bytes.
def _diff_text(left: str, right: str, verbose: int = 0) -> List[str]:
    """Return the explanation for the diff between text.

    Unless --verbose is used this will skip leading and trailing
    characters which are identical to keep the diff minimal.

    If the input are bytes they will be safely converted to text.
    """
    from difflib import ndiff

    explanation = []  # type: List[str]

    def escape_for_readable_diff(binary_text):
        """
        Ensures that the internal string is always valid unicode, converting any bytes safely to valid unicode.
        This is done using repr() which then needs post-processing to fix the encompassing quotes and un-escape
        newlines and carriage returns (#429).
        """
        r = str(repr(binary_text)[1:-1])
        r = r.replace(r"\n", "\n")
        r = r.replace(r"\r", "\r")
        return r

    if isinstance(left, bytes):
        left = escape_for_readable_diff(left)
    if isinstance(right, bytes):
        right = escape_for_readable_diff(right)
    if verbose < 1:
        i = 0  # just in case left or right has zero length
        for i in range(min(len(left), len(right))):

@ -233,7 +252,7 @@ def _diff_text(left, right, verbose=0):
    return explanation


def _compare_eq_verbose(left, right):
def _compare_eq_verbose(left: Any, right: Any) -> List[str]:
    keepends = True
    left_lines = repr(left).splitlines(keepends)
    right_lines = repr(right).splitlines(keepends)

@ -245,7 +264,21 @@ def _compare_eq_verbose(left, right):
    return explanation


def _compare_eq_iterable(left, right, verbose=0):
def _surrounding_parens_on_own_lines(lines: List[str]) -> None:
    """Move opening/closing parenthesis/bracket to own lines."""
    opening = lines[0][:1]
    if opening in ["(", "[", "{"]:
        lines[0] = " " + lines[0][1:]
        lines[:] = [opening] + lines
    closing = lines[-1][-1:]
    if closing in [")", "]", "}"]:
        lines[-1] = lines[-1][:-1] + ","
        lines[:] = lines + [closing]


def _compare_eq_iterable(
    left: Iterable[Any], right: Iterable[Any], verbose: int = 0
) -> List[str]:
    if not verbose:
        return ["Use -v to get the full diff"]
    # dynamic import to speedup pytest

@ -253,14 +286,28 @@ def _compare_eq_iterable(left, right, verbose=0):

    left_formatting = pprint.pformat(left).splitlines()
    right_formatting = pprint.pformat(right).splitlines()

    # Re-format for different output lengths.
    lines_left = len(left_formatting)
    lines_right = len(right_formatting)
    if lines_left != lines_right:
        left_formatting = _pformat_dispatch(left).splitlines()
        right_formatting = _pformat_dispatch(right).splitlines()

    if lines_left > 1 or lines_right > 1:
        _surrounding_parens_on_own_lines(left_formatting)
        _surrounding_parens_on_own_lines(right_formatting)

    explanation = ["Full diff:"]
    explanation.extend(
        line.strip() for line in difflib.ndiff(left_formatting, right_formatting)
        line.rstrip() for line in difflib.ndiff(left_formatting, right_formatting)
    )
    return explanation
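The `_compare_eq_iterable` rewrite above pretty-prints both sides and feeds the lines to `difflib.ndiff`, switching `strip()` to `rstrip()` so that leading indentation inside the pretty-printed structures (and ndiff's two-space prefix on unchanged lines) survives in the "Full diff" output. The core of that diff generation, sketched independently:

```python
import difflib
import pprint

left = [0, 1, 2, 3]
right = [0, 1, 99, 3]

left_fmt = pprint.pformat(left).splitlines()
right_fmt = pprint.pformat(right).splitlines()

explanation = ["Full diff:"]
# rstrip (not strip) preserves ndiff's leading markers ('- ', '+ ',
# '? ', and '  ' for context lines) plus pprint's own indentation
# on wrapped nested structures.
explanation.extend(
    line.rstrip() for line in difflib.ndiff(left_fmt, right_fmt)
)
print("\n".join(explanation))
```

With `strip()`, a multi-line `pprint` rendering of a nested list loses its alignment, which made the old full diff hard to read; `rstrip()` only discards trailing whitespace that ndiff's `?` hint lines emit.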
def _compare_eq_sequence(left, right, verbose=0):
def _compare_eq_sequence(
    left: Sequence[Any], right: Sequence[Any], verbose: int = 0
) -> List[str]:
    comparing_bytes = isinstance(left, bytes) and isinstance(right, bytes)
    explanation = []  # type: List[str]
    len_left = len(left)

@ -314,7 +361,9 @@ def _compare_eq_sequence(left, right, verbose=0):
    return explanation


def _compare_eq_set(left, right, verbose=0):
def _compare_eq_set(
    left: AbstractSet[Any], right: AbstractSet[Any], verbose: int = 0
) -> List[str]:
    explanation = []
    diff_left = left - right
    diff_right = right - left

@ -329,7 +378,9 @@ def _compare_eq_set(left, right, verbose=0):
    return explanation


def _compare_eq_dict(left, right, verbose=0):
def _compare_eq_dict(
    left: Mapping[Any, Any], right: Mapping[Any, Any], verbose: int = 0
) -> List[str]:
    explanation = []  # type: List[str]
    set_left = set(left)
    set_right = set(right)

@ -368,7 +419,12 @@ def _compare_eq_dict(left, right, verbose=0):
    return explanation


def _compare_eq_cls(left, right, verbose, type_fns):
def _compare_eq_cls(
    left: Any,
    right: Any,
    verbose: int,
    type_fns: Tuple[Callable[[Any], bool], Callable[[Any], bool]],
) -> List[str]:
    isdatacls, isattrs = type_fns
    if isdatacls(left):
        all_fields = left.__dataclass_fields__

@ -402,7 +458,7 @@ def _compare_eq_cls(left, right, verbose, type_fns):
    return explanation


def _notin_text(term, text, verbose=0):
def _notin_text(term: str, text: str, verbose: int = 0) -> List[str]:
    index = text.find(term)
    head = text[:index]
    tail = text[index + len(term) :]
@ -7,6 +7,7 @@ ignores the external pytest-cache
import json
import os
from collections import OrderedDict
from typing import List

import attr
import py

@ -15,6 +16,9 @@ import pytest
from .pathlib import Path
from .pathlib import resolve_from_str
from .pathlib import rm_rf
from _pytest import nodes
from _pytest.config import Config
from _pytest.main import Session

README_CONTENT = """\
# pytest cache directory #

@ -121,13 +125,14 @@ class Cache:
            return
        if not cache_dir_exists_already:
            self._ensure_supporting_files()
        data = json.dumps(value, indent=2, sort_keys=True)
        try:
            f = path.open("w")
        except (IOError, OSError):
            self.warn("cache could not write path {path}", path=path)
        else:
            with f:
                json.dump(value, f, indent=2, sort_keys=True)
                f.write(data)

    def _ensure_supporting_files(self):
        """Create supporting files in the cache dir that are not really part of the cache."""

@ -263,10 +268,12 @@ class NFPlugin:
        self.active = config.option.newfirst
        self.cached_nodeids = config.cache.get("cache/nodeids", [])

    def pytest_collection_modifyitems(self, session, config, items):
        new_items = OrderedDict()
    def pytest_collection_modifyitems(
        self, session: Session, config: Config, items: List[nodes.Item]
    ) -> None:
        new_items = OrderedDict()  # type: OrderedDict[str, nodes.Item]
        if self.active:
            other_items = OrderedDict()
            other_items = OrderedDict()  # type: OrderedDict[str, nodes.Item]
            for item in items:
                if item.nodeid not in self.cached_nodeids:
                    new_items[item.nodeid] = item
@ -12,6 +12,7 @@ from tempfile import TemporaryFile

import pytest
from _pytest.compat import CaptureIO
from _pytest.fixtures import FixtureRequest

patchsysdict = {0: "stdin", 1: "stdout", 2: "stderr"}

@ -241,13 +242,12 @@ class CaptureManager:
capture_fixtures = {"capfd", "capfdbinary", "capsys", "capsysbinary"}


def _ensure_only_one_capture_fixture(request, name):
    fixtures = set(request.fixturenames) & capture_fixtures - {name}
def _ensure_only_one_capture_fixture(request: FixtureRequest, name):
    fixtures = sorted(set(request.fixturenames) & capture_fixtures - {name})
    if fixtures:
        fixtures = sorted(fixtures)
        fixtures = fixtures[0] if len(fixtures) == 1 else fixtures
        arg = fixtures[0] if len(fixtures) == 1 else fixtures
        raise request.raiseerror(
            "cannot use {} and {} at the same time".format(fixtures, name)
            "cannot use {} and {} at the same time".format(arg, name)
        )

@ -693,17 +693,12 @@ class SysCaptureBinary(SysCapture):

class DontReadFromInput:
    """Temporary stub class.  Ideally when stdin is accessed, the
    capturing should be turned off, with possibly all data captured
    so far sent to the screen.  This should be configurable, though,
    because in automated test runs it is better to crash than
    hang indefinitely.
    """

    encoding = None

    def read(self, *args):
        raise IOError("reading from stdin while output is captured")
        raise IOError(
            "pytest: reading from stdin while output is captured!  Consider using `-s`."
        )

    readline = read
    readlines = read
@ -4,12 +4,20 @@ python version compatibility code
import functools
import inspect
import io
import os
import re
import sys
from contextlib import contextmanager
from inspect import Parameter
from inspect import signature
from typing import Any
from typing import Callable
from typing import Generic
from typing import Optional
from typing import overload
from typing import Tuple
from typing import TypeVar
from typing import Union

import attr
import py

@ -19,6 +27,13 @@ from _pytest._io.saferepr import saferepr
from _pytest.outcomes import fail
from _pytest.outcomes import TEST_OUTCOME

if False:  # TYPE_CHECKING
    from typing import Type  # noqa: F401 (used in type string)


_T = TypeVar("_T")
_S = TypeVar("_S")


NOTSET = object()

@ -28,12 +43,13 @@ MODULE_NOT_FOUND_ERROR = (


if sys.version_info >= (3, 8):
    from importlib import metadata as importlib_metadata  # noqa: F401
    # Type ignored until next mypy release.
    from importlib import metadata as importlib_metadata  # type: ignore
else:
    import importlib_metadata  # noqa: F401


def _format_args(func):
def _format_args(func: Callable[..., Any]) -> str:
    return str(signature(func))


@ -41,12 +57,25 @@ def _format_args(func):
REGEX_TYPE = type(re.compile(""))


def is_generator(func):
if sys.version_info < (3, 6):

    def fspath(p):
        """os.fspath replacement, useful to point out when we should replace it by the
        real function once we drop py35.
        """
        return str(p)


else:
    fspath = os.fspath


def is_generator(func: object) -> bool:
    genfunc = inspect.isgeneratorfunction(func)
    return genfunc and not iscoroutinefunction(func)


def iscoroutinefunction(func):
def iscoroutinefunction(func: object) -> bool:
    """
    Return True if func is a coroutine function (a function defined with async
    def syntax, and doesn't contain yield), or a function decorated with

@ -59,7 +88,7 @@ def iscoroutinefunction(func):
    return inspect.iscoroutinefunction(func) or getattr(func, "_is_coroutine", False)


def getlocation(function, curdir=None):
def getlocation(function, curdir=None) -> str:
    function = get_real_func(function)
    fn = py.path.local(inspect.getfile(function))
    lineno = function.__code__.co_firstlineno

@ -68,7 +97,7 @@ def getlocation(function, curdir=None):
    return "%s:%d" % (fn, lineno + 1)


def num_mock_patch_args(function):
def num_mock_patch_args(function) -> int:
    """ return number of arguments used up by mock arguments (if any) """
    patchings = getattr(function, "patchings", None)
    if not patchings:

@ -87,7 +116,13 @@ def num_mock_patch_args(function):
    )


def getfuncargnames(function, *, name: str = "", is_method=False, cls=None):
def getfuncargnames(
    function: Callable[..., Any],
    *,
    name: str = "",
    is_method: bool = False,
    cls: Optional[type] = None
) -> Tuple[str, ...]:
    """Returns the names of a function's mandatory arguments.

    This should return the names of all function arguments that:

@ -155,7 +190,7 @@ else:
    from contextlib import nullcontext  # noqa


def get_default_arg_names(function):
def get_default_arg_names(function: Callable[..., Any]) -> Tuple[str, ...]:
    # Note: this code intentionally mirrors the code at the beginning of getfuncargnames,
    # to get the arguments which were excluded from its result because they had default values
    return tuple(

@ -174,18 +209,18 @@ _non_printable_ascii_translate_table.update(
)


def _translate_non_printable(s):
def _translate_non_printable(s: str) -> str:
    return s.translate(_non_printable_ascii_translate_table)


STRING_TYPES = bytes, str


def _bytes_to_ascii(val):
def _bytes_to_ascii(val: bytes) -> str:
    return val.decode("ascii", "backslashreplace")


def ascii_escaped(val):
def ascii_escaped(val: Union[bytes, str]):
    """If val is pure ascii, returns it as a str().  Otherwise, escapes
    bytes objects into a sequence of escaped bytes:

@ -282,7 +317,7 @@ def getimfunc(func):
        return func


def safe_getattr(object, name, default):
def safe_getattr(object: Any, name: str, default: Any) -> Any:
    """ Like getattr but return default upon any Exception or any OutcomeException.

    Attribute access can potentially fail for 'evil' Python objects.

@ -296,7 +331,7 @@ def safe_getattr(object, name, default):
        return default


def safe_isclass(obj):
def safe_isclass(obj: object) -> bool:
    """Ignore any exception via isinstance on Python 3."""
    try:
        return inspect.isclass(obj)

@ -317,39 +352,26 @@ COLLECT_FAKEMODULE_ATTRIBUTES = (
)


def _setup_collect_fakemodule():
def _setup_collect_fakemodule() -> None:
    from types import ModuleType
    import pytest

    pytest.collect = ModuleType("pytest.collect")
    pytest.collect.__all__ = []  # used for setns
    # Types ignored because the module is created dynamically.
    pytest.collect = ModuleType("pytest.collect")  # type: ignore
    pytest.collect.__all__ = []  # type: ignore # used for setns
    for attr_name in COLLECT_FAKEMODULE_ATTRIBUTES:
        setattr(pytest.collect, attr_name, getattr(pytest, attr_name))
        setattr(pytest.collect, attr_name, getattr(pytest, attr_name))  # type: ignore


class CaptureIO(io.TextIOWrapper):
    def __init__(self):
    def __init__(self) -> None:
        super().__init__(io.BytesIO(), encoding="UTF-8", newline="", write_through=True)

    def getvalue(self):
    def getvalue(self) -> str:
        assert isinstance(self.buffer, io.BytesIO)
        return self.buffer.getvalue().decode("UTF-8")


class FuncargnamesCompatAttr:
    """ helper class so that Metafunc, Function and FixtureRequest
    don't need to each define the "funcargnames" compatibility attribute.
    """

    @property
    def funcargnames(self):
        """ alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
        import warnings
        from _pytest.deprecated import FUNCARGNAMES

        warnings.warn(FUNCARGNAMES, stacklevel=2)
        return self.fixturenames


if sys.version_info < (3, 5, 2):  # pragma: no cover

    def overload(f):  # noqa: F811

@ -360,3 +382,35 @@ if getattr(attr, "__version_info__", ()) >= (19, 2):
    ATTRS_EQ_FIELD = "eq"
else:
    ATTRS_EQ_FIELD = "cmp"


if sys.version_info >= (3, 8):
    # TODO: Remove type ignore on next mypy update.
    # https://github.com/python/typeshed/commit/add0b5e930a1db16560fde45a3b710eefc625709
    from functools import cached_property  # type: ignore
else:

    class cached_property(Generic[_S, _T]):
        __slots__ = ("func", "__doc__")

        def __init__(self, func: Callable[[_S], _T]) -> None:
            self.func = func
            self.__doc__ = func.__doc__

        @overload
        def __get__(
            self, instance: None, owner: Optional["Type[_S]"] = ...
        ) -> "cached_property[_S, _T]":
            raise NotImplementedError()

        @overload  # noqa: F811
        def __get__(  # noqa: F811
            self, instance: _S, owner: Optional["Type[_S]"] = ...
        ) -> _T:
            raise NotImplementedError()

        def __get__(self, instance, owner=None):  # noqa: F811
            if instance is None:
                return self
            value = instance.__dict__[self.func.__name__] = self.func(instance)
            return value
|
||||
|
|
|
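The `cached_property` backport above can be exercised in isolation. Below is a sketch of the same descriptor-protocol trick, minus the typing overloads (class and method names here are illustrative, not part of the diff):

```python
class cached_property:
    """Untyped sketch of the backport above: the descriptor computes the
    value once, then stores it in the instance __dict__ under the same
    name, so later attribute lookups bypass the descriptor entirely."""

    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = instance.__dict__[self.func.__name__] = self.func(instance)
        return value


class Squares:
    def __init__(self, n):
        self.n = n
        self.calls = 0

    @cached_property
    def square(self):
        self.calls += 1
        return self.n * self.n


s = Squares(7)
print(s.square, s.square, s.calls)  # prints: 49 49 1
```

Because the class defines no `__set__`, it is a non-data descriptor, so the value written into `instance.__dict__` shadows it on every subsequent lookup.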
@@ -8,7 +8,6 @@ import sys
import types
import warnings
from functools import lru_cache
from pathlib import Path
from types import TracebackType
from typing import Any
from typing import Callable

@@ -18,6 +17,7 @@ from typing import Optional
from typing import Sequence
from typing import Set
from typing import Tuple
from typing import Union

import attr
import py

@@ -39,6 +39,7 @@ from _pytest._code import filter_traceback
from _pytest.compat import importlib_metadata
from _pytest.outcomes import fail
from _pytest.outcomes import Skipped
from _pytest.pathlib import Path
from _pytest.warning_types import PytestConfigWarning

if False:  # TYPE_CHECKING

@@ -56,7 +57,7 @@ class ConftestImportFailure(Exception):
        self.excinfo = excinfo  # type: Tuple[Type[Exception], Exception, TracebackType]


def main(args=None, plugins=None):
def main(args=None, plugins=None) -> "Union[int, _pytest.main.ExitCode]":
    """ return exit code, after performing an in-process test run.

    :arg args: list of command line arguments.

@@ -84,10 +85,16 @@ def main(args=None, plugins=None):
            formatted_tb = str(exc_repr)
            for line in formatted_tb.splitlines():
                tw.line(line.rstrip(), red=True)
            return 4
            return ExitCode.USAGE_ERROR
        else:
            try:
                return config.hook.pytest_cmdline_main(config=config)
                ret = config.hook.pytest_cmdline_main(
                    config=config
                )  # type: Union[ExitCode, int]
                try:
                    return ExitCode(ret)
                except ValueError:
                    return ret
            finally:
                config._ensure_unconfigure()
    except UsageError as e:
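The rewritten `main()` now funnels whatever `pytest_cmdline_main` returns through the `ExitCode` enum, while leaving plain ints from plugins intact. A standalone sketch of that conversion (the enum members mirror pytest's `ExitCode`; `normalize` is a hypothetical helper name):

```python
import enum


class ExitCode(enum.IntEnum):
    """Mirrors pytest's ExitCode members."""

    OK = 0
    TESTS_FAILED = 1
    INTERRUPTED = 2
    INTERNAL_ERROR = 3
    USAGE_ERROR = 4
    NO_TESTS_COLLECTED = 5


def normalize(ret):
    # Same try/except ValueError as in main(): known codes become enum
    # members, unknown ints (e.g. custom plugin codes) pass through as-is.
    try:
        return ExitCode(ret)
    except ValueError:
        return ret


print(normalize(4) is ExitCode.USAGE_ERROR)  # True: 4 maps onto the enum
print(normalize(42))                         # 42 stays a bare int
```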
@@ -124,13 +131,13 @@ def directory_arg(path, optname):


# Plugins that cannot be disabled via "-p no:X" currently.
essential_plugins = (  # fmt: off
essential_plugins = (
    "mark",
    "main",
    "runner",
    "fixtures",
    "helpconfig",  # Provides -p.
)  # fmt: on
)

default_plugins = essential_plugins + (
    "python",

@@ -169,7 +176,7 @@ def get_config(args=None, plugins=None):
    config = Config(
        pluginmanager,
        invocation_params=Config.InvocationParams(
            args=args, plugins=plugins, dir=Path().resolve()
            args=args or (), plugins=plugins, dir=Path().resolve()
        ),
    )

@@ -649,7 +656,7 @@ class Config:

    Contains the following read-only attributes:

    * ``args``: list of command-line arguments as passed to ``pytest.main()``.
    * ``args``: tuple of command-line arguments as passed to ``pytest.main()``.
    * ``plugins``: list of extra plugins, might be None.
    * ``dir``: directory where ``pytest.main()`` was invoked from.
    """

@@ -662,13 +669,13 @@ class Config:

        .. note::

            Currently the environment variable PYTEST_ADDOPTS is also handled by
            pytest implicitly, not being part of the invocation.
            Note that the environment variable ``PYTEST_ADDOPTS`` and the ``addopts``
            ini option are handled by pytest, not being included in the ``args`` attribute.

            Plugins accessing ``InvocationParams`` must be aware of that.
        """

        args = attr.ib()
        args = attr.ib(converter=tuple)
        plugins = attr.ib()
        dir = attr.ib(type=Path)


@@ -697,7 +704,9 @@ class Config:
        self._cleanup = []  # type: List[Callable[[], None]]
        self.pluginmanager.register(self, "pytestconfig")
        self._configured = False
        self.hook.pytest_addoption.call_historic(kwargs=dict(parser=self._parser))
        self.hook.pytest_addoption.call_historic(
            kwargs=dict(parser=self._parser, pluginmanager=self.pluginmanager)
        )

    @property
    def invocation_dir(self):

@@ -933,7 +942,6 @@ class Config:
        assert not hasattr(
            self, "args"
        ), "can only parse cmdline args at most once per Config object"
        assert self.invocation_params.args == args
        self.hook.pytest_addhooks.call_historic(
            kwargs=dict(pluginmanager=self.pluginmanager)
        )

@@ -965,7 +973,7 @@ class Config:
    def getini(self, name: str):
        """ return configuration value from an :ref:`ini file <inifiles>`. If the
        specified name hasn't been registered through a prior
        :py:func:`parser.addini <_pytest.config.Parser.addini>`
        :py:func:`parser.addini <_pytest.config.argparsing.Parser.addini>`
        call (usually from a plugin), a ValueError is raised. """
        try:
            return self._inicache[name]
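`InvocationParams.args` now goes through `attr.ib(converter=tuple)`, so any iterable passed to `pytest.main()` is frozen into a tuple at construction time. A stdlib-only sketch of that normalization (the real class uses attrs, not dataclasses; this stand-in emulates the converter in `__post_init__`):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class InvocationParams:
    """Frozen-dataclass stand-in for the attrs-based class in the diff."""

    args: tuple
    plugins: object = None
    dir: Path = Path(".")

    def __post_init__(self):
        # Emulates attr.ib(converter=tuple): any iterable (e.g. a list of
        # CLI arguments) is normalized into an immutable tuple.
        object.__setattr__(self, "args", tuple(self.args))


p = InvocationParams(args=["-x", "tests/"])
print(p.args)  # ('-x', 'tests/')
```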
@@ -47,7 +47,7 @@ class Parser:

        The returned group object has an ``addoption`` method with the same
        signature as :py:func:`parser.addoption
        <_pytest.config.Parser.addoption>` but will be shown in the
        <_pytest.config.argparsing.Parser.addoption>` but will be shown in the
        respective group in the output of ``pytest. --help``.
        """
        for group in self._groups:

@@ -395,7 +395,7 @@ class MyOptionParser(argparse.ArgumentParser):
            options = ", ".join(option for _, option, _ in option_tuples)
            self.error(msg % {"option": arg_string, "matches": options})
        elif len(option_tuples) == 1:
            option_tuple, = option_tuples
            (option_tuple,) = option_tuples
            return option_tuple
        if self._negative_number_matcher.match(arg_string):
            if not self._has_negative_number_optionals:
@@ -1,9 +1,7 @@
""" interactive debugging with PDB, the Python Debugger. """
import argparse
import functools
import pdb
import sys
from doctest import UnexpectedException

from _pytest import outcomes
from _pytest.config import hookimpl

@@ -46,6 +44,8 @@ def pytest_addoption(parser):


def pytest_configure(config):
    import pdb

    if config.getvalue("trace"):
        config.pluginmanager.register(PdbTrace(), "pdbtrace")
    if config.getvalue("usepdb"):

@@ -88,6 +88,8 @@ class pytestPDB:
    @classmethod
    def _import_pdb_cls(cls, capman):
        if not cls._config:
            import pdb

            # Happens when using pytest.set_trace outside of a test.
            return pdb.Pdb

@@ -114,6 +116,8 @@ class pytestPDB:
                    "--pdbcls: could not import {!r}: {}".format(value, exc)
                )
        else:
            import pdb

            pdb_cls = pdb.Pdb

        wrapped_cls = cls._get_pdb_wrapper_class(pdb_cls, capman)

@@ -317,6 +321,8 @@ def _enter_pdb(node, excinfo, rep):


def _postmortem_traceback(excinfo):
    from doctest import UnexpectedException

    if isinstance(excinfo.value, UnexpectedException):
        # A doctest.UnexpectedException is not useful for post_mortem.
        # Use the underlying exception instead:
@@ -26,7 +26,7 @@ FUNCARGNAMES = PytestDeprecationWarning(


RESULT_LOG = PytestDeprecationWarning(
    "--result-log is deprecated and scheduled for removal in pytest 6.0.\n"
    "--result-log is deprecated, please try the new pytest-reportlog plugin.\n"
    "See https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log for more information."
)

@@ -34,3 +34,8 @@ FIXTURE_POSITIONAL_ARGUMENTS = PytestDeprecationWarning(
    "Passing arguments to pytest.fixture() as positional arguments is deprecated - pass them "
    "as a keyword argument instead."
)

JUNIT_XML_DEFAULT_FAMILY = PytestDeprecationWarning(
    "The 'junit_family' default value will change to 'xunit2' in pytest 6.0.\n"
    "Add 'junit_family=legacy' to your pytest.ini file to silence this warning and make your suite compatible."
)
@@ -1,12 +1,20 @@
""" discover and run doctests in modules and test files."""
import bdb
import inspect
import platform
import sys
import traceback
import warnings
from contextlib import contextmanager
from typing import Dict
from typing import List
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Union

import pytest
from _pytest import outcomes
from _pytest._code.code import ExceptionInfo
from _pytest._code.code import ReprFileLocation
from _pytest._code.code import TerminalRepr

@@ -16,6 +24,10 @@ from _pytest.outcomes import Skipped
from _pytest.python_api import approx
from _pytest.warning_types import PytestWarning

if False:  # TYPE_CHECKING
    import doctest
    from typing import Type

DOCTEST_REPORT_CHOICE_NONE = "none"
DOCTEST_REPORT_CHOICE_CDIFF = "cdiff"
DOCTEST_REPORT_CHOICE_NDIFF = "ndiff"

@@ -32,6 +44,8 @@ DOCTEST_REPORT_CHOICES = (

# Lazy definition of runner class
RUNNER_CLASS = None
# Lazy definition of output checker class
CHECKER_CLASS = None  # type: Optional[Type[doctest.OutputChecker]]


def pytest_addoption(parser):

@@ -84,6 +98,12 @@ def pytest_addoption(parser):
    )


def pytest_unconfigure():
    global RUNNER_CLASS

    RUNNER_CLASS = None


def pytest_collect_file(path, parent):
    config = parent.config
    if path.ext == ".py":

@@ -111,11 +131,12 @@ def _is_doctest(config, path, parent):


class ReprFailDoctest(TerminalRepr):
    def __init__(self, reprlocation_lines):
        # List of (reprlocation, lines) tuples
    def __init__(
        self, reprlocation_lines: Sequence[Tuple[ReprFileLocation, Sequence[str]]]
    ):
        self.reprlocation_lines = reprlocation_lines

    def toterminal(self, tw):
    def toterminal(self, tw) -> None:
        for reprlocation, lines in self.reprlocation_lines:
            for line in lines:
                tw.line(line)

@@ -128,7 +149,7 @@ class MultipleDoctestFailures(Exception):
        self.failures = failures


def _init_runner_class():
def _init_runner_class() -> "Type[doctest.DocTestRunner]":
    import doctest

    class PytestDoctestRunner(doctest.DebugRunner):

@@ -155,6 +176,8 @@ def _init_runner_class():
        def report_unexpected_exception(self, out, test, example, exc_info):
            if isinstance(exc_info[1], Skipped):
                raise exc_info[1]
            if isinstance(exc_info[1], bdb.BdbQuit):
                outcomes.exit("Quitting debugger")
            failure = doctest.UnexpectedException(test, example, exc_info)
            if self.continue_on_failure:
                out.append(failure)

@@ -164,12 +187,19 @@ def _init_runner_class():
    return PytestDoctestRunner


def _get_runner(checker=None, verbose=None, optionflags=0, continue_on_failure=True):
def _get_runner(
    checker: Optional["doctest.OutputChecker"] = None,
    verbose: Optional[bool] = None,
    optionflags: int = 0,
    continue_on_failure: bool = True,
) -> "doctest.DocTestRunner":
    # We need this in order to do a lazy import on doctest
    global RUNNER_CLASS
    if RUNNER_CLASS is None:
        RUNNER_CLASS = _init_runner_class()
    return RUNNER_CLASS(
    # Type ignored because the continue_on_failure argument is only defined on
    # PytestDoctestRunner, which is lazily defined so can't be used as a type.
    return RUNNER_CLASS(  # type: ignore
        checker=checker,
        verbose=verbose,
        optionflags=optionflags,
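`_get_runner` shows the plugin's lazy-import pattern: the runner class is built inside a factory on first use and cached in a module global (which `pytest_unconfigure` resets). A reduced sketch of the same pattern (`Runner`/`get_runner` are illustrative names):

```python
RUNNER_CLASS = None  # populated on first use, cleared by pytest_unconfigure


def _init_runner_class():
    import doctest  # deferred: only imported when doctests actually run

    class Runner(doctest.DebugRunner):
        """Stand-in for PytestDoctestRunner."""

    return Runner


def get_runner(**kwargs):
    # First call pays for the doctest import and class creation;
    # subsequent calls reuse the cached class.
    global RUNNER_CLASS
    if RUNNER_CLASS is None:
        RUNNER_CLASS = _init_runner_class()
    return RUNNER_CLASS(**kwargs)


r1, r2 = get_runner(verbose=False), get_runner(verbose=False)
print(type(r1) is type(r2))  # True: the class is created exactly once
```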
@@ -198,7 +228,7 @@ class DoctestItem(pytest.Item):
    def runtest(self):
        _check_all_skipped(self.dtest)
        self._disable_output_capturing_for_darwin()
        failures = []
        failures = []  # type: List[doctest.DocTestFailure]
        self.runner.run(self.dtest, out=failures)
        if failures:
            raise MultipleDoctestFailures(failures)

@@ -219,7 +249,9 @@ class DoctestItem(pytest.Item):
    def repr_failure(self, excinfo):
        import doctest

        failures = None
        failures = (
            None
        )  # type: Optional[List[Union[doctest.DocTestFailure, doctest.UnexpectedException]]]
        if excinfo.errisinstance((doctest.DocTestFailure, doctest.UnexpectedException)):
            failures = [excinfo.value]
        elif excinfo.errisinstance(MultipleDoctestFailures):

@@ -242,8 +274,10 @@ class DoctestItem(pytest.Item):
                    self.config.getoption("doctestreport")
                )
                if lineno is not None:
                    assert failure.test.docstring is not None
                    lines = failure.test.docstring.splitlines(False)
                    # add line numbers to the left of the error message
                    assert test.lineno is not None
                    lines = [
                        "%03d %s" % (i + test.lineno + 1, x)
                        for (i, x) in enumerate(lines)

@@ -271,11 +305,11 @@ class DoctestItem(pytest.Item):
        else:
            return super().repr_failure(excinfo)

    def reportinfo(self):
    def reportinfo(self) -> Tuple[str, int, str]:
        return self.fspath, self.dtest.lineno, "[doctest] %s" % self.name


def _get_flag_lookup():
def _get_flag_lookup() -> Dict[str, int]:
    import doctest

    return dict(

@@ -327,7 +361,7 @@ class DoctestTextfile(pytest.Module):
        optionflags = get_optionflags(self)

        runner = _get_runner(
            verbose=0,
            verbose=False,
            optionflags=optionflags,
            checker=_get_checker(),
            continue_on_failure=_get_continue_on_failure(self.config),

@@ -406,7 +440,8 @@ class DoctestModule(pytest.Module):
                    return
                with _patch_unwrap_mock_aware():

                    doctest.DocTestFinder._find(
                    # Type ignored because this is a private function.
                    doctest.DocTestFinder._find(  # type: ignore
                        self, tests, obj, name, module, source_lines, globs, seen
                    )

@@ -424,7 +459,7 @@ class DoctestModule(pytest.Module):
        finder = MockAwareDocTestFinder()
        optionflags = get_optionflags(self)
        runner = _get_runner(
            verbose=0,
            verbose=False,
            optionflags=optionflags,
            checker=_get_checker(),
            continue_on_failure=_get_continue_on_failure(self.config),

@@ -453,24 +488,7 @@ def _setup_fixtures(doctest_item):
    return fixture_request


def _get_checker():
    """
    Returns a doctest.OutputChecker subclass that supports some
    additional options:

    * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b''
      prefixes (respectively) in string literals. Useful when the same
      doctest should run in Python 2 and Python 3.

    * NUMBER to ignore floating-point differences smaller than the
      precision of the literal number in the doctest.

    An inner class is used to avoid importing "doctest" at the module
    level.
    """
    if hasattr(_get_checker, "LiteralsOutputChecker"):
        return _get_checker.LiteralsOutputChecker()

def _init_checker_class() -> "Type[doctest.OutputChecker]":
    import doctest
    import re

@@ -560,11 +578,31 @@ def _get_checker():
            offset += w.end() - w.start() - (g.end() - g.start())
            return got

    _get_checker.LiteralsOutputChecker = LiteralsOutputChecker
    return _get_checker.LiteralsOutputChecker()
    return LiteralsOutputChecker


def _get_allow_unicode_flag():
def _get_checker() -> "doctest.OutputChecker":
    """
    Returns a doctest.OutputChecker subclass that supports some
    additional options:

    * ALLOW_UNICODE and ALLOW_BYTES options to ignore u'' and b''
      prefixes (respectively) in string literals. Useful when the same
      doctest should run in Python 2 and Python 3.

    * NUMBER to ignore floating-point differences smaller than the
      precision of the literal number in the doctest.

    An inner class is used to avoid importing "doctest" at the module
    level.
    """
    global CHECKER_CLASS
    if CHECKER_CLASS is None:
        CHECKER_CLASS = _init_checker_class()
    return CHECKER_CLASS()


def _get_allow_unicode_flag() -> int:
    """
    Registers and returns the ALLOW_UNICODE flag.
    """

@@ -573,7 +611,7 @@ def _get_allow_unicode_flag():
    return doctest.register_optionflag("ALLOW_UNICODE")


def _get_allow_bytes_flag():
def _get_allow_bytes_flag() -> int:
    """
    Registers and returns the ALLOW_BYTES flag.
    """

@@ -582,7 +620,7 @@ def _get_allow_bytes_flag():
    return doctest.register_optionflag("ALLOW_BYTES")


def _get_number_flag():
def _get_number_flag() -> int:
    """
    Registers and returns the NUMBER flag.
    """

@@ -591,7 +629,7 @@ def _get_number_flag():
    return doctest.register_optionflag("NUMBER")


def _get_report_choice(key):
def _get_report_choice(key: str) -> int:
    """
    This function returns the actual `doctest` module flag value, we want to do it as late as possible to avoid
    importing `doctest` and all its dependencies when parsing options, as it adds overhead and breaks tests.
@@ -7,18 +7,17 @@ from collections import defaultdict
from collections import deque
from collections import OrderedDict
from typing import Dict
from typing import List
from typing import Tuple

import attr
import py

import _pytest
from _pytest import nodes
from _pytest._code.code import FormattedExcinfo
from _pytest._code.code import TerminalRepr
from _pytest.compat import _format_args
from _pytest.compat import _PytestWrapper
from _pytest.compat import FuncargnamesCompatAttr
from _pytest.compat import get_real_func
from _pytest.compat import get_real_method
from _pytest.compat import getfslineno

@@ -29,12 +28,15 @@ from _pytest.compat import is_generator
from _pytest.compat import NOTSET
from _pytest.compat import safe_getattr
from _pytest.deprecated import FIXTURE_POSITIONAL_ARGUMENTS
from _pytest.deprecated import FUNCARGNAMES
from _pytest.outcomes import fail
from _pytest.outcomes import TEST_OUTCOME

if False:  # TYPE_CHECKING
    from typing import Type

    from _pytest import nodes


@attr.s(frozen=True)
class PseudoFixtureDef:

@@ -334,7 +336,7 @@ class FuncFixtureInfo:
        self.names_closure[:] = sorted(closure, key=self.names_closure.index)


class FixtureRequest(FuncargnamesCompatAttr):
class FixtureRequest:
    """ A request for a fixture from a test or fixture function.

    A request object gives access to the requesting test context

@@ -361,6 +363,12 @@ class FixtureRequest(FuncargnamesCompatAttr):
        result.extend(set(self._fixture_defs).difference(result))
        return result

    @property
    def funcargnames(self):
        """ alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
        warnings.warn(FUNCARGNAMES, stacklevel=2)
        return self.fixturenames

    @property
    def node(self):
        """ underlying collection node (depends on current request scope)"""
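The `FuncargnamesCompatAttr` mixin gives way to a plain deprecation property defined directly on `FixtureRequest`. The pattern in isolation (the warning instance and message are stand-ins for `_pytest.deprecated.FUNCARGNAMES`):

```python
import warnings

# stand-in for _pytest.deprecated.FUNCARGNAMES
FUNCARGNAMES = DeprecationWarning(
    'The "funcargnames" attribute is an alias for "fixturenames".'
)


class FixtureRequest:
    def __init__(self, fixturenames):
        self.fixturenames = fixturenames

    @property
    def funcargnames(self):
        """alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
        # stacklevel=2 points the warning at the caller's access site
        warnings.warn(FUNCARGNAMES, stacklevel=2)
        return self.fixturenames


req = FixtureRequest(["tmp_path", "capsys"])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    names = req.funcargnames
print(names, len(caught))  # ['tmp_path', 'capsys'] 1
```

Passing a `Warning` instance to `warnings.warn` lets the deprecation message live in one module (`_pytest.deprecated`) instead of being repeated at every alias.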
@@ -689,8 +697,8 @@ class FixtureLookupError(LookupError):
        self.fixturestack = request._get_fixturestack()
        self.msg = msg

    def formatrepr(self):
        tblines = []
    def formatrepr(self) -> "FixtureLookupErrorRepr":
        tblines = []  # type: List[str]
        addline = tblines.append
        stack = [self.request._pyfuncitem.obj]
        stack.extend(map(lambda x: x.func, self.fixturestack))

@@ -742,7 +750,7 @@ class FixtureLookupErrorRepr(TerminalRepr):
        self.firstlineno = firstlineno
        self.argname = argname

    def toterminal(self, tw):
    def toterminal(self, tw) -> None:
        # tw.line("FixtureLookupError: %s" %(self.argname), red=True)
        for tbline in self.tblines:
            tw.line(tbline.rstrip())

@@ -1283,6 +1291,8 @@ class FixtureManager:
        except AttributeError:
            pass
        else:
            from _pytest import nodes

            # construct the base nodeid which is later used to check
            # what fixtures are visible for particular tests (as denoted
            # by their test id)

@@ -1459,6 +1469,8 @@ class FixtureManager:
        return tuple(self._matchfactories(fixturedefs, nodeid))

    def _matchfactories(self, fixturedefs, nodeid):
        from _pytest import nodes

        for fixturedef in fixturedefs:
            if nodes.ischildnode(fixturedef.baseid, nodeid):
                yield fixturedef
@@ -115,9 +115,10 @@ def pytest_cmdline_parse():


def showversion(config):
    p = py.path.local(pytest.__file__)
    sys.stderr.write(
        "This is pytest version {}, imported from {}\n".format(pytest.__version__, p)
        "This is pytest version {}, imported from {}\n".format(
            pytest.__version__, pytest.__file__
        )
    )
    plugininfo = getpluginversioninfo(config)
    if plugininfo:
@@ -35,7 +35,7 @@ def pytest_plugin_registered(plugin, manager):


@hookspec(historic=True)
def pytest_addoption(parser):
def pytest_addoption(parser, pluginmanager):
    """register argparse-style options and ini-style config values,
    called once at the beginning of a test run.

@@ -45,10 +45,15 @@ def pytest_addoption(parser):
    files situated at the tests root directory due to how pytest
    :ref:`discovers plugins during startup <pluginorder>`.

    :arg _pytest.config.Parser parser: To add command line options, call
        :py:func:`parser.addoption(...) <_pytest.config.Parser.addoption>`.
    :arg _pytest.config.argparsing.Parser parser: To add command line options, call
        :py:func:`parser.addoption(...) <_pytest.config.argparsing.Parser.addoption>`.
        To add ini-file values call :py:func:`parser.addini(...)
        <_pytest.config.Parser.addini>`.
        <_pytest.config.argparsing.Parser.addini>`.

    :arg _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager,
        which can be used to install :py:func:`hookspec`'s or :py:func:`hookimpl`'s
        and allow one plugin to call another plugin's hooks to change how
        command line options are added.

    Options can later be accessed through the
    :py:class:`config <_pytest.config.Config>` object, respectively:

@@ -143,7 +148,7 @@ def pytest_load_initial_conftests(early_config, parser, args):

    :param _pytest.config.Config early_config: pytest config object
    :param list[str] args: list of arguments passed on the command line
    :param _pytest.config.Parser parser: to add command line options
    :param _pytest.config.argparsing.Parser parser: to add command line options
    """


@@ -381,16 +386,6 @@ def pytest_runtest_logreport(report):
@hookspec(firstresult=True)
def pytest_report_to_serializable(config, report):
    """
    .. warning::
        This hook is experimental and subject to change between pytest releases, even
        bug fixes.

        The intent is for this to be used by plugins maintained by the core-devs, such
        as ``pytest-xdist``, ``pytest-subtests``, and as a replacement for the internal
        'resultlog' plugin.

        In the future it might become part of the public hook API.

    Serializes the given report object into a data structure suitable for sending
    over the wire, e.g. converted to JSON.
    """

@@ -399,16 +394,6 @@ def pytest_report_to_serializable(config, report):
@hookspec(firstresult=True)
def pytest_report_from_serializable(config, data):
    """
    .. warning::
        This hook is experimental and subject to change between pytest releases, even
        bug fixes.

        The intent is for this to be used by plugins maintained by the core-devs, such
        as ``pytest-xdist``, ``pytest-subtests``, and as a replacement for the internal
        'resultlog' plugin.

        In the future it might become part of the public hook API.

    Restores a report object previously serialized with pytest_report_to_serializable().
    """
@@ -19,8 +19,10 @@ from datetime import datetime
import py

import pytest
from _pytest import deprecated
from _pytest import nodes
from _pytest.config import filename_arg
from _pytest.warnings import _issue_warning_captured


class Junit(py.xml.Namespace):

@@ -421,9 +423,7 @@ def pytest_addoption(parser):
        default="total",
    )  # choices=['total', 'call'])
    parser.addini(
        "junit_family",
        "Emit XML for schema: one of legacy|xunit1|xunit2",
        default="xunit1",
        "junit_family", "Emit XML for schema: one of legacy|xunit1|xunit2", default=None
    )


@@ -431,13 +431,17 @@ def pytest_configure(config):
    xmlpath = config.option.xmlpath
    # prevent opening xmllog on slave nodes (xdist)
    if xmlpath and not hasattr(config, "slaveinput"):
        junit_family = config.getini("junit_family")
        if not junit_family:
            _issue_warning_captured(deprecated.JUNIT_XML_DEFAULT_FAMILY, config.hook, 2)
            junit_family = "xunit1"
        config._xml = LogXML(
            xmlpath,
            config.option.junitprefix,
            config.getini("junit_suite_name"),
            config.getini("junit_logging"),
            config.getini("junit_duration_report"),
            config.getini("junit_family"),
            junit_family,
            config.getini("junit_log_passing_tests"),
        )
        config.pluginmanager.register(config._xml)
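The `junit_family` default moves from `"xunit1"` to `None` so that `pytest_configure` can distinguish "user never set it" from an explicit choice, warn once about the upcoming default change, and then fall back. The decision logic in isolation (`resolve_junit_family` is a hypothetical helper name, not part of the diff):

```python
def resolve_junit_family(ini_value, warn):
    """Mirror of the pytest_configure logic above: warn on the implicit
    default, leave explicit user choices untouched."""
    if not ini_value:
        warn("The 'junit_family' default value will change to 'xunit2' in pytest 6.0.")
        return "xunit1"
    return ini_value


seen = []
print(resolve_junit_family(None, seen.append))      # xunit1 (with a warning)
print(resolve_junit_family("xunit2", seen.append))  # xunit2 (no warning)
print(len(seen))                                    # 1
```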
@@ -3,9 +3,14 @@ import logging
import re
from contextlib import contextmanager
from io import StringIO
from typing import AbstractSet
from typing import Dict
from typing import List
from typing import Mapping

import pytest
from _pytest.compat import nullcontext
from _pytest.config import _strtobool
from _pytest.config import create_terminal_writer
from _pytest.pathlib import Path

@@ -31,14 +36,15 @@ class ColoredLevelFormatter(logging.Formatter):
        logging.INFO: {"green"},
        logging.DEBUG: {"purple"},
        logging.NOTSET: set(),
    }
    }  # type: Mapping[int, AbstractSet[str]]
    LEVELNAME_FMT_REGEX = re.compile(r"%\(levelname\)([+-.]?\d*s)")

    def __init__(self, terminalwriter, *args, **kwargs):
    def __init__(self, terminalwriter, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self._original_fmt = self._style._fmt
        self._level_to_fmt_mapping = {}
        self._level_to_fmt_mapping = {}  # type: Dict[int, str]

        assert self._fmt is not None
        levelname_fmt_match = self.LEVELNAME_FMT_REGEX.search(self._fmt)
        if not levelname_fmt_match:
            return
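`LEVELNAME_FMT_REGEX` (a context line above) is what lets `ColoredLevelFormatter` locate the `%(levelname)s` conversion, including any width or precision modifier, so that only that field gets wrapped in color markup:

```python
import re

# same pattern as the class attribute above
LEVELNAME_FMT_REGEX = re.compile(r"%\(levelname\)([+-.]?\d*s)")

fmt = "%(asctime)s %(levelname)-8s %(message)s"
match = LEVELNAME_FMT_REGEX.search(fmt)
# the whole conversion is matched, so it can be swapped for a
# per-level colorized variant while the rest of the format survives
print(match.group())  # %(levelname)-8s
```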
@@ -71,23 +77,86 @@ class PercentStyleMultiline(logging.PercentStyle):
    formats the message as if each line were logged separately.
    """

    def __init__(self, fmt, auto_indent):
        super().__init__(fmt)
        self._auto_indent = self._get_auto_indent(auto_indent)

    @staticmethod
    def _update_message(record_dict, message):
        tmp = record_dict.copy()
        tmp["message"] = message
        return tmp

    @staticmethod
    def _get_auto_indent(auto_indent_option) -> int:
        """Determines the current auto indentation setting

        Specify auto indent behavior (on/off/fixed) by passing in
        extra={"auto_indent": [value]} to the call to logging.log() or
        using a --log-auto-indent [value] command line or the
        log_auto_indent [value] config option.

        Default behavior is auto-indent off.

        Using the string "True" or "on" or the boolean True as the value
        turns auto indent on, using the string "False" or "off" or the
        boolean False or the int 0 turns it off, and specifying a
        positive integer fixes the indentation position to the value
        specified.

        Any other values for the option are invalid, and will silently be
        converted to the default.

        :param any auto_indent_option: User specified option for indentation
            from command line, config or extra kwarg. Accepts int, bool or str.
            str option accepts the same range of values as boolean config options,
            as well as positive integers represented in str form.

        :returns: indentation value, which can be
            -1 (automatically determine indentation) or
            0 (auto-indent turned off) or
            >0 (explicitly set indentation position).
        """

        if type(auto_indent_option) is int:
            return int(auto_indent_option)
        elif type(auto_indent_option) is str:
            try:
                return int(auto_indent_option)
            except ValueError:
                pass
            try:
                if _strtobool(auto_indent_option):
                    return -1
            except ValueError:
                return 0
        elif type(auto_indent_option) is bool:
            if auto_indent_option:
                return -1

        return 0

    def format(self, record):
        if "\n" in record.message:
            if hasattr(record, "auto_indent"):
                # passed in from the "extra={}" kwarg on the call to logging.log()
                auto_indent = self._get_auto_indent(record.auto_indent)
            else:
                auto_indent = self._auto_indent

            if auto_indent:
                lines = record.message.splitlines()
                formatted = self._fmt % self._update_message(record.__dict__, lines[0])
                # TODO optimize this by introducing an option that tells the
                # logging framework that the indentation doesn't
                # change. This allows to compute the indentation only once.
                indentation = _remove_ansi_escape_sequences(formatted).find(lines[0])

                if auto_indent < 0:
                    indentation = _remove_ansi_escape_sequences(formatted).find(
                        lines[0]
                    )
                else:
                    # optimizes logging by allowing a fixed indentation
                    indentation = auto_indent
                lines[0] = formatted
                return ("\n" + " " * indentation).join(lines)
        else:
            return self._fmt % record.__dict__
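`_get_auto_indent` above collapses ints, strings, and booleans into a three-way indent setting. A standalone mirror of that logic (with a local `strtobool` standing in for `_pytest.config._strtobool`):

```python
def strtobool(val):
    # stand-in for _pytest.config._strtobool (distutils-style truthiness)
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return 1
    if val in ("n", "no", "f", "false", "off", "0"):
        return 0
    raise ValueError(val)


def get_auto_indent(option):
    """-1 = auto-detect indentation, 0 = off, >0 = fixed column."""
    if type(option) is int:
        return int(option)
    elif type(option) is str:
        try:
            return int(option)  # "8" -> fixed indent of 8
        except ValueError:
            pass
        try:
            if strtobool(option):
                return -1
        except ValueError:
            return 0  # invalid strings silently disable auto-indent
    elif type(option) is bool:
        if option:
            return -1

    return 0


print(get_auto_indent("on"), get_auto_indent("8"), get_auto_indent("junk"))
# -1 8 0
```

Note the `type(option) is int` check deliberately excludes `bool` (a subclass of `int`), so `True` reaches the boolean branch and maps to `-1` rather than `1`.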
@@ -182,6 +251,12 @@ def pytest_addoption(parser):
        default=DEFAULT_LOG_DATE_FORMAT,
        help="log date format as used by the logging module.",
    )
    add_option_ini(
        "--log-auto-indent",
        dest="log_auto_indent",
        default=None,
        help="Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.",
    )


@contextmanager

@@ -215,17 +290,17 @@ def catching_logs(handler, formatter=None, level=None):
class LogCaptureHandler(logging.StreamHandler):
    """A logging handler that stores log records and the log text."""

    def __init__(self):
    def __init__(self) -> None:
        """Creates a new log handler."""
        logging.StreamHandler.__init__(self, StringIO())
        self.records = []
        self.records = []  # type: List[logging.LogRecord]

    def emit(self, record):
    def emit(self, record: logging.LogRecord) -> None:
        """Keep the log records in a list in addition to the log text."""
        self.records.append(record)
        logging.StreamHandler.emit(self, record)

    def reset(self):
    def reset(self) -> None:
        self.records = []
        self.stream = StringIO()

@@ -233,13 +308,13 @@ class LogCaptureHandler(logging.StreamHandler):
class LogCaptureFixture:
    """Provides access and control of log capturing."""

    def __init__(self, item):
    def __init__(self, item) -> None:
        """Creates a new funcarg."""
        self._item = item
        # dict of log name -> log level
        self._initial_log_levels = {}  # Dict[str, int]
        self._initial_log_levels = {}  # type: Dict[str, int]

    def _finalize(self):
    def _finalize(self) -> None:
        """Finalizes the fixture.

        This restores the log levels changed by :meth:`set_level`.

@@ -413,6 +488,7 @@ class LoggingPlugin:
        self.formatter = self._create_formatter(
            get_option_ini(config, "log_format"),
            get_option_ini(config, "log_date_format"),
            get_option_ini(config, "log_auto_indent"),
        )
        self.log_level = get_actual_log_level(config, "log_level")

@@ -444,7 +520,7 @@ class LoggingPlugin:
        if self._log_cli_enabled():
            self._setup_cli_logging()

    def _create_formatter(self, log_format, log_date_format):
    def _create_formatter(self, log_format, log_date_format, auto_indent):
        # color option doesn't exist if terminal plugin is disabled
        color = getattr(self._config.option, "color", "no")
        if color != "no" and ColoredLevelFormatter.LEVELNAME_FMT_REGEX.search(

@@ -452,11 +528,14 @@ class LoggingPlugin:
        ):
            formatter = ColoredLevelFormatter(
                create_terminal_writer(self._config), log_format, log_date_format
|
||||
)
|
||||
) # type: logging.Formatter
|
||||
else:
|
||||
formatter = logging.Formatter(log_format, log_date_format)
|
||||
|
||||
formatter._style = PercentStyleMultiline(formatter._style._fmt)
|
||||
formatter._style = PercentStyleMultiline(
|
||||
formatter._style._fmt, auto_indent=auto_indent
|
||||
)
|
||||
|
||||
return formatter
|
||||
|
||||
def _setup_cli_logging(self):
|
||||
|
@ -473,6 +552,7 @@ class LoggingPlugin:
|
|||
log_cli_formatter = self._create_formatter(
|
||||
get_option_ini(config, "log_cli_format", "log_format"),
|
||||
get_option_ini(config, "log_cli_date_format", "log_date_format"),
|
||||
get_option_ini(config, "log_auto_indent"),
|
||||
)
|
||||
|
||||
log_cli_level = get_actual_log_level(config, "log_cli_level", "log_level")
|
||||
|
|
|
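The `--log-auto-indent` option above accepts `true|on`, `false|off` or an integer. A minimal standalone sketch of how such a value can be normalized and applied, assuming the convention visible in the diff (a negative sentinel means "compute the indent from the formatted prefix", a positive integer is a fixed indent, zero disables it); the helper names here are ours, not pytest's:

```python
def parse_auto_indent(value) -> int:
    """Normalize an auto-indent setting to an int.

    >0: fixed indentation width
    -1: indent to the width of the formatted message prefix
     0: no special indentation
    """
    if isinstance(value, bool):  # bool must be checked before int (True is an int)
        return -1 if value else 0
    if isinstance(value, int):
        return value
    if isinstance(value, str):
        try:
            return int(value)  # e.g. "4"
        except ValueError:
            pass
        if value.lower() in ("on", "true"):
            return -1
        return 0
    return 0


def indent_multiline(prefix: str, message: str, auto_indent) -> str:
    """Join a multiline message, indenting continuation lines."""
    lines = message.splitlines()
    first = prefix + lines[0]
    indent = parse_auto_indent(auto_indent)
    if indent < 0:
        indent = len(prefix)  # align continuation lines under the first line's text
    return ("\n" + " " * indent).join([first] + lines[1:])
```

With `auto_indent=True`, a two-line message lines up under the log prefix; with an integer, the continuation lines get that fixed indent regardless of prefix length.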
@@ -5,6 +5,7 @@ import functools
import importlib
import os
import sys
from typing import Dict

import attr
import py

@@ -16,6 +17,7 @@ from _pytest.config import hookimpl
from _pytest.config import UsageError
from _pytest.outcomes import exit
from _pytest.runner import collect_one_node
from _pytest.runner import SetupState


class ExitCode(enum.IntEnum):

@@ -107,6 +109,7 @@ def pytest_addoption(parser):
    group.addoption(
        "--collectonly",
        "--collect-only",
        "--co",
        action="store_true",
        help="only collect tests, don't execute them.",
    ),

@@ -248,7 +251,10 @@ def pytest_collection(session):

def pytest_runtestloop(session):
    if session.testsfailed and not session.config.option.continue_on_collection_errors:
        raise session.Interrupted("%d errors during collection" % session.testsfailed)
        raise session.Interrupted(
            "%d error%s during collection"
            % (session.testsfailed, "s" if session.testsfailed != 1 else "")
        )

    if session.config.option.collectonly:
        return True

@@ -356,8 +362,8 @@ class Failed(Exception):
class _bestrelpath_cache(dict):
    path = attr.ib()

    def __missing__(self, path):
        r = self.path.bestrelpath(path)
    def __missing__(self, path: str) -> str:
        r = self.path.bestrelpath(path)  # type: str
        self[path] = r
        return r


@@ -365,6 +371,7 @@ class _bestrelpath_cache(dict):
class Session(nodes.FSCollector):
    Interrupted = Interrupted
    Failed = Failed
    _setupstate = None  # type: SetupState

    def __init__(self, config):
        nodes.FSCollector.__init__(

@@ -380,7 +387,9 @@ class Session(nodes.FSCollector):
        self._initialpaths = frozenset()
        # Keep track of any collected nodes in here, so we don't duplicate fixtures
        self._node_cache = {}
        self._bestrelpathcache = _bestrelpath_cache(config.rootdir)
        self._bestrelpathcache = _bestrelpath_cache(
            config.rootdir
        )  # type: Dict[str, str]
        # Dirnames of pkgs with dunder-init files.
        self._pkg_roots = {}

@@ -395,7 +404,7 @@ class Session(nodes.FSCollector):
            self.testscollected,
        )

    def _node_location_to_relpath(self, node_path):
    def _node_location_to_relpath(self, node_path: str) -> str:
        # bestrelpath is a quite slow function
        return self._bestrelpathcache[node_path]

@@ -468,7 +477,6 @@ class Session(nodes.FSCollector):
            for arg, exc in self._notfound:
                line = "(no name {!r} in any of {!r})".format(arg, exc.args[0])
                errors.append("not found: {}\n{}".format(arg, line))
                # XXX: test this
            raise UsageError(*errors)
        if not genitems:
            return rep.result

@@ -480,22 +488,22 @@

    def collect(self):
        for initialpart in self._initialparts:
            arg = "::".join(map(str, initialpart))
            self.trace("processing argument", arg)
            self.trace("processing argument", initialpart)
            self.trace.root.indent += 1
            try:
                yield from self._collect(arg)
                yield from self._collect(initialpart)
            except NoMatch:
                report_arg = "::".join(map(str, initialpart))
                # we are inside a make_report hook so
                # we cannot directly pass through the exception
                self._notfound.append((arg, sys.exc_info()[1]))
                self._notfound.append((report_arg, sys.exc_info()[1]))

            self.trace.root.indent -= 1

    def _collect(self, arg):
        from _pytest.python import Package

        names = self._parsearg(arg)
        names = arg[:]
        argpath = names.pop(0)

        # Start with a Session root, and delve to argpath item (dir or file)
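The `_bestrelpath_cache` change above types a `dict.__missing__` memoization pattern: a lookup that misses computes the value once ("bestrelpath is a quite slow function"), stores it, and returns it, so later lookups are plain dict hits. A standalone sketch of the same pattern, with `posixpath.relpath` standing in for the slow computation (class and attribute names here are illustrative):

```python
import posixpath


class BestRelPathCache(dict):
    """dict that computes and memoizes a value on first lookup."""

    def __init__(self, base: str) -> None:
        super().__init__()
        self.base = base

    def __missing__(self, path: str) -> str:
        # stand-in for the "quite slow" bestrelpath computation
        r = posixpath.relpath(path, self.base)
        self[path] = r
        return r


cache = BestRelPathCache("/repo")
cache["/repo/tests/test_x.py"]            # computed on first access
assert "/repo/tests/test_x.py" in cache   # memoized afterwards
```

`__missing__` is only consulted by `dict.__getitem__`, so `in`, `.get()` and friends never trigger the computation, which keeps the cache's behavior predictable.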
@@ -8,6 +8,7 @@ from .structures import MARK_GEN
from .structures import MarkDecorator
from .structures import MarkGenerator
from .structures import ParameterSet
from _pytest.config import hookimpl
from _pytest.config import UsageError

__all__ = ["Mark", "MarkDecorator", "MarkGenerator", "get_empty_parameterset_mark"]

@@ -74,6 +75,7 @@ def pytest_addoption(parser):
    parser.addini(EMPTY_PARAMETERSET_OPTION, "default marker for empty parametersets")


@hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
    import _pytest.config

@@ -91,10 +93,6 @@ def pytest_cmdline_main(config):
    return 0


# Ignore type because of https://github.com/python/mypy/issues/2087.
pytest_cmdline_main.tryfirst = True  # type: ignore


def deselect_by_keyword(items, config):
    keywordexpr = config.option.keyword.lstrip()
    if not keywordexpr:
@@ -2,7 +2,6 @@ import inspect
import warnings
from collections import namedtuple
from collections.abc import MutableMapping
from operator import attrgetter
from typing import Set

import attr

@@ -17,16 +16,6 @@ from _pytest.warning_types import PytestUnknownMarkWarning
EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark"


def alias(name, warning=None):
    getter = attrgetter(name)

    def warned(self):
        warnings.warn(warning, stacklevel=2)
        return getter(self)

    return property(getter if warning is None else warned, doc="alias for " + name)


def istestfunc(func):
    return (
        hasattr(func, "__call__")

@@ -205,17 +194,25 @@ class MarkDecorator:

    mark = attr.ib(validator=attr.validators.instance_of(Mark))

    name = alias("mark.name")
    args = alias("mark.args")
    kwargs = alias("mark.kwargs")
    @property
    def name(self):
        """alias for mark.name"""
        return self.mark.name

    @property
    def args(self):
        """alias for mark.args"""
        return self.mark.args

    @property
    def kwargs(self):
        """alias for mark.kwargs"""
        return self.mark.kwargs

    @property
    def markname(self):
        return self.name  # for backward-compat (2.4.1 had this attr)

    def __eq__(self, other):
        return self.mark == other.mark if isinstance(other, MarkDecorator) else False

    def __repr__(self):
        return "<MarkDecorator {!r}>".format(self.mark)

@@ -317,7 +314,12 @@ class MarkGenerator:
                "{!r} not found in `markers` configuration option".format(name),
                pytrace=False,
            )
        else:

            # Raise a specific error for common misspellings of "parametrize".
            if name in ["parameterize", "parametrise", "parameterise"]:
                __tracebackhide__ = True
                fail("Unknown '{}' mark, did you mean 'parametrize'?".format(name))

            warnings.warn(
                "Unknown pytest.mark.%s - is this a typo? You can register "
                "custom marks to avoid this warning - for details, see "
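The hunk above replaces the `attrgetter`-based `alias()` helper with plain properties, which static type checkers can follow. A side-by-side sketch of the two equivalent styles, using simplified stand-in classes (the names `Inner`, `OldStyle`, `NewStyle` are ours):

```python
from operator import attrgetter


def alias(name):
    # old style: a read-only property built from a dotted attrgetter
    return property(attrgetter(name), doc="alias for " + name)


class Inner:
    def __init__(self, name):
        self.name = name


class OldStyle:
    def __init__(self, inner):
        self.inner = inner

    name = alias("inner.name")        # opaque to mypy


class NewStyle:
    def __init__(self, inner):
        self.inner = inner

    @property
    def name(self):
        """alias for inner.name"""
        return self.inner.name        # explicit, type-checkable


assert OldStyle(Inner("slow")).name == NewStyle(Inner("slow")).name == "slow"
```

Both produce identical runtime behavior; the property form simply trades three lines of brevity for attribute access that tools can resolve.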
@@ -4,6 +4,7 @@ from functools import lru_cache
from typing import Any
from typing import Dict
from typing import List
from typing import Optional
from typing import Set
from typing import Tuple
from typing import Union

@@ -11,15 +12,23 @@ from typing import Union
import py

import _pytest._code
from _pytest._code.code import ExceptionChainRepr
from _pytest._code.code import ExceptionInfo
from _pytest._code.code import ReprExceptionInfo
from _pytest.compat import cached_property
from _pytest.compat import getfslineno
from _pytest.config import Config
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import FixtureLookupError
from _pytest.fixtures import FixtureLookupErrorRepr
from _pytest.mark.structures import Mark
from _pytest.mark.structures import MarkDecorator
from _pytest.mark.structures import NodeKeywords
from _pytest.outcomes import fail
from _pytest.outcomes import Failed

if False:  # TYPE_CHECKING
    # Imported here due to circular import.
    from _pytest.fixtures import FixtureDef
    from _pytest.main import Session  # noqa: F401

SEP = "/"

@@ -69,8 +78,14 @@ class Node:
    Collector subclasses have children, Items are terminal nodes."""

    def __init__(
        self, name, parent=None, config=None, session=None, fspath=None, nodeid=None
    ):
        self,
        name,
        parent: Optional["Node"] = None,
        config: Optional[Config] = None,
        session: Optional["Session"] = None,
        fspath: Optional[py.path.local] = None,
        nodeid: Optional[str] = None,
    ) -> None:
        #: a unique name within the scope of the parent node
        self.name = name

@@ -78,10 +93,20 @@ class Node:
        self.parent = parent

        #: the pytest config object
        self.config = config or parent.config
        if config:
            self.config = config
        else:
            if not parent:
                raise TypeError("config or parent must be provided")
            self.config = parent.config

        #: the session this node is part of
        self.session = session or parent.session
        if session:
            self.session = session
        else:
            if not parent:
                raise TypeError("session or parent must be provided")
            self.session = parent.session

        #: filesystem path where this node was collected from (can be None)
        self.fspath = fspath or getattr(parent, "fspath", None)

@@ -102,6 +127,8 @@ class Node:
            assert "::()" not in nodeid
            self._nodeid = nodeid
        else:
            if not self.parent:
                raise TypeError("nodeid or parent must be provided")
            self._nodeid = self.parent.nodeid
            if self.name != "()":
                self._nodeid += "::" + self.name

@@ -139,8 +166,7 @@ class Node:
            )
        )
        path, lineno = get_fslocation_from_item(self)
        # Type ignored: https://github.com/python/typeshed/pull/3121
        warnings.warn_explicit(  # type: ignore
        warnings.warn_explicit(
            warning,
            category=None,
            filename=str(path),

@@ -166,7 +192,7 @@ class Node:
        """ return list of all parent collectors up to self,
            starting from root of collection tree. """
        chain = []
        item = self
        item = self  # type: Optional[Node]
        while item is not None:
            chain.append(item)
            item = item.parent

@@ -247,7 +273,7 @@ class Node:
    def getparent(self, cls):
        """ get the next parent node (including ourself)
        which is an instance of the given class"""
        current = self
        current = self  # type: Optional[Node]
        while current and not isinstance(current, cls):
            current = current.parent
        return current

@@ -255,13 +281,13 @@ class Node:
    def _prunetraceback(self, excinfo):
        pass

    def _repr_failure_py(self, excinfo, style=None):
        # Type ignored: see comment where fail.Exception is defined.
        if excinfo.errisinstance(fail.Exception):  # type: ignore
    def _repr_failure_py(
        self, excinfo: ExceptionInfo[Union[Failed, FixtureLookupError]], style=None
    ) -> Union[str, ReprExceptionInfo, ExceptionChainRepr, FixtureLookupErrorRepr]:
        if isinstance(excinfo.value, Failed):
            if not excinfo.value.pytrace:
                return str(excinfo.value)
        fm = self.session._fixturemanager
        if excinfo.errisinstance(fm.FixtureLookupError):
        if isinstance(excinfo.value, FixtureLookupError):
            return excinfo.value.formatrepr()
        if self.config.getoption("fulltrace", False):
            style = "long"

@@ -299,7 +325,9 @@ class Node:
            truncate_locals=truncate_locals,
        )

    def repr_failure(self, excinfo, style=None):
    def repr_failure(
        self, excinfo, style=None
    ) -> Union[str, ReprExceptionInfo, ExceptionChainRepr, FixtureLookupErrorRepr]:
        return self._repr_failure_py(excinfo, style)


@@ -365,8 +393,9 @@ def _check_initialpaths_for_relpath(session, fspath):

class FSCollector(Collector):
    def __init__(self, fspath, parent=None, config=None, session=None, nodeid=None):
        fspath = py.path.local(fspath)  # xxx only for test_resultlog.py?
    def __init__(
        self, fspath: py.path.local, parent=None, config=None, session=None, nodeid=None
    ) -> None:
        name = fspath.basename
        if parent is not None:
            rel = fspath.relto(parent.fspath)

@@ -426,16 +455,12 @@ class Item(Node):
        if content:
            self._report_sections.append((when, key, content))

    def reportinfo(self):
    def reportinfo(self) -> Tuple[str, Optional[int], str]:
        return self.fspath, None, ""

    @property
    def location(self):
        try:
            return self._location
        except AttributeError:
    @cached_property
    def location(self) -> Tuple[str, Optional[int], str]:
        location = self.reportinfo()
        fspath = self.session._node_location_to_relpath(location[0])
        location = (fspath, location[1], str(location[2]))
        self._location = location
        return location
        assert type(location[2]) is str
        return (fspath, location[1], location[2])
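The `Node.__init__` hunk above replaces `config or parent.config` (which would crash with a bare `AttributeError` when both arguments are missing) with an explicit `TypeError`. A minimal standalone sketch of that fallback pattern, using a simplified stand-in class (not pytest's actual `Node`):

```python
class Node:
    def __init__(self, name, parent=None, config=None):
        self.name = name
        self.parent = parent
        if config:
            self.config = config
        else:
            # fail loudly and early instead of AttributeError on parent.config
            if not parent:
                raise TypeError("config or parent must be provided")
            self.config = parent.config


root = Node("root", config={"rootdir": "/repo"})
child = Node("child", parent=root)
assert child.config is root.config  # inherited from the parent node
```

The same shape is applied to `session` and `nodeid` in the diff: each attribute either comes from the keyword argument or is inherited from `parent`, and a missing `parent` turns into a descriptive `TypeError` at construction time.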
@@ -1,7 +1,6 @@
import atexit
import fnmatch
import itertools
import operator
import os
import shutil
import sys

@@ -13,6 +12,11 @@ from os.path import expandvars
from os.path import isabs
from os.path import sep
from posixpath import sep as posix_sep
from typing import Iterable
from typing import Iterator
from typing import Set
from typing import TypeVar
from typing import Union

from _pytest.warning_types import PytestWarning

@@ -26,10 +30,15 @@ __all__ = ["Path", "PurePath"]

LOCK_TIMEOUT = 60 * 60 * 3

get_lock_path = operator.methodcaller("joinpath", ".lock")

_AnyPurePath = TypeVar("_AnyPurePath", bound=PurePath)


def ensure_reset_dir(path):
def get_lock_path(path: _AnyPurePath) -> _AnyPurePath:
    return path.joinpath(".lock")


def ensure_reset_dir(path: Path) -> None:
    """
    ensures the given path is an empty directory
    """

@@ -38,7 +47,7 @@ def ensure_reset_dir(path):
    path.mkdir()


def on_rm_rf_error(func, path: str, exc, *, start_path) -> bool:
def on_rm_rf_error(func, path: str, exc, *, start_path: Path) -> bool:
    """Handles known read-only errors during rmtree.

    The returned value is used only by our own tests.

@@ -59,10 +68,11 @@ def on_rm_rf_error(func, path: str, exc, *, start_path) -> bool:
        return False

    if func not in (os.rmdir, os.remove, os.unlink):
    if func not in (os.open,):
        warnings.warn(
            PytestWarning(
                "(rm_rf) unknown function {} when removing {}:\n{}: {}".format(
                    path, func, exctype, excvalue
                    func, path, exctype, excvalue
                )
            )
        )

@@ -71,7 +81,7 @@ def on_rm_rf_error(func, path: str, exc, *, start_path) -> bool:
    # Chmod + retry.
    import stat

    def chmod_rw(p: str):
    def chmod_rw(p: str) -> None:
        mode = os.stat(p).st_mode
        os.chmod(p, mode | stat.S_IRUSR | stat.S_IWUSR)

@@ -90,7 +100,7 @@ def on_rm_rf_error(func, path: str, exc, *, start_path) -> bool:
    return True


def rm_rf(path: Path):
def rm_rf(path: Path) -> None:
    """Remove the path contents recursively, even if some elements
    are read-only.
    """

@@ -98,7 +108,7 @@ def rm_rf(path: Path):
    shutil.rmtree(str(path), onerror=onerror)


def find_prefixed(root, prefix):
def find_prefixed(root: Path, prefix: str) -> Iterator[Path]:
    """finds all elements in root that begin with the prefix, case insensitive"""
    l_prefix = prefix.lower()
    for x in root.iterdir():

@@ -106,7 +116,7 @@ def find_prefixed(root, prefix):
            yield x


def extract_suffixes(iter, prefix):
def extract_suffixes(iter: Iterable[PurePath], prefix: str) -> Iterator[str]:
    """
    :param iter: iterator over path names
    :param prefix: expected prefix of the path names

@@ -117,13 +127,13 @@ def extract_suffixes(iter, prefix):
        yield p.name[p_len:]


def find_suffixes(root, prefix):
def find_suffixes(root: Path, prefix: str) -> Iterator[str]:
    """combines find_prefixes and extract_suffixes
    """
    return extract_suffixes(find_prefixed(root, prefix), prefix)


def parse_num(maybe_num):
def parse_num(maybe_num) -> int:
    """parses number path suffixes, returns -1 on error"""
    try:
        return int(maybe_num)

@@ -131,7 +141,9 @@ def parse_num(maybe_num):
        return -1


def _force_symlink(root, target, link_to):
def _force_symlink(
    root: Path, target: Union[str, PurePath], link_to: Union[str, Path]
) -> None:
    """helper to create the current symlink

    it's full of race conditions that are reasonably ok to ignore

@@ -151,7 +163,7 @@ def _force_symlink(root, target, link_to):
        pass


def make_numbered_dir(root, prefix):
def make_numbered_dir(root: Path, prefix: str) -> Path:
    """create a directory with an increased number as suffix for the given prefix"""
    for i in range(10):
        # try up to 10 times to create the folder

@@ -172,7 +184,7 @@ def make_numbered_dir(root, prefix):
    )


def create_cleanup_lock(p):
def create_cleanup_lock(p: Path) -> Path:
    """crates a lock to prevent premature folder cleanup"""
    lock_path = get_lock_path(p)
    try:

@@ -189,11 +201,11 @@ def create_cleanup_lock(p):
    return lock_path


def register_cleanup_lock_removal(lock_path, register=atexit.register):
def register_cleanup_lock_removal(lock_path: Path, register=atexit.register):
    """registers a cleanup function for removing a lock, by default on atexit"""
    pid = os.getpid()

    def cleanup_on_exit(lock_path=lock_path, original_pid=pid):
    def cleanup_on_exit(lock_path: Path = lock_path, original_pid: int = pid) -> None:
        current_pid = os.getpid()
        if current_pid != original_pid:
            # fork

@@ -206,7 +218,7 @@ def register_cleanup_lock_removal(lock_path, register=atexit.register):
    return register(cleanup_on_exit)


def maybe_delete_a_numbered_dir(path):
def maybe_delete_a_numbered_dir(path: Path) -> None:
    """removes a numbered directory if its lock can be obtained and it does not seem to be in use"""
    lock_path = None
    try:

@@ -232,7 +244,7 @@ def maybe_delete_a_numbered_dir(path):
        pass


def ensure_deletable(path, consider_lock_dead_if_created_before):
def ensure_deletable(path: Path, consider_lock_dead_if_created_before: float) -> bool:
    """checks if a lock exists and breaks it if its considered dead"""
    if path.is_symlink():
        return False

@@ -251,13 +263,13 @@ def ensure_deletable(path, consider_lock_dead_if_created_before):
    return False


def try_cleanup(path, consider_lock_dead_if_created_before):
def try_cleanup(path: Path, consider_lock_dead_if_created_before: float) -> None:
    """tries to cleanup a folder if we can ensure it's deletable"""
    if ensure_deletable(path, consider_lock_dead_if_created_before):
        maybe_delete_a_numbered_dir(path)


def cleanup_candidates(root, prefix, keep):
def cleanup_candidates(root: Path, prefix: str, keep: int) -> Iterator[Path]:
    """lists candidates for numbered directories to be removed - follows py.path"""
    max_existing = max(map(parse_num, find_suffixes(root, prefix)), default=-1)
    max_delete = max_existing - keep

@@ -269,7 +281,9 @@ def cleanup_candidates(root, prefix, keep):
            yield path


def cleanup_numbered_dir(root, prefix, keep, consider_lock_dead_if_created_before):
def cleanup_numbered_dir(
    root: Path, prefix: str, keep: int, consider_lock_dead_if_created_before: float
) -> None:
    """cleanup for lock driven numbered directories"""
    for path in cleanup_candidates(root, prefix, keep):
        try_cleanup(path, consider_lock_dead_if_created_before)

@@ -277,7 +291,9 @@ def cleanup_numbered_dir(root, prefix, keep, consider_lock_dead_if_created_befor
        try_cleanup(path, consider_lock_dead_if_created_before)


def make_numbered_dir_with_cleanup(root, prefix, keep, lock_timeout):
def make_numbered_dir_with_cleanup(
    root: Path, prefix: str, keep: int, lock_timeout: float
) -> Path:
    """creates a numbered dir with a cleanup lock and removes old ones"""
    e = None
    for i in range(10):

@@ -311,7 +327,7 @@ def resolve_from_str(input, root):
    return root.joinpath(input)


def fnmatch_ex(pattern, path):
def fnmatch_ex(pattern: str, path) -> bool:
    """FNMatcher port from py.path.common which works with PurePath() instances.

    The difference between this algorithm and PurePath.match() is that the latter matches "**" glob expressions

@@ -346,6 +362,6 @@ def fnmatch_ex(pattern, path):
    return fnmatch.fnmatch(name, pattern)


def parts(s):
def parts(s: str) -> Set[str]:
    parts = s.split(sep)
    return {sep.join(parts[: i + 1]) or sep for i in range(len(parts))}
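The typed helpers above implement pytest's numbered-directory scheme (directories named like `pytest-0`, `pytest-1`, ...). A standalone sketch of the suffix parsing and "next number" logic, assuming the same prefix+number naming; `parse_num` returns `-1` for anything non-numeric, so malformed entries (such as a `-current` symlink) are simply ignored by `max()`:

```python
def parse_num(maybe_num) -> int:
    """parses number path suffixes, returns -1 on error"""
    try:
        return int(maybe_num)
    except ValueError:
        return -1


def next_dir_number(existing_names, prefix: str) -> int:
    """Next free suffix for a numbered directory (case-insensitive prefix match)."""
    suffixes = (
        name[len(prefix):]
        for name in existing_names
        if name.lower().startswith(prefix.lower())
    )
    return max(map(parse_num, suffixes), default=-1) + 1


names = ["pytest-0", "pytest-1", "pytest-current", "garbage"]
assert next_dir_number(names, "pytest-") == 2  # "current" parses to -1
```

The `default=-1` on `max()` makes the scheme start at `0` in an empty root, and the same `max - keep` arithmetic drives `cleanup_candidates` when old directories are pruned.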
@@ -1,4 +1,5 @@
"""(disabled by default) support for testing pytest and pytest plugins."""
import collections.abc
import gc
import importlib
import os

@@ -8,9 +9,16 @@ import subprocess
import sys
import time
import traceback
from collections.abc import Sequence
from fnmatch import fnmatch
from io import StringIO
from typing import Callable
from typing import Dict
from typing import Iterable
from typing import List
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Union
from weakref import WeakKeyDictionary

import py

@@ -20,10 +28,16 @@ from _pytest._code import Source
from _pytest._io.saferepr import saferepr
from _pytest.capture import MultiCapture
from _pytest.capture import SysCapture
from _pytest.fixtures import FixtureRequest
from _pytest.main import ExitCode
from _pytest.main import Session
from _pytest.monkeypatch import MonkeyPatch
from _pytest.pathlib import Path
from _pytest.reports import TestReport

if False:  # TYPE_CHECKING
    from typing import Type


IGNORE_PAM = [  # filenames added when obtaining details about the current user
    "/var/lib/sss/mc/passwd"

@@ -141,7 +155,7 @@ class LsofFdLeakChecker:


@pytest.fixture
def _pytest(request):
def _pytest(request: FixtureRequest) -> "PytestArg":
    """Return a helper which offers a gethookrecorder(hook) method which
    returns a HookRecorder instance which helps to make assertions about called
    hooks.

@@ -151,10 +165,10 @@ def _pytest(request):


class PytestArg:
    def __init__(self, request):
    def __init__(self, request: FixtureRequest) -> None:
        self.request = request

    def gethookrecorder(self, hook):
    def gethookrecorder(self, hook) -> "HookRecorder":
        hookrecorder = HookRecorder(hook._pm)
        self.request.addfinalizer(hookrecorder.finish_recording)
        return hookrecorder

@@ -175,6 +189,11 @@ class ParsedCall:
        del d["_name"]
        return "<ParsedCall {!r}(**{!r})>".format(self._name, d)

    if False:  # TYPE_CHECKING
        # The class has undetermined attributes, this tells mypy about it.
        def __getattr__(self, key):
            raise NotImplementedError()


class HookRecorder:
    """Record all hooks called in a plugin manager.

@@ -184,27 +203,27 @@ class HookRecorder:

    """

    def __init__(self, pluginmanager):
    def __init__(self, pluginmanager) -> None:
        self._pluginmanager = pluginmanager
        self.calls = []
        self.calls = []  # type: List[ParsedCall]

        def before(hook_name, hook_impls, kwargs):
        def before(hook_name: str, hook_impls, kwargs) -> None:
            self.calls.append(ParsedCall(hook_name, kwargs))

        def after(outcome, hook_name, hook_impls, kwargs):
        def after(outcome, hook_name: str, hook_impls, kwargs) -> None:
            pass

        self._undo_wrapping = pluginmanager.add_hookcall_monitoring(before, after)

    def finish_recording(self):
    def finish_recording(self) -> None:
        self._undo_wrapping()

    def getcalls(self, names):
    def getcalls(self, names: Union[str, Iterable[str]]) -> List[ParsedCall]:
        if isinstance(names, str):
            names = names.split()
        return [call for call in self.calls if call._name in names]

    def assert_contains(self, entries):
    def assert_contains(self, entries) -> None:
        __tracebackhide__ = True
        i = 0
        entries = list(entries)

@@ -225,7 +244,7 @@ class HookRecorder:
        else:
            pytest.fail("could not find {!r} check {!r}".format(name, check))

    def popcall(self, name):
    def popcall(self, name: str) -> ParsedCall:
        __tracebackhide__ = True
        for i, call in enumerate(self.calls):
            if call._name == name:

@@ -235,20 +254,27 @@ class HookRecorder:
        lines.extend([" %s" % x for x in self.calls])
        pytest.fail("\n".join(lines))

    def getcall(self, name):
    def getcall(self, name: str) -> ParsedCall:
        values = self.getcalls(name)
        assert len(values) == 1, (name, values)
        return values[0]

    # functionality for test reports

    def getreports(self, names="pytest_runtest_logreport pytest_collectreport"):
    def getreports(
        self,
        names: Union[
            str, Iterable[str]
        ] = "pytest_runtest_logreport pytest_collectreport",
    ) -> List[TestReport]:
        return [x.report for x in self.getcalls(names)]

    def matchreport(
        self,
        inamepart="",
        names="pytest_runtest_logreport pytest_collectreport",
        inamepart: str = "",
        names: Union[
            str, Iterable[str]
        ] = "pytest_runtest_logreport pytest_collectreport",
        when=None,
    ):
        """return a testreport whose dotted import path matches"""

@@ -274,13 +300,20 @@ class HookRecorder:
            )
        return values[0]

    def getfailures(self, names="pytest_runtest_logreport pytest_collectreport"):
    def getfailures(
        self,
        names: Union[
            str, Iterable[str]
        ] = "pytest_runtest_logreport pytest_collectreport",
    ) -> List[TestReport]:
        return [rep for rep in self.getreports(names) if rep.failed]

    def getfailedcollections(self):
    def getfailedcollections(self) -> List[TestReport]:
        return self.getfailures("pytest_collectreport")

    def listoutcomes(self):
    def listoutcomes(
        self,
    ) -> Tuple[List[TestReport], List[TestReport], List[TestReport]]:
        passed = []
        skipped = []
        failed = []

@@ -295,31 +328,38 @@ class HookRecorder:
                failed.append(rep)
        return passed, skipped, failed

    def countoutcomes(self):
    def countoutcomes(self) -> List[int]:
        return [len(x) for x in self.listoutcomes()]

    def assertoutcome(self, passed=0, skipped=0, failed=0):
        realpassed, realskipped, realfailed = self.listoutcomes()
        assert passed == len(realpassed)
        assert skipped == len(realskipped)
        assert failed == len(realfailed)
    def assertoutcome(self, passed: int = 0, skipped: int = 0, failed: int = 0) -> None:
        __tracebackhide__ = True

    def clear(self):
        outcomes = self.listoutcomes()
        realpassed, realskipped, realfailed = outcomes
        obtained = {
            "passed": len(realpassed),
            "skipped": len(realskipped),
            "failed": len(realfailed),
        }
        expected = {"passed": passed, "skipped": skipped, "failed": failed}
        assert obtained == expected, outcomes

    def clear(self) -> None:
        self.calls[:] = []


@pytest.fixture
def linecomp(request):
def linecomp(request: FixtureRequest) -> "LineComp":
    return LineComp()


@pytest.fixture(name="LineMatcher")
def LineMatcher_fixture(request):
def LineMatcher_fixture(request: FixtureRequest) -> "Type[LineMatcher]":
    return LineMatcher


@pytest.fixture
def testdir(request, tmpdir_factory):
def testdir(request: FixtureRequest, tmpdir_factory) -> "Testdir":
    return Testdir(request, tmpdir_factory)


@@ -362,7 +402,16 @@ class RunResult:
    :ivar duration: duration in seconds
    """

    def __init__(self, ret, outlines, errlines, duration):
    def __init__(
        self,
        ret: Union[int, ExitCode],
        outlines: Sequence[str],
        errlines: Sequence[str],
        duration: float,
    ) -> None:
        try:
            self.ret = pytest.ExitCode(ret)  # type: Union[int, ExitCode]
        except ValueError:
            self.ret = ret
        self.outlines = outlines
        self.errlines = errlines

@@ -370,13 +419,13 @@ class RunResult:
        self.stderr = LineMatcher(errlines)
        self.duration = duration

    def __repr__(self):
    def __repr__(self) -> str:
        return (
            "<RunResult ret=%r len(stdout.lines)=%d len(stderr.lines)=%d duration=%.2fs>"
            "<RunResult ret=%s len(stdout.lines)=%d len(stderr.lines)=%d duration=%.2fs>"
            % (self.ret, len(self.stdout.lines), len(self.stderr.lines), self.duration)
        )

    def parseoutcomes(self):
    def parseoutcomes(self) -> Dict[str, int]:
        """Return a dictionary of outcomestring->num from parsing the terminal
|
||||
output that the test process produced.
|
||||
|
||||
|
@ -389,12 +438,19 @@ class RunResult:
|
|||
raise ValueError("Pytest terminal summary report not found")
|
||||
|
||||
def assert_outcomes(
|
||||
self, passed=0, skipped=0, failed=0, error=0, xpassed=0, xfailed=0
|
||||
):
|
||||
self,
|
||||
passed: int = 0,
|
||||
skipped: int = 0,
|
||||
failed: int = 0,
|
||||
error: int = 0,
|
||||
xpassed: int = 0,
|
||||
xfailed: int = 0,
|
||||
) -> None:
|
||||
"""Assert that the specified outcomes appear with the respective
|
||||
numbers (0 means it didn't occur) in the text output from a test run.
|
||||
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
|
||||
d = self.parseoutcomes()
|
||||
obtained = {
|
||||
"passed": d.get("passed", 0),
|
||||
|
@ -416,19 +472,19 @@ class RunResult:
|
|||
|
||||
|
||||
class CwdSnapshot:
|
||||
def __init__(self):
|
||||
def __init__(self) -> None:
|
||||
self.__saved = os.getcwd()
|
||||
|
||||
def restore(self):
|
||||
def restore(self) -> None:
|
||||
os.chdir(self.__saved)
|
||||
|
||||
|
||||
class SysModulesSnapshot:
|
||||
def __init__(self, preserve=None):
|
||||
def __init__(self, preserve: Optional[Callable[[str], bool]] = None):
|
||||
self.__preserve = preserve
|
||||
self.__saved = dict(sys.modules)
|
||||
|
||||
def restore(self):
|
||||
def restore(self) -> None:
|
||||
if self.__preserve:
|
||||
self.__saved.update(
|
||||
(k, m) for k, m in sys.modules.items() if self.__preserve(k)
|
||||
|
@ -438,10 +494,10 @@ class SysModulesSnapshot:
|
|||
|
||||
|
||||
class SysPathsSnapshot:
|
||||
def __init__(self):
|
||||
def __init__(self) -> None:
|
||||
self.__saved = list(sys.path), list(sys.meta_path)
|
||||
|
||||
def restore(self):
|
||||
def restore(self) -> None:
|
||||
sys.path[:], sys.meta_path[:] = self.__saved
|
||||
|
||||
|
||||
|
@ -480,11 +536,7 @@ class Testdir:
|
|||
self._sys_modules_snapshot = self.__take_sys_modules_snapshot()
|
||||
self.chdir()
|
||||
self.request.addfinalizer(self.finalize)
|
||||
method = self.request.config.getoption("--runpytest")
|
||||
if method == "inprocess":
|
||||
self._runpytest_method = self.runpytest_inprocess
|
||||
elif method == "subprocess":
|
||||
self._runpytest_method = self.runpytest_subprocess
|
||||
self._method = self.request.config.getoption("--runpytest")
|
||||
|
||||
mp = self.monkeypatch = MonkeyPatch()
|
||||
mp.setenv("PYTEST_DEBUG_TEMPROOT", str(self.test_tmproot))
|
||||
|
@ -832,7 +884,7 @@ class Testdir:
|
|||
reprec = rec.pop()
|
||||
else:
|
||||
|
||||
class reprec:
|
||||
class reprec: # type: ignore
|
||||
pass
|
||||
|
||||
reprec.ret = ret
|
||||
|
@ -848,7 +900,7 @@ class Testdir:
|
|||
for finalizer in finalizers:
|
||||
finalizer()
|
||||
|
||||
def runpytest_inprocess(self, *args, **kwargs):
|
||||
def runpytest_inprocess(self, *args, **kwargs) -> RunResult:
|
||||
"""Return result of running pytest in-process, providing a similar
|
||||
interface to what self.runpytest() provides.
|
||||
"""
|
||||
|
@ -863,15 +915,20 @@ class Testdir:
|
|||
try:
|
||||
reprec = self.inline_run(*args, **kwargs)
|
||||
except SystemExit as e:
|
||||
|
||||
class reprec:
|
||||
ret = e.args[0]
|
||||
try:
|
||||
ret = ExitCode(e.args[0])
|
||||
except ValueError:
|
||||
pass
|
||||
|
||||
class reprec: # type: ignore
|
||||
ret = ret
|
||||
|
||||
except Exception:
|
||||
traceback.print_exc()
|
||||
|
||||
class reprec:
|
||||
ret = 3
|
||||
class reprec: # type: ignore
|
||||
ret = ExitCode(3)
|
||||
|
||||
finally:
|
||||
out, err = capture.readouterr()
|
||||
|
@ -879,17 +936,23 @@ class Testdir:
|
|||
sys.stdout.write(out)
|
||||
sys.stderr.write(err)
|
||||
|
||||
res = RunResult(reprec.ret, out.split("\n"), err.split("\n"), time.time() - now)
|
||||
res.reprec = reprec
|
||||
res = RunResult(
|
||||
reprec.ret, out.splitlines(), err.splitlines(), time.time() - now
|
||||
)
|
||||
res.reprec = reprec # type: ignore
|
||||
return res
|
||||
|
||||
def runpytest(self, *args, **kwargs):
|
||||
def runpytest(self, *args, **kwargs) -> RunResult:
|
||||
"""Run pytest inline or in a subprocess, depending on the command line
|
||||
option "--runpytest" and return a :py:class:`RunResult`.
|
||||
|
||||
"""
|
||||
args = self._ensure_basetemp(args)
|
||||
return self._runpytest_method(*args, **kwargs)
|
||||
if self._method == "inprocess":
|
||||
return self.runpytest_inprocess(*args, **kwargs)
|
||||
elif self._method == "subprocess":
|
||||
return self.runpytest_subprocess(*args, **kwargs)
|
||||
raise RuntimeError("Unrecognized runpytest option: {}".format(self._method))
|
||||
|
||||
def _ensure_basetemp(self, args):
|
||||
args = list(args)
|
||||
|
@ -928,11 +991,9 @@ class Testdir:
|
|||
|
||||
This returns a new :py:class:`_pytest.config.Config` instance like
|
||||
:py:meth:`parseconfig`, but also calls the pytest_configure hook.
|
||||
|
||||
"""
|
||||
config = self.parseconfig(*args)
|
||||
config._do_configure()
|
||||
self.request.addfinalizer(config._ensure_unconfigure)
|
||||
return config
|
||||
|
||||
def getitem(self, source, funcname="test_func"):
|
||||
|
@ -1048,7 +1109,7 @@ class Testdir:
|
|||
|
||||
return popen
|
||||
|
||||
def run(self, *cmdargs, timeout=None, stdin=CLOSE_STDIN):
|
||||
def run(self, *cmdargs, timeout=None, stdin=CLOSE_STDIN) -> RunResult:
|
||||
"""Run a command with arguments.
|
||||
|
||||
Run a process using subprocess.Popen saving the stdout and stderr.
|
||||
|
@ -1066,9 +1127,9 @@ class Testdir:
|
|||
"""
|
||||
__tracebackhide__ = True
|
||||
|
||||
cmdargs = [
|
||||
cmdargs = tuple(
|
||||
str(arg) if isinstance(arg, py.path.local) else arg for arg in cmdargs
|
||||
]
|
||||
)
|
||||
p1 = self.tmpdir.join("stdout")
|
||||
p2 = self.tmpdir.join("stderr")
|
||||
print("running:", *cmdargs)
|
||||
|
@ -1119,6 +1180,10 @@ class Testdir:
|
|||
f2.close()
|
||||
self._dump_lines(out, sys.stdout)
|
||||
self._dump_lines(err, sys.stderr)
|
||||
try:
|
||||
ret = ExitCode(ret)
|
||||
except ValueError:
|
||||
pass
|
||||
return RunResult(ret, out, err, time.time() - now)
|
||||
|
||||
def _dump_lines(self, lines, fp):
|
||||
|
@ -1131,7 +1196,7 @@ class Testdir:
|
|||
def _getpytestargs(self):
|
||||
return sys.executable, "-mpytest"
|
||||
|
||||
def runpython(self, script):
|
||||
def runpython(self, script) -> RunResult:
|
||||
"""Run a python script using sys.executable as interpreter.
|
||||
|
||||
Returns a :py:class:`RunResult`.
|
||||
|
@ -1143,7 +1208,7 @@ class Testdir:
|
|||
"""Run python -c "command", return a :py:class:`RunResult`."""
|
||||
return self.run(sys.executable, "-c", command)
|
||||
|
||||
def runpytest_subprocess(self, *args, timeout=None):
|
||||
def runpytest_subprocess(self, *args, timeout=None) -> RunResult:
|
||||
"""Run pytest as a subprocess with given arguments.
|
||||
|
||||
Any plugins added to the :py:attr:`plugins` list will be added using the
|
||||
|
@ -1192,8 +1257,6 @@ class Testdir:
|
|||
pexpect = pytest.importorskip("pexpect", "3.0")
|
||||
if hasattr(sys, "pypy_version_info") and "64" in platform.machine():
|
||||
pytest.skip("pypy-64 bit not supported")
|
||||
if sys.platform.startswith("freebsd"):
|
||||
pytest.xfail("pexpect does not work reliably on freebsd")
|
||||
if not hasattr(pexpect, "spawn"):
|
||||
pytest.skip("pexpect.spawn not available")
|
||||
logfile = self.tmpdir.join("spawn.out").open("wb")
|
||||
|
@ -1319,8 +1382,7 @@ class LineMatcher:
|
|||
|
||||
The argument is a list of lines which have to match and can use glob
|
||||
wildcards. If they do not match a pytest.fail() is called. The
|
||||
matches and non-matches are also printed on stdout.
|
||||
|
||||
matches and non-matches are also shown as part of the error message.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
self._match_lines(lines2, fnmatch, "fnmatch")
|
||||
|
@ -1331,8 +1393,7 @@ class LineMatcher:
|
|||
The argument is a list of lines which have to match using ``re.match``.
|
||||
If they do not match a pytest.fail() is called.
|
||||
|
||||
The matches and non-matches are also printed on stdout.
|
||||
|
||||
The matches and non-matches are also shown as part of the error message.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
self._match_lines(lines2, lambda name, pat: re.match(pat, name), "re.match")
|
||||
|
@ -1347,14 +1408,14 @@ class LineMatcher:
|
|||
pattern
|
||||
:param str match_nickname: the nickname for the match function that
|
||||
will be logged to stdout when a match occurs
|
||||
|
||||
"""
|
||||
assert isinstance(lines2, Sequence)
|
||||
assert isinstance(lines2, collections.abc.Sequence)
|
||||
lines2 = self._getlines(lines2)
|
||||
lines1 = self.lines[:]
|
||||
nextline = None
|
||||
extralines = []
|
||||
__tracebackhide__ = True
|
||||
wnick = len(match_nickname) + 1
|
||||
for line in lines2:
|
||||
nomatchprinted = False
|
||||
while lines1:
|
||||
|
@ -1364,14 +1425,58 @@ class LineMatcher:
|
|||
break
|
||||
elif match_func(nextline, line):
|
||||
self._log("%s:" % match_nickname, repr(line))
|
||||
self._log(" with:", repr(nextline))
|
||||
self._log(
|
||||
"{:>{width}}".format("with:", width=wnick), repr(nextline)
|
||||
)
|
||||
break
|
||||
else:
|
||||
if not nomatchprinted:
|
||||
self._log("nomatch:", repr(line))
|
||||
self._log(
|
||||
"{:>{width}}".format("nomatch:", width=wnick), repr(line)
|
||||
)
|
||||
nomatchprinted = True
|
||||
self._log(" and:", repr(nextline))
|
||||
self._log("{:>{width}}".format("and:", width=wnick), repr(nextline))
|
||||
extralines.append(nextline)
|
||||
else:
|
||||
self._log("remains unmatched: {!r}".format(line))
|
||||
pytest.fail(self._log_text)
|
||||
pytest.fail(self._log_text.lstrip())
|
||||
|
||||
def no_fnmatch_line(self, pat):
|
||||
"""Ensure captured lines do not match the given pattern, using ``fnmatch.fnmatch``.
|
||||
|
||||
:param str pat: the pattern to match lines.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
self._no_match_line(pat, fnmatch, "fnmatch")
|
||||
|
||||
def no_re_match_line(self, pat):
|
||||
"""Ensure captured lines do not match the given pattern, using ``re.match``.
|
||||
|
||||
:param str pat: the regular expression to match lines.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
self._no_match_line(pat, lambda name, pat: re.match(pat, name), "re.match")
|
||||
|
||||
def _no_match_line(self, pat, match_func, match_nickname):
|
||||
"""Ensure captured lines does not have a the given pattern, using ``fnmatch.fnmatch``
|
||||
|
||||
:param str pat: the pattern to match lines
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
nomatch_printed = False
|
||||
wnick = len(match_nickname) + 1
|
||||
try:
|
||||
for line in self.lines:
|
||||
if match_func(line, pat):
|
||||
self._log("%s:" % match_nickname, repr(pat))
|
||||
self._log("{:>{width}}".format("with:", width=wnick), repr(line))
|
||||
pytest.fail(self._log_text.lstrip())
|
||||
else:
|
||||
if not nomatch_printed:
|
||||
self._log(
|
||||
"{:>{width}}".format("nomatch:", width=wnick), repr(pat)
|
||||
)
|
||||
nomatch_printed = True
|
||||
self._log("{:>{width}}".format("and:", width=wnick), repr(line))
|
||||
finally:
|
||||
self._log_output = []
|
||||
|
|
|
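The `fnmatch_lines`/`_match_lines` hunks above keep the same matching model: each glob pattern must match some captured line, in order, with non-matching lines skipped. A minimal stdlib-only sketch of that consecutive-match loop (the `match_lines` helper name is ours, not pytest's):

```python
from fnmatch import fnmatch


def match_lines(lines, patterns):
    """Each glob pattern must match some line, in order; extra lines are skipped."""
    it = iter(lines)  # a shared iterator enforces the "in order" requirement
    for pat in patterns:
        # any() consumes lines until one matches; unmatched lines are discarded
        if not any(fnmatch(line, pat) for line in it):
            return False
    return True


print(match_lines(
    ["collected 2 items", "test_a.py ..", "2 passed"],
    ["collected * items", "* passed"],
))  # True
```

pytest's real implementation additionally logs every match and non-match so the `pytest.fail()` message shows how far matching got, which is what the `wnick` alignment change in the diff polishes.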
src/_pytest/python.py

@@ -9,6 +9,8 @@ from collections import Counter
 from collections.abc import Sequence
 from functools import partial
 from textwrap import dedent
+from typing import List
+from typing import Tuple

 import py

@@ -30,6 +32,7 @@ from _pytest.compat import safe_getattr
 from _pytest.compat import safe_isclass
 from _pytest.compat import STRING_TYPES
 from _pytest.config import hookimpl
+from _pytest.deprecated import FUNCARGNAMES
 from _pytest.main import FSHookProxy
 from _pytest.mark import MARK_GEN
 from _pytest.mark.structures import get_unpacked_marks

@@ -118,13 +121,6 @@ def pytest_cmdline_main(config):


 def pytest_generate_tests(metafunc):
-    # those alternative spellings are common - raise a specific error to alert
-    # the user
-    alt_spellings = ["parameterize", "parametrise", "parameterise"]
-    for mark_name in alt_spellings:
-        if metafunc.definition.get_closest_marker(mark_name):
-            msg = "{0} has '{1}' mark, spelling should be 'parametrize'"
-            fail(msg.format(metafunc.function.__name__, mark_name), pytrace=False)
     for marker in metafunc.definition.iter_markers(name="parametrize"):
         metafunc.parametrize(*marker.args, **marker.kwargs)


@@ -235,10 +231,6 @@ def pytest_pycollect_makeitem(collector, name, obj):
         outcome.force_result(res)


-def pytest_make_parametrize_id(config, val, argname=None):
-    return None
-
-
 class PyobjContext:
     module = pyobj_property("Module")
     cls = pyobj_property("Class")

@@ -287,7 +279,7 @@ class PyobjMixin(PyobjContext):
         parts.reverse()
         return ".".join(parts)

-    def reportinfo(self):
+    def reportinfo(self) -> Tuple[str, int, str]:
         # XXX caching?
         obj = self.obj
         compat_co_firstlineno = getattr(obj, "compat_co_firstlineno", None)

@@ -880,7 +872,7 @@ class CallSpec2:
         self.marks.extend(normalize_mark_list(marks))


-class Metafunc(fixtures.FuncargnamesCompatAttr):
+class Metafunc:
     """
     Metafunc objects are passed to the :func:`pytest_generate_tests <_pytest.hookspec.pytest_generate_tests>` hook.
     They help to inspect a test function and to generate tests according to

@@ -888,11 +880,14 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
     test function is defined.
     """

-    def __init__(self, definition, fixtureinfo, config, cls=None, module=None):
-        assert (
-            isinstance(definition, FunctionDefinition)
-            or type(definition).__name__ == "DefinitionMock"
-        )
+    def __init__(
+        self,
+        definition: "FunctionDefinition",
+        fixtureinfo,
+        config,
+        cls=None,
+        module=None,
+    ) -> None:
         self.definition = definition

         #: access to the :class:`_pytest.config.Config` object for the test session

@@ -910,10 +905,15 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
         #: class object where the test function is defined in or ``None``.
         self.cls = cls

-        self._calls = []
-        self._ids = set()
+        self._calls = []  # type: List[CallSpec2]
         self._arg2fixturedefs = fixtureinfo.name2fixturedefs

+    @property
+    def funcargnames(self):
+        """ alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
+        warnings.warn(FUNCARGNAMES, stacklevel=2)
+        return self.fixturenames
+
     def parametrize(self, argnames, argvalues, indirect=False, ids=None, scope=None):
         """ Add new invocations to the underlying test function using the list
         of argvalues for the given argnames.  Parametrization is performed

@@ -1166,7 +1166,8 @@ def _idval(val, argname, idx, idfn, item, config):
         return ascii_escaped(val.pattern)
     elif isinstance(val, enum.Enum):
         return str(val)
-    elif (inspect.isclass(val) or inspect.isfunction(val)) and hasattr(val, "__name__"):
+    elif hasattr(val, "__name__") and isinstance(val.__name__, str):
         # name of a class, function, module, etc.
         return val.__name__
     return str(argname) + str(idx)

@@ -1336,7 +1337,7 @@ def write_docstring(tw, doc, indent="    "):
         tw.write(indent + line + "\n")


-class Function(FunctionMixin, nodes.Item, fixtures.FuncargnamesCompatAttr):
+class Function(FunctionMixin, nodes.Item):
     """ a Function Item is responsible for setting up and executing a
     Python test function.
     """

@@ -1423,6 +1424,12 @@ class Function(FunctionMixin, nodes.Item, fixtures.FuncargnamesCompatAttr):
         "(compatonly) for code expecting pytest-2.2 style request objects"
         return self

+    @property
+    def funcargnames(self):
+        """ alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
+        warnings.warn(FUNCARGNAMES, stacklevel=2)
+        return self.fixturenames
+
     def runtest(self):
         """ execute the underlying test function. """
         self.ihook.pytest_pyfunc_call(pyfuncitem=self)
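The `_idval` change above broadens the "use the object's name as the id" branch from classes and functions to anything with a string `__name__` (modules, partials with `__name__`, etc.). A simplified, hypothetical sketch of those id rules (the `idval` helper name and the reduced rule set are ours):

```python
import enum


def idval(val, argname, idx):
    """Hypothetical, simplified sketch of parametrize-id generation."""
    if isinstance(val, (str, bytes, int, float, bool)) or val is None:
        return str(val)
    if isinstance(val, enum.Enum):
        return str(val)
    if hasattr(val, "__name__") and isinstance(val.__name__, str):
        # name of a class, function, module, etc. -- the broadened check
        # from the diff above; no longer limited to classes and functions
        return val.__name__
    # fall back to the argument name plus the parameter index
    return str(argname) + str(idx)


class Color(enum.Enum):
    RED = 1


def helper():
    pass


print(idval(helper, "fn", 0), idval(Color.RED, "c", 1), idval(object(), "obj", 2))
# helper Color.RED obj2
```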
src/_pytest/python_api.py

@@ -223,26 +223,24 @@ class ApproxScalar(ApproxBase):
     def __repr__(self):
         """
         Return a string communicating both the expected value and the tolerance
-        for the comparison being made, e.g. '1.0 +- 1e-6'.  Use the unicode
-        plus/minus symbol if this is python3 (it's too hard to get right for
-        python2).
+        for the comparison being made, e.g. '1.0 ± 1e-6', '(3+4j) ± 5e-6 ∠ ±180°'.
         """
-        if isinstance(self.expected, complex):
-            return str(self.expected)
-
         # Infinities aren't compared using tolerances, so don't show a
-        # tolerance.
-        if math.isinf(self.expected):
+        # tolerance. Need to call abs to handle complex numbers, e.g. (inf + 1j)
+        if math.isinf(abs(self.expected)):
             return str(self.expected)

         # If a sensible tolerance can't be calculated, self.tolerance will
         # raise a ValueError.  In this case, display '???'.
         try:
             vetted_tolerance = "{:.1e}".format(self.tolerance)
+            if isinstance(self.expected, complex) and not math.isinf(self.tolerance):
+                vetted_tolerance += " ∠ ±180°"
         except ValueError:
             vetted_tolerance = "???"

-        return "{} \u00b1 {}".format(self.expected, vetted_tolerance)
+        return "{} ± {}".format(self.expected, vetted_tolerance)

     def __eq__(self, actual):
         """

@@ -554,7 +552,7 @@ def raises(


 @overload  # noqa: F811
-def raises(
+def raises(  # noqa: F811
     expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
     func: Callable,
     *args: Any,
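The `ApproxScalar.__repr__` hunk now formats complex expectations too, showing the magnitude tolerance plus a full phase range. A stdlib-only sketch of that formatting (the `approx_repr` helper and the `rel`/`abs_tol` defaults are illustrative assumptions, not pytest's API):

```python
import math


def approx_repr(expected, rel=1e-6, abs_tol=1e-12):
    """Sketch of the 'value ± tolerance' formatting, assuming a simple
    max(relative, absolute) tolerance rule."""
    # Infinities aren't compared using tolerances; abs() also covers complex
    if math.isinf(abs(expected)):
        return str(expected)
    try:
        tolerance = max(rel * abs(expected), abs_tol)
        vetted = "{:.1e}".format(tolerance)
        if isinstance(expected, complex) and not math.isinf(tolerance):
            vetted += " ∠ ±180°"  # complex comparison allows any phase
    except ValueError:
        vetted = "???"
    return "{} ± {}".format(expected, vetted)


print(approx_repr(1.0))     # 1.0 ± 1.0e-06
print(approx_repr(3 + 4j))  # (3+4j) ± 5.0e-06 ∠ ±180°
```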
src/_pytest/recwarn.py

@@ -60,18 +60,18 @@ def warns(
     *,
     match: "Optional[Union[str, Pattern]]" = ...
 ) -> "WarningsChecker":
-    ...  # pragma: no cover
+    raise NotImplementedError()


 @overload  # noqa: F811
-def warns(
+def warns(  # noqa: F811
     expected_warning: Union["Type[Warning]", Tuple["Type[Warning]", ...]],
     func: Callable,
     *args: Any,
     match: Optional[Union[str, "Pattern"]] = ...,
     **kwargs: Any
 ) -> Union[Any]:
-    ...  # pragma: no cover
+    raise NotImplementedError()


 def warns(  # noqa: F811

@@ -187,7 +187,7 @@ class WarningsRecorder(warnings.catch_warnings):
         exc_type: Optional["Type[BaseException]"],
         exc_val: Optional[BaseException],
         exc_tb: Optional[TracebackType],
-    ) -> bool:
+    ) -> None:
         if not self._entered:
             __tracebackhide__ = True
             raise RuntimeError("Cannot exit %r without entering first" % self)

@@ -198,8 +198,6 @@ class WarningsRecorder(warnings.catch_warnings):
         # manually here for this context manager to become reusable.
         self._entered = False

-        return False


 class WarningsChecker(WarningsRecorder):
     def __init__(

@@ -232,7 +230,7 @@ class WarningsChecker(WarningsRecorder):
         exc_type: Optional["Type[BaseException]"],
         exc_val: Optional[BaseException],
         exc_tb: Optional[TracebackType],
-    ) -> bool:
+    ) -> None:
         super().__exit__(exc_type, exc_val, exc_tb)

         __tracebackhide__ = True

@@ -263,4 +261,3 @@ class WarningsChecker(WarningsRecorder):
                 [each.message for each in self],
             )
         )
-        return False
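`WarningsRecorder` subclasses `warnings.catch_warnings`; the hunks above only tighten its `__exit__` return type (a context manager returning `None` never suppresses exceptions). The underlying stdlib mechanism it builds on can be shown directly:

```python
import warnings

# warnings.catch_warnings(record=True) collects emitted warnings in a list;
# WarningsRecorder wraps this and adds pop()/clear() style helpers on top.
with warnings.catch_warnings(record=True) as captured:
    warnings.simplefilter("always")  # don't let the once-per-location filter hide repeats
    warnings.warn("deprecated, use new_api()", DeprecationWarning)

assert len(captured) == 1
assert issubclass(captured[0].category, DeprecationWarning)
print(captured[0].message)  # deprecated, use new_api()
```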
src/_pytest/reports.py

@@ -1,6 +1,8 @@
 from io import StringIO
 from pprint import pprint
 from typing import List
 from typing import Optional
+from typing import Tuple
+from typing import Union

 import py

@@ -15,6 +17,7 @@ from _pytest._code.code import ReprFuncArgs
 from _pytest._code.code import ReprLocals
 from _pytest._code.code import ReprTraceback
 from _pytest._code.code import TerminalRepr
+from _pytest.nodes import Node
 from _pytest.outcomes import skip
 from _pytest.pathlib import Path

@@ -33,14 +36,17 @@ def getslaveinfoline(node):

 class BaseReport:
     when = None  # type: Optional[str]
-    location = None
+    location = None  # type: Optional[Tuple[str, Optional[int], str]]
     longrepr = None
     sections = []  # type: List[Tuple[str, str]]
+    nodeid = None  # type: str

     def __init__(self, **kw):
         self.__dict__.update(kw)

-    def toterminal(self, out):
+    def toterminal(self, out) -> None:
         if hasattr(self, "node"):
-            out.line(getslaveinfoline(self.node))
+            out.line(getslaveinfoline(self.node))  # type: ignore

         longrepr = self.longrepr
         if longrepr is None:

@@ -201,7 +207,7 @@ class TestReport(BaseReport):
     def __init__(
         self,
         nodeid,
-        location,
+        location: Tuple[str, Optional[int], str],
         keywords,
         outcome,
         longrepr,

@@ -210,14 +216,14 @@ class TestReport(BaseReport):
         duration=0,
         user_properties=None,
         **extra
-    ):
+    ) -> None:
         #: normalized collection node id
         self.nodeid = nodeid

         #: a (filesystempath, lineno, domaininfo) tuple indicating the
         #: actual location of a test item - it might be different from the
         #: collected one e.g. if a method is inherited from a different module.
-        self.location = location
+        self.location = location  # type: Tuple[str, Optional[int], str]

         #: a name -> value dictionary containing all keywords and
         #: markers associated with a test invocation.

@@ -300,7 +306,9 @@ class TestReport(BaseReport):
 class CollectReport(BaseReport):
     when = "collect"

-    def __init__(self, nodeid, outcome, longrepr, result, sections=(), **extra):
+    def __init__(
+        self, nodeid: str, outcome, longrepr, result: List[Node], sections=(), **extra
+    ) -> None:
         self.nodeid = nodeid
         self.outcome = outcome
         self.longrepr = longrepr

@@ -322,25 +330,25 @@ class CollectErrorRepr(TerminalRepr):
     def __init__(self, msg):
         self.longrepr = msg

-    def toterminal(self, out):
+    def toterminal(self, out) -> None:
         out.line(self.longrepr, red=True)


 def pytest_report_to_serializable(report):
     if isinstance(report, (TestReport, CollectReport)):
         data = report._to_json()
-        data["_report_type"] = report.__class__.__name__
+        data["$report_type"] = report.__class__.__name__
         return data


 def pytest_report_from_serializable(data):
-    if "_report_type" in data:
-        if data["_report_type"] == "TestReport":
+    if "$report_type" in data:
+        if data["$report_type"] == "TestReport":
             return TestReport._from_json(data)
-        elif data["_report_type"] == "CollectReport":
+        elif data["$report_type"] == "CollectReport":
             return CollectReport._from_json(data)
         assert False, "Unknown report_type unserialize data: {}".format(
-            data["_report_type"]
+            data["$report_type"]
         )


@@ -472,7 +480,9 @@ def _report_kwargs_from_json(reportdict):
                     description,
                 )
             )
-        exception_info = ExceptionChainRepr(chain)
+        exception_info = ExceptionChainRepr(
+            chain
+        )  # type: Union[ExceptionChainRepr,ReprExceptionInfo]
     else:
         exception_info = ReprExceptionInfo(reprtraceback, reprcrash)
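The serialization hunks rename the type tag from `_report_type` to `$report_type`: a `$`-prefixed key cannot collide with a real report attribute serialized into the same dict. A sketch of that tag-and-dispatch round trip using plain dicts (the helper names are ours; pytest reconstructs actual report objects instead of returning tuples):

```python
def to_serializable(report_class_name, fields):
    # Tag the payload with its concrete type under "$report_type".
    data = dict(fields)
    data["$report_type"] = report_class_name
    return data


def from_serializable(data):
    # Dispatch on the tag; unknown tags are an error, untagged data is ignored.
    if "$report_type" in data:
        data = dict(data)
        report_type = data.pop("$report_type")
        assert report_type in ("TestReport", "CollectReport"), report_type
        return report_type, data
    return None


payload = to_serializable("TestReport", {"nodeid": "test_x.py::test_ok", "outcome": "passed"})
kind, fields = from_serializable(payload)
print(kind, fields["outcome"])  # TestReport passed
```

This hook pair is what lets pytest-xdist ship reports between worker and master processes as JSON-safe dicts.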
src/_pytest/runner.py

@@ -6,6 +6,7 @@ from time import time
 from typing import Callable
 from typing import Dict
 from typing import List
+from typing import Optional
 from typing import Tuple

 import attr

@@ -207,8 +208,7 @@ class CallInfo:
     """ Result/Exception info a function invocation. """

     _result = attr.ib()
-    # Optional[ExceptionInfo]
-    excinfo = attr.ib()
+    excinfo = attr.ib(type=Optional[ExceptionInfo])
     start = attr.ib()
     stop = attr.ib()
     when = attr.ib()

@@ -220,7 +220,7 @@ class CallInfo:
         return self._result

     @classmethod
-    def from_call(cls, func, when, reraise=None):
+    def from_call(cls, func, when, reraise=None) -> "CallInfo":
         #: context of invocation: one of "setup", "call",
         #: "teardown", "memocollect"
         start = time()

@@ -236,16 +236,9 @@ class CallInfo:
         return cls(start=start, stop=stop, when=when, result=result, excinfo=excinfo)

     def __repr__(self):
-        if self.excinfo is not None:
-            status = "exception"
-            value = self.excinfo.value
-        else:
-            # TODO: investigate unification
-            value = repr(self._result)
-            status = "result"
-        return "<CallInfo when={when!r} {status}: {value}>".format(
-            when=self.when, value=value, status=status
-        )
+        if self.excinfo is None:
+            return "<CallInfo when={!r} result: {!r}>".format(self.when, self._result)
+        return "<CallInfo when={!r} excinfo={!r}>".format(self.when, self.excinfo)


 def pytest_runtest_makereport(item, call):
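`CallInfo.from_call` runs one phase of a test (setup/call/teardown), timing it and capturing either the result or the exception. A minimal self-contained sketch (pytest stores a richer `ExceptionInfo` wrapper where this sketch stores the raw exception):

```python
import time


class CallInfo:
    """Minimal sketch of the result/exception record built by from_call."""

    def __init__(self, result, excinfo, start, stop, when):
        self._result, self.excinfo = result, excinfo
        self.start, self.stop, self.when = start, stop, when

    @classmethod
    def from_call(cls, func, when, reraise=None):
        start = time.time()
        excinfo = result = None
        try:
            result = func()
        except BaseException as exc:
            excinfo = exc  # pytest wraps this in an ExceptionInfo instead
            if reraise and isinstance(exc, reraise):
                raise
        return cls(result, excinfo, start, time.time(), when)

    def __repr__(self):
        # mirrors the simplified __repr__ in the diff above
        if self.excinfo is None:
            return "<CallInfo when={!r} result: {!r}>".format(self.when, self._result)
        return "<CallInfo when={!r} excinfo={!r}>".format(self.when, self.excinfo)


print(CallInfo.from_call(lambda: 42, "call"))
# <CallInfo when='call' result: 42>
```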
src/_pytest/setuponly.py

@@ -20,8 +20,7 @@ def pytest_addoption(parser):
 @pytest.hookimpl(hookwrapper=True)
 def pytest_fixture_setup(fixturedef, request):
     yield
-    config = request.config
-    if config.option.setupshow:
+    if request.config.option.setupshow:
         if hasattr(request, "param"):
             # Save the fixture parameter so ._show_fixture_action() can
             # display it now and during the teardown (in .finish()).
src/_pytest/skipping.py

@@ -161,9 +161,9 @@ def pytest_runtest_makereport(item, call):
         # skipped by mark.skipif; change the location of the failure
         # to point to the item definition, otherwise it will display
         # the location of where the skip exception was raised within pytest
-        filename, line, reason = rep.longrepr
+        _, _, reason = rep.longrepr
         filename, line = item.location[:2]
-        rep.longrepr = filename, line, reason
+        rep.longrepr = filename, line + 1, reason


 # called by terminalreporter progress reporting
src/_pytest/terminal.py

@@ -9,6 +9,14 @@ import platform
 import sys
 import time
 from functools import partial
+from typing import Any
+from typing import Callable
+from typing import Dict
+from typing import List
+from typing import Mapping
+from typing import Optional
+from typing import Set
+from typing import Tuple

 import attr
 import pluggy

@@ -17,7 +25,11 @@ from more_itertools import collapse

 import pytest
 from _pytest import nodes
+from _pytest.config import Config
 from _pytest.main import ExitCode
+from _pytest.main import Session
+from _pytest.reports import CollectReport
+from _pytest.reports import TestReport

 REPORT_COLLECTING_RESOLUTION = 0.5

@@ -66,7 +78,11 @@ def pytest_addoption(parser):
         help="decrease verbosity.",
     ),
     group._addoption(
-        "--verbosity", dest="verbose", type=int, default=0, help="set verbosity"
+        "--verbosity",
+        dest="verbose",
+        type=int,
+        default=0,
+        help="set verbosity. Default is 0.",
     )
     group._addoption(
         "-r",

@@ -137,7 +153,7 @@ def pytest_addoption(parser):
     )


-def pytest_configure(config):
+def pytest_configure(config: Config) -> None:
     reporter = TerminalReporter(config, sys.stdout)
     config.pluginmanager.register(reporter, "terminalreporter")
     if config.option.debug or config.option.traceconfig:

@@ -149,7 +165,7 @@ def pytest_configure(config):
     config.trace.root.setprocessor("pytest:config", mywriter)


-def getreportopt(config):
+def getreportopt(config: Config) -> str:
     reportopts = ""
     reportchars = config.option.reportchars
     if not config.option.disable_warnings and "w" not in reportchars:

@@ -168,7 +184,7 @@ def getreportopt(config):


 @pytest.hookimpl(trylast=True)  # after _pytest.runner
-def pytest_report_teststatus(report):
+def pytest_report_teststatus(report: TestReport) -> Tuple[str, str, str]:
     if report.passed:
         letter = "."
     elif report.skipped:

@@ -177,7 +193,13 @@ def pytest_report_teststatus(report):
         letter = "F"
         if report.when != "call":
             letter = "f"
-    return report.outcome, letter, report.outcome.upper()
+
+    # Report failed CollectReports as "error" (in line with pytest_collectreport).
+    outcome = report.outcome
+    if report.when == "collect" and outcome == "failed":
+        outcome = "error"
+
+    return outcome, letter, outcome.upper()


 @attr.s

@@ -191,8 +213,8 @@ class WarningReport:
         file system location of the source of the warning (see ``get_location``).
     """

-    message = attr.ib()
-    nodeid = attr.ib(default=None)
+    message = attr.ib(type=str)
+    nodeid = attr.ib(type=Optional[str], default=None)
     fslocation = attr.ib(default=None)
     count_towards_summary = True


@@ -216,15 +238,15 @@ class WarningReport:


 class TerminalReporter:
-    def __init__(self, config, file=None):
+    def __init__(self, config: Config, file=None) -> None:
         import _pytest.config

         self.config = config
         self._numcollected = 0
-        self._session = None
+        self._session = None  # type: Optional[Session]
         self._showfspath = None

-        self.stats = {}
+        self.stats = {}  # type: Dict[str, List[Any]]
         self.startdir = config.invocation_dir
         if file is None:
             file = sys.stdout

@@ -232,13 +254,13 @@ class TerminalReporter:
         # self.writer will be deprecated in pytest-3.4
         self.writer = self._tw
         self._screen_width = self._tw.fullwidth
-        self.currentfspath = None
+        self.currentfspath = None  # type: Any
         self.reportchars = getreportopt(config)
         self.hasmarkup = self._tw.hasmarkup
         self.isatty = file.isatty()
-        self._progress_nodeids_reported = set()
+        self._progress_nodeids_reported = set()  # type: Set[str]
         self._show_progress_info = self._determine_show_progress_info()
-        self._collect_report_last_write = None
+        self._collect_report_last_write = None  # type: Optional[float]

     def _determine_show_progress_info(self):
         """Return True if we should display progress information based on the current config"""

@@ -383,7 +405,7 @@ class TerminalReporter:
         fsid = nodeid.split("::")[0]
         self.write_fspath_result(fsid, "")

-    def pytest_runtest_logreport(self, report):
+    def pytest_runtest_logreport(self, report: TestReport) -> None:
         self._tests_ran = True
         rep = report
         res = self.config.hook.pytest_report_teststatus(report=rep, config=self.config)

@@ -423,7 +445,7 @@ class TerminalReporter:
             self._write_progress_information_filling_space()
         else:
             self.ensure_newline()
-            self._tw.write("[%s]" % rep.node.gateway.id)
+            self._tw.write("[%s]" % rep.node.gateway.id)  # type: ignore
             if self._show_progress_info:
                 self._tw.write(
                     self._get_progress_information_message() + " ", cyan=True

@@ -435,6 +457,7 @@ class TerminalReporter:
         self.currentfspath = -2

     def pytest_runtest_logfinish(self, nodeid):
+        assert self._session
if self.verbosity <= 0 and self._show_progress_info:
|
||||
if self._show_progress_info == "count":
|
||||
num_tests = self._session.testscollected
|
||||
|
@ -442,20 +465,23 @@ class TerminalReporter:
|
|||
else:
|
||||
progress_length = len(" [100%]")
|
||||
|
||||
main_color, _ = _get_main_color(self.stats)
|
||||
|
||||
self._progress_nodeids_reported.add(nodeid)
|
||||
is_last_item = (
|
||||
len(self._progress_nodeids_reported) == self._session.testscollected
|
||||
)
|
||||
if is_last_item:
|
||||
self._write_progress_information_filling_space()
|
||||
self._write_progress_information_filling_space(color=main_color)
|
||||
else:
|
||||
w = self._width_of_current_line
|
||||
past_edge = w + progress_length + 1 >= self._screen_width
|
||||
if past_edge:
|
||||
msg = self._get_progress_information_message()
|
||||
self._tw.write(msg + "\n", cyan=True)
|
||||
self._tw.write(msg + "\n", **{main_color: True})
|
||||
|
||||
def _get_progress_information_message(self):
|
||||
def _get_progress_information_message(self) -> str:
|
||||
assert self._session
|
||||
collected = self._session.testscollected
|
||||
if self._show_progress_info == "count":
|
||||
if collected:
|
||||
|
@ -466,15 +492,18 @@ class TerminalReporter:
|
|||
return " [ {} / {} ]".format(collected, collected)
|
||||
else:
|
||||
if collected:
|
||||
progress = len(self._progress_nodeids_reported) * 100 // collected
|
||||
return " [{:3d}%]".format(progress)
|
||||
return " [{:3d}%]".format(
|
||||
len(self._progress_nodeids_reported) * 100 // collected
|
||||
)
|
||||
return " [100%]"
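The percent branch shown above can be exercised on its own. This is a hypothetical reduced helper (not the actual `TerminalReporter` method), reproducing only the arithmetic visible in the hunk:

```python
def progress_message(reported: int, collected: int) -> str:
    # Integer percentage of finished tests, padded to three characters,
    # matching the " [{:3d}%]" format used in the diff.
    if collected:
        return " [{:3d}%]".format(reported * 100 // collected)
    return " [100%]"
```

With 1 of 4 tests reported this yields `" [ 25%]"`; the floor division means the indicator only reaches 100% once every collected test has been reported.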
def _write_progress_information_filling_space(self):
def _write_progress_information_filling_space(self, color=None):
if not color:
color, _ = _get_main_color(self.stats)
msg = self._get_progress_information_message()
w = self._width_of_current_line
fill = self._tw.fullwidth - w - 1
self.write(msg.rjust(fill), cyan=True)
self.write(msg.rjust(fill), **{color: True})

@property
def _width_of_current_line(self):

@ -493,7 +522,7 @@ class TerminalReporter:
elif self.config.option.verbose >= 1:
self.write("collecting ... ", bold=True)

def pytest_collectreport(self, report):
def pytest_collectreport(self, report: CollectReport) -> None:
if report.failed:
self.stats.setdefault("error", []).append(report)
elif report.skipped:

@ -529,7 +558,7 @@ class TerminalReporter:
str(self._numcollected) + " item" + ("" if self._numcollected == 1 else "s")
)
if errors:
line += " / %d errors" % errors
line += " / %d error%s" % (errors, "s" if errors != 1 else "")
if deselected:
line += " / %d deselected" % deselected
if skipped:

@ -544,7 +573,7 @@ class TerminalReporter:
self.write_line(line)

@pytest.hookimpl(trylast=True)
def pytest_sessionstart(self, session):
def pytest_sessionstart(self, session: Session) -> None:
self._session = session
self._sessionstarttime = time.time()
if not self.showheader:

@ -552,9 +581,10 @@ class TerminalReporter:
self.write_sep("=", "test session starts", bold=True)
verinfo = platform.python_version()
msg = "platform {} -- Python {}".format(sys.platform, verinfo)
if hasattr(sys, "pypy_version_info"):
verinfo = ".".join(map(str, sys.pypy_version_info[:3]))
msg += "[pypy-{}-{}]".format(verinfo, sys.pypy_version_info[3])
pypy_version_info = getattr(sys, "pypy_version_info", None)
if pypy_version_info:
verinfo = ".".join(map(str, pypy_version_info[:3]))
msg += "[pypy-{}-{}]".format(verinfo, pypy_version_info[3])
msg += ", pytest-{}, py-{}, pluggy-{}".format(
pytest.__version__, py.__version__, pluggy.__version__
)

@ -604,9 +634,10 @@ class TerminalReporter:
self._write_report_lines_from_hooks(lines)

if self.config.getoption("collectonly"):
if self.stats.get("failed"):
failed = self.stats.get("failed")
if failed:
self._tw.sep("!", "collection failures")
for rep in self.stats.get("failed"):
for rep in failed:
rep.toterminal(self._tw)

def _printcollecteditems(self, items):

@ -615,7 +646,7 @@ class TerminalReporter:
# because later versions are going to get rid of them anyway
if self.config.option.verbose < 0:
if self.config.option.verbose < -1:
counts = {}
counts = {} # type: Dict[str, int]
for item in items:
name = item.nodeid.split("::", 1)[0]
counts[name] = counts.get(name, 0) + 1

@ -645,7 +676,7 @@ class TerminalReporter:
self._tw.line("{}{}".format(indent + " ", line.strip()))

@pytest.hookimpl(hookwrapper=True)
def pytest_sessionfinish(self, exitstatus):
def pytest_sessionfinish(self, session: Session, exitstatus: ExitCode):
outcome = yield
outcome.get_result()
self._tw.line("")

@ -660,9 +691,13 @@ class TerminalReporter:
self.config.hook.pytest_terminal_summary(
terminalreporter=self, exitstatus=exitstatus, config=self.config
)
if session.shouldfail:
self.write_sep("!", session.shouldfail, red=True)
if exitstatus == ExitCode.INTERRUPTED:
self._report_keyboardinterrupt()
del self._keyboardinterrupt_memo
elif session.shouldstop:
self.write_sep("!", session.shouldstop, red=True)
self.summary_stats()

@pytest.hookimpl(hookwrapper=True)

@ -746,7 +781,9 @@ class TerminalReporter:

def summary_warnings(self):
if self.hasopt("w"):
all_warnings = self.stats.get("warnings")
all_warnings = self.stats.get(
"warnings"
) # type: Optional[List[WarningReport]]
if not all_warnings:
return

@ -759,7 +796,9 @@ class TerminalReporter:
if not warning_reports:
return

reports_grouped_by_message = collections.OrderedDict()
reports_grouped_by_message = (
collections.OrderedDict()
) # type: collections.OrderedDict[str, List[WarningReport]]
for wr in warning_reports:
reports_grouped_by_message.setdefault(wr.message, []).append(wr)

@ -860,21 +899,47 @@ class TerminalReporter:
self._tw.line(content)

def summary_stats(self):
if self.verbosity < -1:
return

session_duration = time.time() - self._sessionstarttime
(line, color) = build_summary_stats_line(self.stats)
msg = "{} in {}".format(line, format_session_duration(session_duration))
markup = {color: True, "bold": True}
(parts, main_color) = build_summary_stats_line(self.stats)
line_parts = []

if self.verbosity >= 0:
self.write_sep("=", msg, **markup)
if self.verbosity == -1:
self.write_line(msg, **markup)
display_sep = self.verbosity >= 0
if display_sep:
fullwidth = self._tw.fullwidth
for text, markup in parts:
with_markup = self._tw.markup(text, **markup)
if display_sep:
fullwidth += len(with_markup) - len(text)
line_parts.append(with_markup)
msg = ", ".join(line_parts)

def short_test_summary(self):
main_markup = {main_color: True}
duration = " in {}".format(format_session_duration(session_duration))
duration_with_markup = self._tw.markup(duration, **main_markup)
if display_sep:
fullwidth += len(duration_with_markup) - len(duration)
msg += duration_with_markup

if display_sep:
markup_for_end_sep = self._tw.markup("", **main_markup)
if markup_for_end_sep.endswith("\x1b[0m"):
markup_for_end_sep = markup_for_end_sep[:-4]
fullwidth += len(markup_for_end_sep)
msg += markup_for_end_sep

if display_sep:
self.write_sep("=", msg, fullwidth=fullwidth, **main_markup)
else:
self.write_line(msg, **main_markup)

def short_test_summary(self) -> None:
if not self.reportchars:
return

def show_simple(stat, lines):
def show_simple(stat, lines: List[str]) -> None:
failed = self.stats.get(stat, [])
if not failed:
return

@ -884,7 +949,7 @@ class TerminalReporter:
line = _get_line_with_reprcrash_message(config, rep, termwidth)
lines.append(line)

def show_xfailed(lines):
def show_xfailed(lines: List[str]) -> None:
xfailed = self.stats.get("xfailed", [])
for rep in xfailed:
verbose_word = rep._get_verbose_word(self.config)

@ -894,7 +959,7 @@ class TerminalReporter:
if reason:
lines.append(" " + str(reason))

def show_xpassed(lines):
def show_xpassed(lines: List[str]) -> None:
xpassed = self.stats.get("xpassed", [])
for rep in xpassed:
verbose_word = rep._get_verbose_word(self.config)

@ -902,7 +967,7 @@ class TerminalReporter:
reason = rep.wasxfail
lines.append("{} {} {}".format(verbose_word, pos, reason))

def show_skipped(lines):
def show_skipped(lines: List[str]) -> None:
skipped = self.stats.get("skipped", [])
fskips = _folded_skips(skipped) if skipped else []
if not fskips:

@ -914,7 +979,7 @@ class TerminalReporter:
if lineno is not None:
lines.append(
"%s [%d] %s:%d: %s"
% (verbose_word, num, fspath, lineno + 1, reason)
% (verbose_word, num, fspath, lineno, reason)
)
else:
lines.append("%s [%d] %s: %s" % (verbose_word, num, fspath, reason))

@ -928,9 +993,9 @@ class TerminalReporter:
"S": show_skipped,
"p": partial(show_simple, "passed"),
"E": partial(show_simple, "error"),
}
} # type: Mapping[str, Callable[[List[str]], None]]

lines = []
lines = [] # type: List[str]
for char in self.reportchars:
action = REPORTCHAR_ACTIONS.get(char)
if action: # skipping e.g. "P" (passed with output) here.

@ -1007,7 +1072,29 @@ def _folded_skips(skipped):
return values

def build_summary_stats_line(stats):
_color_for_type = {
"failed": "red",
"error": "red",
"warnings": "yellow",
"passed": "green",
}
_color_for_type_default = "yellow"

def _make_plural(count, noun):
# No need to pluralize words such as `failed` or `passed`.
if noun not in ["error", "warnings"]:
return count, noun

# The `warnings` key is plural. To avoid API breakage, we keep it that way but
# set it to singular here so we can determine plurality in the same way as we do
# for `error`.
noun = noun.replace("warnings", "warning")

return count, noun + "s" if count != 1 else noun

def _get_main_color(stats) -> Tuple[str, List[str]]:
known_types = (
"failed passed skipped deselected xfailed xpassed warnings error".split()
)

@ -1017,6 +1104,23 @@ def build_summary_stats_line(stats):
if found_type: # setup/teardown reports have an empty key, ignore them
known_types.append(found_type)
unknown_type_seen = True

# main color
if "failed" in stats or "error" in stats:
main_color = "red"
elif "warnings" in stats or unknown_type_seen:
main_color = "yellow"
elif "passed" in stats:
main_color = "green"
else:
main_color = "yellow"

return main_color, known_types

def build_summary_stats_line(stats):
main_color, known_types = _get_main_color(stats)

parts = []
for key in known_types:
reports = stats.get(key, None)

@ -1024,27 +1128,18 @@ def build_summary_stats_line(stats):
count = sum(
1 for rep in reports if getattr(rep, "count_towards_summary", True)
)
parts.append("%d %s" % (count, key))
color = _color_for_type.get(key, _color_for_type_default)
markup = {color: True, "bold": color == main_color}
parts.append(("%d %s" % _make_plural(count, key), markup))

if parts:
line = ", ".join(parts)
else:
line = "no tests ran"
if not parts:
parts = [("no tests ran", {_color_for_type_default: True})]

if "failed" in stats or "error" in stats:
color = "red"
elif "warnings" in stats or unknown_type_seen:
color = "yellow"
elif "passed" in stats:
color = "green"
else:
color = "yellow"

return line, color
return parts, main_color

def _plugin_nameversions(plugininfo):
values = []
def _plugin_nameversions(plugininfo) -> List[str]:
values = [] # type: List[str]
for plugin, dist in plugininfo:
# gets us name and version!
name = "{dist.project_name}-{dist.version}".format(dist=dist)

@ -1058,7 +1153,7 @@ def _plugin_nameversions(plugininfo):
return values

def format_session_duration(seconds):
def format_session_duration(seconds: float) -> str:
"""Format the given seconds in a human readable manner to show in the final summary"""
if seconds < 60:
return "{:.2f}s".format(seconds)
@ -2,6 +2,7 @@
import os
import re
import tempfile
from typing import Optional

import attr
import py

@ -12,6 +13,7 @@ from .pathlib import LOCK_TIMEOUT
from .pathlib import make_numbered_dir
from .pathlib import make_numbered_dir_with_cleanup
from .pathlib import Path
from _pytest.fixtures import FixtureRequest
from _pytest.monkeypatch import MonkeyPatch

@ -22,19 +24,20 @@ class TempPathFactory:
The base directory can be configured using the ``--basetemp`` option."""

_given_basetemp = attr.ib(
type=Path,
# using os.path.abspath() to get absolute path instead of resolve() as it
# does not work the same in all platforms (see #4427)
# Path.absolute() exists, but it is not public (see https://bugs.python.org/issue25012)
# Ignore type because of https://github.com/python/mypy/issues/6172.
converter=attr.converters.optional(
lambda p: Path(os.path.abspath(str(p))) # type: ignore
)
),
)
_trace = attr.ib()
_basetemp = attr.ib(default=None)
_basetemp = attr.ib(type=Optional[Path], default=None)

@classmethod
def from_config(cls, config):
def from_config(cls, config) -> "TempPathFactory":
"""
:param config: a pytest configuration
"""

@ -42,7 +45,7 @@ class TempPathFactory:
given_basetemp=config.option.basetemp, trace=config.trace.get("tmpdir")
)

def mktemp(self, basename, numbered=True):
def mktemp(self, basename: str, numbered: bool = True) -> Path:
"""makes a temporary directory managed by the factory"""
if not numbered:
p = self.getbasetemp().joinpath(basename)

@ -52,7 +55,7 @@ class TempPathFactory:
self._trace("mktemp", p)
return p

def getbasetemp(self):
def getbasetemp(self) -> Path:
""" return base temporary directory. """
if self._basetemp is not None:
return self._basetemp

@ -85,9 +88,9 @@ class TempdirFactory:
:class:``py.path.local`` for :class:``TempPathFactory``
"""

_tmppath_factory = attr.ib()
_tmppath_factory = attr.ib(type=TempPathFactory)

def mktemp(self, basename, numbered=True):
def mktemp(self, basename: str, numbered: bool = True):
"""Create a subdirectory of the base temporary directory and return it.
If ``numbered``, ensure the directory is unique by adding a number
prefix greater than any existing one.

@ -99,7 +102,7 @@ class TempdirFactory:
return py.path.local(self._tmppath_factory.getbasetemp().resolve())

def get_user():
def get_user() -> Optional[str]:
"""Return the current user name, or None if getuser() does not work
in the current environment (see #1010).
"""

@ -111,7 +114,7 @@ def get_user():
return None

def pytest_configure(config):
def pytest_configure(config) -> None:
"""Create a TempdirFactory and attach it to the config object.

This is to comply with existing plugins which expect the handler to be

@ -127,20 +130,22 @@ def pytest_configure(config):

@pytest.fixture(scope="session")
def tmpdir_factory(request):
def tmpdir_factory(request: FixtureRequest) -> TempdirFactory:
"""Return a :class:`_pytest.tmpdir.TempdirFactory` instance for the test session.
"""
return request.config._tmpdirhandler
# Set dynamically by pytest_configure() above.
return request.config._tmpdirhandler # type: ignore

@pytest.fixture(scope="session")
def tmp_path_factory(request):
def tmp_path_factory(request: FixtureRequest) -> TempPathFactory:
"""Return a :class:`_pytest.tmpdir.TempPathFactory` instance for the test session.
"""
return request.config._tmp_path_factory
# Set dynamically by pytest_configure() above.
return request.config._tmp_path_factory # type: ignore

def _mk_tmp(request, factory):
def _mk_tmp(request: FixtureRequest, factory: TempPathFactory) -> Path:
name = request.node.name
name = re.sub(r"[\W]", "_", name)
MAXVAL = 30

@ -162,7 +167,7 @@ def tmpdir(tmp_path):

@pytest.fixture
def tmp_path(request, tmp_path_factory):
def tmp_path(request: FixtureRequest, tmp_path_factory: TempPathFactory) -> Path:
"""Return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
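`_mk_tmp` above sanitizes the test name with `re.sub` before using it as a directory name. A standalone sketch; the truncation to `MAXVAL` (30) characters is an assumption based on the visible constant, since that line is outside the hunk:

```python
import re


def sanitize_test_name(name: str, maxval: int = 30) -> str:
    # Replace every non-word character so the test name is a safe directory
    # name, then cap its length (assumed behavior of the MAXVAL constant).
    name = re.sub(r"[\W]", "_", name)
    return name[:maxval]
```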
@ -1,6 +1,14 @@
from typing import Any
from typing import Generic
from typing import TypeVar

import attr

if False: # TYPE_CHECKING
from typing import Type # noqa: F401 (used in type string)

class PytestWarning(UserWarning):
"""
Bases: :class:`UserWarning`.

@ -72,7 +80,7 @@ class PytestExperimentalApiWarning(PytestWarning, FutureWarning):
__module__ = "pytest"

@classmethod
def simple(cls, apiname):
def simple(cls, apiname: str) -> "PytestExperimentalApiWarning":
return cls(
"{apiname} is an experimental api that may change over time".format(
apiname=apiname

@ -103,17 +111,20 @@ class PytestUnknownMarkWarning(PytestWarning):
__module__ = "pytest"

_W = TypeVar("_W", bound=PytestWarning)

@attr.s
class UnformattedWarning:
class UnformattedWarning(Generic[_W]):
"""Used to hold warnings that need to format their message at runtime, as opposed to a direct message.

Using this class avoids to keep all the warning types and messages in this module, avoiding misuse.
"""

category = attr.ib()
template = attr.ib()
category = attr.ib(type="Type[_W]")
template = attr.ib(type=str)

def format(self, **kwargs):
def format(self, **kwargs: Any) -> _W:
"""Returns an instance of the warning category, formatted with given kwargs"""
return self.category(self.template.format(**kwargs))
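Making `UnformattedWarning` generic over `_W` means `format()` can be typed to return exactly the warning class the instance was built with. A self-contained sketch of the same idea, using a stdlib `dataclass` in place of `attr.s` (names mirror the diff; the implementation is illustrative):

```python
from dataclasses import dataclass
from typing import Generic, Type, TypeVar


class PytestWarning(UserWarning):
    """Stand-in for pytest's warning base class."""


_W = TypeVar("_W", bound=PytestWarning)


@dataclass
class UnformattedWarning(Generic[_W]):
    # Parametrizing on the warning class lets format() return that exact type.
    category: Type[_W]
    template: str

    def format(self, **kwargs) -> _W:
        """Return an instance of the warning category, formatted with kwargs."""
        return self.category(self.template.format(**kwargs))
```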
@ -42,7 +42,7 @@ def pytest_addoption(parser):
type="linelist",
help="Each line specifies a pattern for "
"warnings.filterwarnings. "
"Processed after -W and --pythonwarnings.",
"Processed after -W/--pythonwarnings.",
)

@ -66,6 +66,8 @@ def catch_warnings_for_item(config, ihook, when, item):
cmdline_filters = config.getoption("pythonwarnings") or []
inifilters = config.getini("filterwarnings")
with warnings.catch_warnings(record=True) as log:
# mypy can't infer that record=True means log is not None; help it.
assert log is not None

if not sys.warnoptions:
# if user is not explicitly configuring warning filters, show deprecation warnings by default (#2908)

@ -136,7 +138,7 @@ def _issue_warning_captured(warning, hook, stacklevel):
"""
This function should be used instead of calling ``warnings.warn`` directly when we are in the "configure" stage:
at this point the actual options might not have been set, so we manually trigger the pytest_warning_captured
hook so we can display this warnings in the terminal. This is a hack until we can sort out #2891.
hook so we can display these warnings in the terminal. This is a hack until we can sort out #2891.

:param warning: the warning instance.
:param hook: the hook caller

@ -145,6 +147,8 @@ def _issue_warning_captured(warning, hook, stacklevel):
with warnings.catch_warnings(record=True) as records:
warnings.simplefilter("always", type(warning))
warnings.warn(warning, stacklevel=stacklevel)
# Mypy can't infer that record=True means records is not None; help it.
assert records is not None
hook.pytest_warning_captured.call_historic(
kwargs=dict(warning_message=records[0], when="config", item=None)
)
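Both hunks lean on the same idiom: `warnings.catch_warnings(record=True)` is annotated as returning an `Optional` list, so an `assert` narrows the type for mypy even though the list is never `None` at runtime. A small runnable sketch:

```python
import warnings


def capture_one_warning() -> "warnings.WarningMessage":
    with warnings.catch_warnings(record=True) as records:
        warnings.simplefilter("always", UserWarning)
        warnings.warn(UserWarning("demo"))
    # record=True never actually yields None, but the annotation is Optional;
    # the assert narrows the type, as in the diff.
    assert records is not None
    return records[0]
```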
@ -178,8 +178,14 @@ class TestGeneralUsage:
p1 = testdir.makepyfile("")
p2 = testdir.makefile(".pyc", "123")
result = testdir.runpytest(p1, p2)
assert result.ret
result.stderr.fnmatch_lines(["*ERROR: not found:*{}".format(p2.basename)])
assert result.ret == ExitCode.USAGE_ERROR
result.stderr.fnmatch_lines(
[
"ERROR: not found: {}".format(p2),
"(no name {!r} in any of [[][]])".format(str(p2)),
"",
]
)

@pytest.mark.filterwarnings("default")
def test_better_reporting_on_conftest_load_failure(self, testdir, request):

@ -246,7 +252,7 @@ class TestGeneralUsage:
)
result = testdir.runpytest()
assert result.ret == ExitCode.NO_TESTS_COLLECTED
assert "should not be seen" not in result.stdout.str()
result.stdout.no_fnmatch_line("*should not be seen*")
assert "stderr42" not in result.stderr.str()

def test_conftest_printing_shows_if_error(self, testdir):

@ -628,7 +634,7 @@ class TestInvocationVariants:
result = testdir.runpytest("--pyargs", "tpkg.test_hello", syspathinsert=True)
assert result.ret != 0

result.stdout.fnmatch_lines(["collected*0*items*/*1*errors"])
result.stdout.fnmatch_lines(["collected*0*items*/*1*error"])

def test_pyargs_only_imported_once(self, testdir):
pkg = testdir.mkpydir("foo")

@ -858,16 +864,21 @@ class TestInvocationVariants:
4
""",
)
result = testdir.runpytest("-rf")
lines = result.stdout.str().splitlines()
for line in lines:
if line.startswith(("FAIL ", "FAILED ")):
_fail, _sep, testid = line.partition(" ")
break
result = testdir.runpytest(testid, "-rf")
result.stdout.fnmatch_lines(
["FAILED test_doctest_id.txt::test_doctest_id.txt", "*1 failed*"]
)
testid = "test_doctest_id.txt::test_doctest_id.txt"
expected_lines = [
"*= FAILURES =*",
"*_ ?doctest? test_doctest_id.txt _*",
"FAILED test_doctest_id.txt::test_doctest_id.txt",
"*= 1 failed in*",
]
result = testdir.runpytest(testid, "-rf", "--tb=short")
result.stdout.fnmatch_lines(expected_lines)

# Ensure that re-running it will still handle it as
# doctest.DocTestFailure, which was not the case before when
# re-importing doctest, but not creating a new RUNNER_CLASS.
result = testdir.runpytest(testid, "-rf", "--tb=short")
result.stdout.fnmatch_lines(expected_lines)

def test_core_backward_compatibility(self):
"""Test backward compatibility for get_plugin_manager function. See #787."""

@ -950,10 +961,10 @@ class TestDurations:
testdir.makepyfile(test_collecterror="""xyz""")
result = testdir.runpytest("--durations=2", "-k test_1")
assert result.ret == 2
result.stdout.fnmatch_lines(["*Interrupted: 1 errors during collection*"])
result.stdout.fnmatch_lines(["*Interrupted: 1 error during collection*"])
# Collection errors abort test execution, therefore no duration is
# output
assert "duration" not in result.stdout.str()
result.stdout.no_fnmatch_line("*duration*")

def test_with_not(self, testdir):
testdir.makepyfile(self.source)

@ -1007,7 +1018,7 @@ def test_zipimport_hook(testdir, tmpdir):
result = testdir.runpython(target)
assert result.ret == 0
result.stderr.fnmatch_lines(["*not found*foo*"])
assert "INTERNALERROR>" not in result.stdout.str()
result.stdout.no_fnmatch_line("*INTERNALERROR>*")

def test_import_plugin_unicode_name(testdir):

@ -1237,3 +1248,40 @@ def test_warn_on_async_gen_function(testdir):
assert (
result.stdout.str().count("async def functions are not natively supported") == 1
)

def test_pdb_can_be_rewritten(testdir):
testdir.makepyfile(
**{
"conftest.py": """
import pytest
pytest.register_assert_rewrite("pdb")
""",
"__init__.py": "",
"pdb.py": """
def check():
assert 1 == 2
""",
"test_pdb.py": """
def test():
import pdb
assert pdb.check()
""",
}
)
# Disable debugging plugin itself to avoid:
# > INTERNALERROR> AttributeError: module 'pdb' has no attribute 'set_trace'
result = testdir.runpytest_subprocess("-p", "no:debugging", "-vv")
result.stdout.fnmatch_lines(
[
" def check():",
"> assert 1 == 2",
"E assert 1 == 2",
"E -1",
"E +2",
"",
"pdb.py:2: AssertionError",
"*= 1 failed in *",
]
)
assert result.ret == 1
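Several assertions above switch from substring checks to `result.stdout.no_fnmatch_line(...)`, which fails if any line of the captured output matches a glob pattern. A stdlib sketch of the check it performs (hypothetical helper, not pytest's implementation):

```python
from fnmatch import fnmatch


def no_fnmatch_line(output: str, pattern: str) -> bool:
    # True when no line of the captured output matches the glob pattern.
    return not any(fnmatch(line, pattern) for line in output.splitlines())
```

Compared to `"duration" not in output`, the glob form anchors the match to a whole line, which lets pytest report the offending line on failure.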
@ -1,18 +1,19 @@
import sys
from types import FrameType
from unittest import mock

import _pytest._code
import pytest

def test_ne():
def test_ne() -> None:
code1 = _pytest._code.Code(compile('foo = "bar"', "", "exec"))
assert code1 == code1
code2 = _pytest._code.Code(compile('foo = "baz"', "", "exec"))
assert code2 != code1

def test_code_gives_back_name_for_not_existing_file():
def test_code_gives_back_name_for_not_existing_file() -> None:
name = "abc-123"
co_code = compile("pass\n", name, "exec")
assert co_code.co_filename == name

@ -21,68 +22,67 @@ def test_code_gives_back_name_for_not_existing_file():
assert code.fullsource is None

def test_code_with_class():
def test_code_with_class() -> None:
class A:
pass

pytest.raises(TypeError, _pytest._code.Code, A)

def x():
def x() -> None:
raise NotImplementedError()

def test_code_fullsource():
def test_code_fullsource() -> None:
code = _pytest._code.Code(x)
full = code.fullsource
assert "test_code_fullsource()" in str(full)

def test_code_source():
def test_code_source() -> None:
code = _pytest._code.Code(x)
src = code.source()
expected = """def x():
expected = """def x() -> None:
raise NotImplementedError()"""
assert str(src) == expected

def test_frame_getsourcelineno_myself():
def func():
def test_frame_getsourcelineno_myself() -> None:
def func() -> FrameType:
return sys._getframe(0)

f = func()
f = _pytest._code.Frame(f)
f = _pytest._code.Frame(func())
source, lineno = f.code.fullsource, f.lineno
assert source is not None
assert source[lineno].startswith(" return sys._getframe(0)")

def test_getstatement_empty_fullsource():
def func():
def test_getstatement_empty_fullsource() -> None:
def func() -> FrameType:
return sys._getframe(0)

f = func()
f = _pytest._code.Frame(f)
f = _pytest._code.Frame(func())
with mock.patch.object(f.code.__class__, "fullsource", None):
assert f.statement == ""

def test_code_from_func():
def test_code_from_func() -> None:
co = _pytest._code.Code(test_frame_getsourcelineno_myself)
assert co.firstlineno
assert co.path

def test_unicode_handling():
def test_unicode_handling() -> None:
value = "ąć".encode()

def f():
def f() -> None:
raise Exception(value)

excinfo = pytest.raises(Exception, f)
str(excinfo)

def test_code_getargs():
def test_code_getargs() -> None:
def f1(x):
raise NotImplementedError()

@ -108,26 +108,26 @@ def test_code_getargs():
assert c4.getargs(var=True) == ("x", "y", "z")

def test_frame_getargs():
def f1(x):
def test_frame_getargs() -> None:
def f1(x) -> FrameType:
return sys._getframe(0)

fr1 = _pytest._code.Frame(f1("a"))
assert fr1.getargs(var=True) == [("x", "a")]

def f2(x, *y):
def f2(x, *y) -> FrameType:
return sys._getframe(0)

fr2 = _pytest._code.Frame(f2("a", "b", "c"))
assert fr2.getargs(var=True) == [("x", "a"), ("y", ("b", "c"))]

def f3(x, **z):
def f3(x, **z) -> FrameType:
return sys._getframe(0)

fr3 = _pytest._code.Frame(f3("a", b="c"))
assert fr3.getargs(var=True) == [("x", "a"), ("z", {"b": "c"})]

def f4(x, *y, **z):
def f4(x, *y, **z) -> FrameType:
return sys._getframe(0)

fr4 = _pytest._code.Frame(f4("a", "b", c="d"))

@ -135,7 +135,7 @@ def test_frame_getargs():

class TestExceptionInfo:
def test_bad_getsource(self):
def test_bad_getsource(self) -> None:
try:
if False:
pass

@ -145,13 +145,13 @@ class TestExceptionInfo:
exci = _pytest._code.ExceptionInfo.from_current()
assert exci.getrepr()

def test_from_current_with_missing(self):
def test_from_current_with_missing(self) -> None:
with pytest.raises(AssertionError, match="no current exception"):
_pytest._code.ExceptionInfo.from_current()

class TestTracebackEntry:
def test_getsource(self):
def test_getsource(self) -> None:
try:
if False:
pass

@ -161,12 +161,13 @@ class TestTracebackEntry:
exci = _pytest._code.ExceptionInfo.from_current()
entry = exci.traceback[0]
source = entry.getsource()
assert source is not None
assert len(source) == 6
assert "assert False" in source[5]
|
||||
|
||||
|
||||
class TestReprFuncArgs:
|
||||
def test_not_raise_exception_with_mixed_encoding(self, tw_mock):
|
||||
def test_not_raise_exception_with_mixed_encoding(self, tw_mock) -> None:
|
||||
from _pytest._code.code import ReprFuncArgs
|
||||
|
||||
args = [("unicode_string", "São Paulo"), ("utf8_string", b"S\xc3\xa3o Paulo")]
|
||||
|
|
|
@@ -3,6 +3,7 @@ import os
 import queue
 import sys
 import textwrap
+from typing import Union

 import py

@@ -59,9 +60,9 @@ def test_excinfo_getstatement():
     except ValueError:
         excinfo = _pytest._code.ExceptionInfo.from_current()
     linenumbers = [
-        _pytest._code.getrawcode(f).co_firstlineno - 1 + 4,
-        _pytest._code.getrawcode(f).co_firstlineno - 1 + 1,
-        _pytest._code.getrawcode(g).co_firstlineno - 1 + 1,
+        f.__code__.co_firstlineno - 1 + 4,
+        f.__code__.co_firstlineno - 1 + 1,
+        g.__code__.co_firstlineno - 1 + 1,
     ]
     values = list(excinfo.traceback)
     foundlinenumbers = [x.lineno for x in values]
@@ -224,23 +225,25 @@ class TestTraceback_f_g_h:
         repr = excinfo.getrepr()
         assert "RuntimeError: hello" in str(repr.reprcrash)

-    def test_traceback_no_recursion_index(self):
-        def do_stuff():
+    def test_traceback_no_recursion_index(self) -> None:
+        def do_stuff() -> None:
             raise RuntimeError

-        def reraise_me():
+        def reraise_me() -> None:
             import sys

             exc, val, tb = sys.exc_info()
+            assert val is not None
             raise val.with_traceback(tb)

-        def f(n):
+        def f(n: int) -> None:
             try:
                 do_stuff()
             except:  # noqa
                 reraise_me()

         excinfo = pytest.raises(RuntimeError, f, 8)
+        assert excinfo is not None
         traceback = excinfo.traceback
         recindex = traceback.recursionindex()
         assert recindex is None
@@ -316,8 +319,19 @@ def test_excinfo_exconly():

 def test_excinfo_repr_str():
     excinfo = pytest.raises(ValueError, h)
-    assert repr(excinfo) == "<ExceptionInfo ValueError tblen=4>"
-    assert str(excinfo) == "<ExceptionInfo ValueError tblen=4>"
+    assert repr(excinfo) == "<ExceptionInfo ValueError() tblen=4>"
+    assert str(excinfo) == "<ExceptionInfo ValueError() tblen=4>"
+
+    class CustomException(Exception):
+        def __repr__(self):
+            return "custom_repr"
+
+    def raises():
+        raise CustomException()
+
+    excinfo = pytest.raises(CustomException, raises)
+    assert repr(excinfo) == "<ExceptionInfo custom_repr tblen=2>"
+    assert str(excinfo) == "<ExceptionInfo custom_repr tblen=2>"


 def test_excinfo_for_later():
@@ -399,7 +413,7 @@ def test_match_raises_error(testdir):
     result = testdir.runpytest()
     assert result.ret != 0
     result.stdout.fnmatch_lines(["*AssertionError*Pattern*[123]*not found*"])
-    assert "__tracebackhide__ = True" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*__tracebackhide__ = True*")

     result = testdir.runpytest("--fulltrace")
     assert result.ret != 0
@@ -491,65 +505,18 @@ raise ValueError()
         assert repr.reprtraceback.reprentries[1].lines[0] == "> ???"
         assert repr.chain[0][0].reprentries[1].lines[0] == "> ???"

-    def test_repr_source_failing_fullsource(self):
+    def test_repr_source_failing_fullsource(self, monkeypatch) -> None:
         pr = FormattedExcinfo()

-        class FakeCode:
-            class raw:
-                co_filename = "?"
-
-            path = "?"
-            firstlineno = 5
-
-            def fullsource(self):
-                return None
-
-            fullsource = property(fullsource)
-
-        class FakeFrame:
-            code = FakeCode()
-            f_locals = {}
-            f_globals = {}
-
-        class FakeTracebackEntry(_pytest._code.Traceback.Entry):
-            def __init__(self, tb, excinfo=None):
-                self.lineno = 5 + 3
-
-            @property
-            def frame(self):
-                return FakeFrame()
-
-        class Traceback(_pytest._code.Traceback):
-            Entry = FakeTracebackEntry
-
-        class FakeExcinfo(_pytest._code.ExceptionInfo):
-            typename = "Foo"
-            value = Exception()
-
-            def __init__(self):
-                pass
-
-            def exconly(self, tryshort):
-                return "EXC"
-
-            def errisinstance(self, cls):
-                return False
-
-        excinfo = FakeExcinfo()
-
-        class FakeRawTB:
-            tb_next = None
-
-        tb = FakeRawTB()
-        excinfo.traceback = Traceback(tb)
-
-        fail = IOError()
-        repr = pr.repr_excinfo(excinfo)
-        assert repr.reprtraceback.reprentries[0].lines[0] == "> ???"
-        assert repr.chain[0][0].reprentries[0].lines[0] == "> ???"
-
-        fail = py.error.ENOENT  # noqa
-        repr = pr.repr_excinfo(excinfo)
+        try:
+            1 / 0
+        except ZeroDivisionError:
+            excinfo = ExceptionInfo.from_current()
+
+        with monkeypatch.context() as m:
+            m.setattr(_pytest._code.Code, "fullsource", property(lambda self: None))
+            repr = pr.repr_excinfo(excinfo)
         assert repr.reprtraceback.reprentries[0].lines[0] == "> ???"
         assert repr.chain[0][0].reprentries[0].lines[0] == "> ???"


@@ -573,7 +540,7 @@ raise ValueError()
         reprlocals = p.repr_locals(loc)
         assert reprlocals.lines
         assert reprlocals.lines[0] == "__builtins__ = <builtins>"
-        assert '[NotImplementedError("") raised in repr()]' in reprlocals.lines[1]
+        assert "[NotImplementedError() raised in repr()]" in reprlocals.lines[1]

     def test_repr_local_with_exception_in_class_property(self):
         class ExceptionWithBrokenClass(Exception):
@@ -591,7 +558,7 @@ raise ValueError()
         reprlocals = p.repr_locals(loc)
         assert reprlocals.lines
         assert reprlocals.lines[0] == "__builtins__ = <builtins>"
-        assert '[ExceptionWithBrokenClass("") raised in repr()]' in reprlocals.lines[1]
+        assert "[ExceptionWithBrokenClass() raised in repr()]" in reprlocals.lines[1]

     def test_repr_local_truncated(self):
         loc = {"l": [i for i in range(10)]}
@@ -632,7 +599,6 @@ raise ValueError()
         assert lines[3] == "E       world"
         assert not lines[4:]

-        loc = repr_entry.reprlocals is not None
         loc = repr_entry.reprfileloc
         assert loc.path == mod.__file__
         assert loc.lineno == 3
@@ -891,7 +857,7 @@ raise ValueError()
         from _pytest._code.code import TerminalRepr

         class MyRepr(TerminalRepr):
-            def toterminal(self, tw):
+            def toterminal(self, tw) -> None:
                 tw.line("я")

         x = str(MyRepr())
@@ -1218,13 +1184,15 @@ raise ValueError()
     @pytest.mark.parametrize(
         "reason, description",
         [
-            (
+            pytest.param(
                 "cause",
                 "The above exception was the direct cause of the following exception:",
+                id="cause",
             ),
-            (
+            pytest.param(
                 "context",
                 "During handling of the above exception, another exception occurred:",
+                id="context",
             ),
         ],
     )
@@ -1320,9 +1288,10 @@ raise ValueError()
 @pytest.mark.parametrize("style", ["short", "long"])
 @pytest.mark.parametrize("encoding", [None, "utf8", "utf16"])
 def test_repr_traceback_with_unicode(style, encoding):
-    msg = "☹"
-    if encoding is not None:
-        msg = msg.encode(encoding)
+    if encoding is None:
+        msg = "☹"  # type: Union[str, bytes]
+    else:
+        msg = "☹".encode(encoding)
     try:
         raise RuntimeError(msg)
     except RuntimeError:
@@ -1343,7 +1312,8 @@ def test_cwd_deleted(testdir):
     )
     result = testdir.runpytest()
     result.stdout.fnmatch_lines(["* 1 failed in *"])
-    assert "INTERNALERROR" not in result.stdout.str() + result.stderr.str()
+    result.stdout.no_fnmatch_line("*INTERNALERROR*")
+    result.stderr.no_fnmatch_line("*INTERNALERROR*")


 @pytest.mark.usefixtures("limited_recursion_depth")
@@ -4,13 +4,16 @@
 import ast
 import inspect
 import sys
+from typing import Any
+from typing import Dict
+from typing import Optional

 import _pytest._code
 import pytest
 from _pytest._code import Source


-def test_source_str_function():
+def test_source_str_function() -> None:
     x = Source("3")
     assert str(x) == "3"

@@ -25,7 +28,7 @@ def test_source_str_function():
     assert str(x) == "\n3"


-def test_unicode():
+def test_unicode() -> None:
     x = Source("4")
     assert str(x) == "4"
     co = _pytest._code.compile('"å"', mode="eval")
@@ -33,12 +36,12 @@ def test_unicode():
     assert isinstance(val, str)


-def test_source_from_function():
+def test_source_from_function() -> None:
     source = _pytest._code.Source(test_source_str_function)
-    assert str(source).startswith("def test_source_str_function():")
+    assert str(source).startswith("def test_source_str_function() -> None:")


-def test_source_from_method():
+def test_source_from_method() -> None:
     class TestClass:
         def test_method(self):
             pass
@@ -47,13 +50,13 @@ def test_source_from_method():
     assert source.lines == ["def test_method(self):", "    pass"]


-def test_source_from_lines():
+def test_source_from_lines() -> None:
     lines = ["a \n", "b\n", "c"]
     source = _pytest._code.Source(lines)
     assert source.lines == ["a ", "b", "c"]


-def test_source_from_inner_function():
+def test_source_from_inner_function() -> None:
     def f():
         pass

@@ -63,7 +66,7 @@ def test_source_from_inner_function():
     assert str(source).startswith("def f():")


-def test_source_putaround_simple():
+def test_source_putaround_simple() -> None:
     source = Source("raise ValueError")
     source = source.putaround(
         "try:",
@@ -85,7 +88,7 @@ else:
     )


-def test_source_putaround():
+def test_source_putaround() -> None:
     source = Source()
     source = source.putaround(
         """
@@ -96,28 +99,29 @@ def test_source_putaround():
     assert str(source).strip() == "if 1:\n    x=1"


-def test_source_strips():
+def test_source_strips() -> None:
     source = Source("")
     assert source == Source()
     assert str(source) == ""
     assert source.strip() == source


-def test_source_strip_multiline():
+def test_source_strip_multiline() -> None:
     source = Source()
     source.lines = ["", " hello", "  "]
     source2 = source.strip()
     assert source2.lines == [" hello"]


-def test_syntaxerror_rerepresentation():
+def test_syntaxerror_rerepresentation() -> None:
     ex = pytest.raises(SyntaxError, _pytest._code.compile, "xyz xyz")
+    assert ex is not None
     assert ex.value.lineno == 1
     assert ex.value.offset in {5, 7}  # cpython: 7, pypy3.6 7.1.1: 5
-    assert ex.value.text.strip(), "x x"
+    assert ex.value.text == "xyz xyz\n"


-def test_isparseable():
+def test_isparseable() -> None:
     assert Source("hello").isparseable()
     assert Source("if 1:\n  pass").isparseable()
     assert Source(" \nif 1:\n  pass").isparseable()
@@ -127,7 +131,8 @@ def test_isparseable():


 class TestAccesses:
-    source = Source(
+    def setup_class(self) -> None:
+        self.source = Source(
         """\
 def f(x):
     pass
@@ -136,26 +141,27 @@ class TestAccesses:
 """
     )

-    def test_getrange(self):
+    def test_getrange(self) -> None:
         x = self.source[0:2]
         assert x.isparseable()
         assert len(x.lines) == 2
         assert str(x) == "def f(x):\n    pass"

-    def test_getline(self):
+    def test_getline(self) -> None:
         x = self.source[0]
         assert x == "def f(x):"

-    def test_len(self):
+    def test_len(self) -> None:
         assert len(self.source) == 4

-    def test_iter(self):
+    def test_iter(self) -> None:
         values = [x for x in self.source]
         assert len(values) == 4


 class TestSourceParsingAndCompiling:
-    source = Source(
+    def setup_class(self) -> None:
+        self.source = Source(
         """\
 def f(x):
     assert (x ==
@@ -164,19 +170,19 @@ class TestSourceParsingAndCompiling:
 """
     ).strip()

-    def test_compile(self):
+    def test_compile(self) -> None:
         co = _pytest._code.compile("x=3")
-        d = {}
+        d = {}  # type: Dict[str, Any]
         exec(co, d)
         assert d["x"] == 3

-    def test_compile_and_getsource_simple(self):
+    def test_compile_and_getsource_simple(self) -> None:
         co = _pytest._code.compile("x=3")
         exec(co)
         source = _pytest._code.Source(co)
         assert str(source) == "x=3"

-    def test_compile_and_getsource_through_same_function(self):
+    def test_compile_and_getsource_through_same_function(self) -> None:
         def gensource(source):
             return _pytest._code.compile(source)

@@ -197,7 +203,7 @@ class TestSourceParsingAndCompiling:
         source2 = inspect.getsource(co2)
         assert "ValueError" in source2

-    def test_getstatement(self):
+    def test_getstatement(self) -> None:
         # print str(self.source)
         ass = str(self.source[1:])
         for i in range(1, 4):
@@ -206,7 +212,7 @@ class TestSourceParsingAndCompiling:
             # x = s.deindent()
             assert str(s) == ass

-    def test_getstatementrange_triple_quoted(self):
+    def test_getstatementrange_triple_quoted(self) -> None:
         # print str(self.source)
         source = Source(
             """hello('''
@@ -217,7 +223,7 @@ class TestSourceParsingAndCompiling:
         s = source.getstatement(1)
         assert s == str(source)

-    def test_getstatementrange_within_constructs(self):
+    def test_getstatementrange_within_constructs(self) -> None:
         source = Source(
             """\
 try:
@@ -239,7 +245,7 @@ class TestSourceParsingAndCompiling:
         # assert source.getstatementrange(5) == (0, 7)
         assert source.getstatementrange(6) == (6, 7)

-    def test_getstatementrange_bug(self):
+    def test_getstatementrange_bug(self) -> None:
         source = Source(
             """\
 try:
@@ -253,7 +259,7 @@ class TestSourceParsingAndCompiling:
         assert len(source) == 6
         assert source.getstatementrange(2) == (1, 4)

-    def test_getstatementrange_bug2(self):
+    def test_getstatementrange_bug2(self) -> None:
         source = Source(
             """\
 assert (
@@ -270,7 +276,7 @@ class TestSourceParsingAndCompiling:
         assert len(source) == 9
         assert source.getstatementrange(5) == (0, 9)

-    def test_getstatementrange_ast_issue58(self):
+    def test_getstatementrange_ast_issue58(self) -> None:
         source = Source(
             """\

@@ -284,38 +290,44 @@ class TestSourceParsingAndCompiling:
         assert getstatement(2, source).lines == source.lines[2:3]
         assert getstatement(3, source).lines == source.lines[3:4]

-    def test_getstatementrange_out_of_bounds_py3(self):
+    def test_getstatementrange_out_of_bounds_py3(self) -> None:
         source = Source("if xxx:\n   from .collections import something")
         r = source.getstatementrange(1)
         assert r == (1, 2)

-    def test_getstatementrange_with_syntaxerror_issue7(self):
+    def test_getstatementrange_with_syntaxerror_issue7(self) -> None:
         source = Source(":")
         pytest.raises(SyntaxError, lambda: source.getstatementrange(0))

-    def test_compile_to_ast(self):
+    def test_compile_to_ast(self) -> None:
         source = Source("x = 4")
         mod = source.compile(flag=ast.PyCF_ONLY_AST)
         assert isinstance(mod, ast.Module)
         compile(mod, "<filename>", "exec")

-    def test_compile_and_getsource(self):
+    def test_compile_and_getsource(self) -> None:
         co = self.source.compile()
         exec(co, globals())
-        f(7)
-        excinfo = pytest.raises(AssertionError, f, 6)
+        f(7)  # type: ignore
+        excinfo = pytest.raises(AssertionError, f, 6)  # type: ignore
+        assert excinfo is not None
         frame = excinfo.traceback[-1].frame
+        assert isinstance(frame.code.fullsource, Source)
         stmt = frame.code.fullsource.getstatement(frame.lineno)
         assert str(stmt).strip().startswith("assert")

     @pytest.mark.parametrize("name", ["", None, "my"])
-    def test_compilefuncs_and_path_sanity(self, name):
+    def test_compilefuncs_and_path_sanity(self, name: Optional[str]) -> None:
         def check(comp, name):
             co = comp(self.source, name)
             if not name:
-                expected = "codegen %s:%d>" % (mypath, mylineno + 2 + 2)
+                expected = "codegen %s:%d>" % (mypath, mylineno + 2 + 2)  # type: ignore
             else:
-                expected = "codegen %r %s:%d>" % (name, mypath, mylineno + 2 + 2)
+                expected = "codegen %r %s:%d>" % (
+                    name,
+                    mypath,  # type: ignore
+                    mylineno + 2 + 2,  # type: ignore
+                )
             fn = co.co_filename
             assert fn.endswith(expected)

@@ -330,9 +342,9 @@ class TestSourceParsingAndCompiling:
     pytest.raises(SyntaxError, _pytest._code.compile, "lambda a,a: 0", mode="eval")


-def test_getstartingblock_singleline():
+def test_getstartingblock_singleline() -> None:
     class A:
-        def __init__(self, *args):
+        def __init__(self, *args) -> None:
             frame = sys._getframe(1)
             self.source = _pytest._code.Frame(frame).statement

@@ -342,22 +354,22 @@ def test_getstartingblock_singleline():
     assert len(values) == 1


-def test_getline_finally():
-    def c():
+def test_getline_finally() -> None:
+    def c() -> None:
         pass

     with pytest.raises(TypeError) as excinfo:
         teardown = None
         try:
-            c(1)
+            c(1)  # type: ignore
         finally:
             if teardown:
                 teardown()
     source = excinfo.traceback[-1].statement
-    assert str(source).strip() == "c(1)"
+    assert str(source).strip() == "c(1)  # type: ignore"


-def test_getfuncsource_dynamic():
+def test_getfuncsource_dynamic() -> None:
     source = """
 def f():
     raise ValueError
@@ -366,11 +378,13 @@ def test_getfuncsource_dynamic():
 """
     co = _pytest._code.compile(source)
     exec(co, globals())
-    assert str(_pytest._code.Source(f)).strip() == "def f():\n    raise ValueError"
-    assert str(_pytest._code.Source(g)).strip() == "def g(): pass"
+    f_source = _pytest._code.Source(f)  # type: ignore
+    g_source = _pytest._code.Source(g)  # type: ignore
+    assert str(f_source).strip() == "def f():\n    raise ValueError"
+    assert str(g_source).strip() == "def g(): pass"


-def test_getfuncsource_with_multine_string():
+def test_getfuncsource_with_multine_string() -> None:
     def f():
         c = """while True:
     pass
@@ -385,7 +399,7 @@ def test_getfuncsource_with_multine_string():
     assert str(_pytest._code.Source(f)) == expected.rstrip()


-def test_deindent():
+def test_deindent() -> None:
     from _pytest._code.source import deindent as deindent

     assert deindent(["\tfoo", "\tbar"]) == ["foo", "bar"]
@@ -399,7 +413,7 @@ def test_deindent():
     assert lines == ["def f():", "    def g():", "        pass"]


-def test_source_of_class_at_eof_without_newline(tmpdir, _sys_snapshot):
+def test_source_of_class_at_eof_without_newline(tmpdir, _sys_snapshot) -> None:
     # this test fails because the implicit inspect.getsource(A) below
     # does not return the "x = 1" last line.
     source = _pytest._code.Source(
@@ -421,7 +435,7 @@ if True:
     pass


-def test_getsource_fallback():
+def test_getsource_fallback() -> None:
     from _pytest._code.source import getsource

     expected = """def x():
@@ -430,7 +444,7 @@ def test_getsource_fallback():
     assert src == expected


-def test_idem_compile_and_getsource():
+def test_idem_compile_and_getsource() -> None:
     from _pytest._code.source import getsource

     expected = "def x(): pass"
@@ -439,15 +453,16 @@ def test_idem_compile_and_getsource():
     assert src == expected


-def test_findsource_fallback():
+def test_findsource_fallback() -> None:
     from _pytest._code.source import findsource

     src, lineno = findsource(x)
+    assert src is not None
     assert "test_findsource_simple" in str(src)
     assert src[lineno] == "    def x():"


-def test_findsource():
+def test_findsource() -> None:
     from _pytest._code.source import findsource

     co = _pytest._code.compile(
@@ -458,25 +473,27 @@ def test_findsource():
     )

     src, lineno = findsource(co)
+    assert src is not None
     assert "if 1:" in str(src)

-    d = {}
+    d = {}  # type: Dict[str, Any]
     eval(co, d)
     src, lineno = findsource(d["x"])
+    assert src is not None
     assert "if 1:" in str(src)
     assert src[lineno] == "    def x():"


-def test_getfslineno():
+def test_getfslineno() -> None:
     from _pytest._code import getfslineno

-    def f(x):
+    def f(x) -> None:
         pass

     fspath, lineno = getfslineno(f)

     assert fspath.basename == "test_source.py"
-    assert lineno == _pytest._code.getrawcode(f).co_firstlineno - 1  # see findsource
+    assert lineno == f.__code__.co_firstlineno - 1  # see findsource

     class A:
         pass
@@ -496,40 +513,40 @@ def test_getfslineno():
     assert getfslineno(B)[1] == -1


-def test_code_of_object_instance_with_call():
+def test_code_of_object_instance_with_call() -> None:
     class A:
         pass

     pytest.raises(TypeError, lambda: _pytest._code.Source(A()))

     class WithCall:
-        def __call__(self):
+        def __call__(self) -> None:
             pass

     code = _pytest._code.Code(WithCall())
     assert "pass" in str(code.source())

     class Hello:
-        def __call__(self):
+        def __call__(self) -> None:
             pass

     pytest.raises(TypeError, lambda: _pytest._code.Code(Hello))


-def getstatement(lineno, source):
+def getstatement(lineno: int, source) -> Source:
     from _pytest._code.source import getstatementrange_ast

-    source = _pytest._code.Source(source, deindent=False)
-    ast, start, end = getstatementrange_ast(lineno, source)
-    return source[start:end]
+    src = _pytest._code.Source(source, deindent=False)
+    ast, start, end = getstatementrange_ast(lineno, src)
+    return src[start:end]


-def test_oneline():
+def test_oneline() -> None:
     source = getstatement(0, "raise ValueError")
     assert str(source) == "raise ValueError"


-def test_comment_and_no_newline_at_end():
+def test_comment_and_no_newline_at_end() -> None:
     from _pytest._code.source import getstatementrange_ast

     source = Source(
@@ -543,12 +560,12 @@ def test_comment_and_no_newline_at_end():
     assert end == 2


-def test_oneline_and_comment():
+def test_oneline_and_comment() -> None:
     source = getstatement(0, "raise ValueError\n#hello")
     assert str(source) == "raise ValueError"


-def test_comments():
+def test_comments() -> None:
     source = '''def test():
     "comment 1"
     x = 1
@@ -574,7 +591,7 @@ comment 4
         assert str(getstatement(line, source)) == '"""\ncomment 4\n"""'


-def test_comment_in_statement():
+def test_comment_in_statement() -> None:
     source = """test(foo=1,
     # comment 1
     bar=2)
@@ -586,17 +603,17 @@ def test_comment_in_statement():
         )


-def test_single_line_else():
+def test_single_line_else() -> None:
     source = getstatement(1, "if False: 2\nelse: 3")
     assert str(source) == "else: 3"


-def test_single_line_finally():
+def test_single_line_finally() -> None:
     source = getstatement(1, "try: 1\nfinally: 3")
     assert str(source) == "finally: 3"


-def test_issue55():
+def test_issue55() -> None:
     source = (
         "def round_trip(dinp):\n  assert 1 == dinp\n"
         'def test_rt():\n     round_trip("""\n""")\n'
@@ -605,7 +622,7 @@ def test_issue55():
     assert str(s) == '     round_trip("""\n""")'


-def test_multiline():
+def test_multiline() -> None:
     source = getstatement(
         0,
         """\
@@ -619,7 +636,8 @@ x = 3


 class TestTry:
-    source = """\
+    def setup_class(self) -> None:
+        self.source = """\
 try:
     raise ValueError
 except Something:
@@ -628,42 +646,44 @@ else:
     raise KeyError()
 """

-    def test_body(self):
+    def test_body(self) -> None:
         source = getstatement(1, self.source)
         assert str(source) == "    raise ValueError"

-    def test_except_line(self):
+    def test_except_line(self) -> None:
         source = getstatement(2, self.source)
         assert str(source) == "except Something:"

-    def test_except_body(self):
+    def test_except_body(self) -> None:
         source = getstatement(3, self.source)
         assert str(source) == "    raise IndexError(1)"

-    def test_else(self):
+    def test_else(self) -> None:
         source = getstatement(5, self.source)
         assert str(source) == "    raise KeyError()"


 class TestTryFinally:
-    source = """\
+    def setup_class(self) -> None:
+        self.source = """\
 try:
     raise ValueError
 finally:
     raise IndexError(1)
 """

-    def test_body(self):
+    def test_body(self) -> None:
         source = getstatement(1, self.source)
         assert str(source) == "    raise ValueError"

-    def test_finally(self):
+    def test_finally(self) -> None:
         source = getstatement(3, self.source)
         assert str(source) == "    raise IndexError(1)"


 class TestIf:
-    source = """\
+    def setup_class(self) -> None:
+        self.source = """\
 if 1:
     y = 3
 elif False:
@@ -672,24 +692,24 @@ else:
     y = 7
 """

-    def test_body(self):
+    def test_body(self) -> None:
         source = getstatement(1, self.source)
         assert str(source) == "    y = 3"

-    def test_elif_clause(self):
+    def test_elif_clause(self) -> None:
         source = getstatement(2, self.source)
         assert str(source) == "elif False:"

-    def test_elif(self):
+    def test_elif(self) -> None:
         source = getstatement(3, self.source)
         assert str(source) == "    y = 5"

-    def test_else(self):
+    def test_else(self) -> None:
         source = getstatement(5, self.source)
         assert str(source) == "    y = 7"


-def test_semicolon():
+def test_semicolon() -> None:
     s = """\
 hello ; pytest.skip()
 """
@@ -697,7 +717,7 @@ hello ; pytest.skip()
     assert str(source) == s.strip()


-def test_def_online():
+def test_def_online() -> None:
     s = """\
 def func(): raise ValueError(42)

@@ -708,7 +728,7 @@ def something():
     assert str(source) == "def func(): raise ValueError(42)"


-def XXX_test_expression_multiline():
+def XXX_test_expression_multiline() -> None:
     source = """\
 something
 '''
@@ -717,7 +737,7 @@ something
     assert str(result) == "'''\n'''"


-def test_getstartingblock_multiline():
+def test_getstartingblock_multiline() -> None:
     class A:
         def __init__(self, *args):
             frame = sys._getframe(1)
@@ -39,9 +39,12 @@ def pytest_collection_modifyitems(config, items):
             neutral_items.append(item)
         else:
             if "testdir" in fixtures:
-                if spawn_names.intersection(item.function.__code__.co_names):
+                co_names = item.function.__code__.co_names
+                if spawn_names.intersection(co_names):
                     item.add_marker(pytest.mark.uses_pexpect)
                     slowest_items.append(item)
+                elif "runpytest_subprocess" in co_names:
+                    slowest_items.append(item)
                 else:
                     slow_items.append(item)
                 item.add_marker(pytest.mark.slow)
@@ -16,7 +16,7 @@ def test_resultlog_is_deprecated(testdir):
     result = testdir.runpytest("--result-log=%s" % testdir.tmpdir.join("result.log"))
     result.stdout.fnmatch_lines(
         [
-            "*--result-log is deprecated and scheduled for removal in pytest 6.0*",
+            "*--result-log is deprecated, please try the new pytest-reportlog plugin.",
             "*See https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log for more information*",
         ]
     )
@@ -44,3 +44,32 @@ def test_external_plugins_integrated(testdir, plugin):

     with pytest.warns(pytest.PytestConfigWarning):
         testdir.parseconfig("-p", plugin)
+
+
+@pytest.mark.parametrize("junit_family", [None, "legacy", "xunit2"])
+def test_warn_about_imminent_junit_family_default_change(testdir, junit_family):
+    """Show a warning if junit_family is not defined and --junitxml is used (#6179)"""
+    testdir.makepyfile(
+        """
+        def test_foo():
+            pass
+        """
+    )
+    if junit_family:
+        testdir.makeini(
+            """
+            [pytest]
+            junit_family={junit_family}
+        """.format(
+                junit_family=junit_family
+            )
+        )
+
+    result = testdir.runpytest("--junit-xml=foo.xml")
+    warning_msg = (
+        "*PytestDeprecationWarning: The 'junit_family' default value will change*"
+    )
+    if junit_family:
+        result.stdout.no_fnmatch_line(warning_msg)
+    else:
+        result.stdout.fnmatch_lines([warning_msg])
@@ -1,3 +1,4 @@
+import pytest
 from _pytest._io.saferepr import saferepr
 
 
@@ -40,9 +41,81 @@ def test_exceptions():
     assert "TypeError" in s
     assert "TypeError" in saferepr(BrokenRepr("string"))
 
-    s2 = saferepr(BrokenRepr(BrokenReprException("omg even worse")))
-    assert "NameError" not in s2
-    assert "unknown" in s2
+    none = None
+    try:
+        none()
+    except BaseException as exc:
+        exp_exc = repr(exc)
+    obj = BrokenRepr(BrokenReprException("omg even worse"))
+    s2 = saferepr(obj)
+    assert s2 == (
+        "<[unpresentable exception ({!s}) raised in repr()] BrokenRepr object at 0x{:x}>".format(
+            exp_exc, id(obj)
+        )
+    )
+
+
+def test_baseexception():
+    """Test saferepr() with BaseExceptions, which includes pytest outcomes."""
+
+    class RaisingOnStrRepr(BaseException):
+        def __init__(self, exc_types):
+            self.exc_types = exc_types
+
+        def raise_exc(self, *args):
+            try:
+                self.exc_type = self.exc_types.pop(0)
+            except IndexError:
+                pass
+            if hasattr(self.exc_type, "__call__"):
+                raise self.exc_type(*args)
+            raise self.exc_type
+
+        def __str__(self):
+            self.raise_exc("__str__")
+
+        def __repr__(self):
+            self.raise_exc("__repr__")
+
+    class BrokenObj:
+        def __init__(self, exc):
+            self.exc = exc
+
+        def __repr__(self):
+            raise self.exc
+
+        __str__ = __repr__
+
+    baseexc_str = BaseException("__str__")
+    obj = BrokenObj(RaisingOnStrRepr([BaseException]))
+    assert saferepr(obj) == (
+        "<[unpresentable exception ({!r}) "
+        "raised in repr()] BrokenObj object at 0x{:x}>".format(baseexc_str, id(obj))
+    )
+    obj = BrokenObj(RaisingOnStrRepr([RaisingOnStrRepr([BaseException])]))
+    assert saferepr(obj) == (
+        "<[{!r} raised in repr()] BrokenObj object at 0x{:x}>".format(
+            baseexc_str, id(obj)
+        )
+    )
+
+    with pytest.raises(KeyboardInterrupt):
+        saferepr(BrokenObj(KeyboardInterrupt()))
+
+    with pytest.raises(SystemExit):
+        saferepr(BrokenObj(SystemExit()))
+
+    with pytest.raises(KeyboardInterrupt):
+        saferepr(BrokenObj(RaisingOnStrRepr([KeyboardInterrupt])))
+
+    with pytest.raises(SystemExit):
+        saferepr(BrokenObj(RaisingOnStrRepr([SystemExit])))
+
+    with pytest.raises(KeyboardInterrupt):
+        print(saferepr(BrokenObj(RaisingOnStrRepr([BaseException, KeyboardInterrupt]))))
+
+    with pytest.raises(SystemExit):
+        saferepr(BrokenObj(RaisingOnStrRepr([BaseException, SystemExit])))
 
 
 def test_buggy_builtin_repr():
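The `saferepr` behaviour asserted above (embed the repr of whatever a broken `__repr__` raised, but never swallow interpreter-exiting exceptions) can be sketched roughly as follows. `safe_repr` is a hypothetical stand-in written for this note, not pytest's actual implementation:

```python
def safe_repr(obj):
    """Best-effort repr: describes repr() failures instead of raising,
    except for KeyboardInterrupt/SystemExit, which always propagate."""
    try:
        return repr(obj)
    except (KeyboardInterrupt, SystemExit):
        raise
    except BaseException as exc:
        # Describing the failure may itself fail; fall back to a generic label.
        try:
            info = repr(exc)
        except (KeyboardInterrupt, SystemExit):
            raise
        except BaseException:
            info = "unpresentable exception"
        return "<[{} raised in repr()] {} object at 0x{:x}>".format(
            info, type(obj).__name__, id(obj)
        )


class Broken:
    def __repr__(self):
        raise ValueError("boom")


# Mentions the raised exception and the object's type and address.
print(safe_repr(Broken()))
```

The double layer of `except (KeyboardInterrupt, SystemExit): raise` is the point the new `test_baseexception` cases pin down: both the object's `__repr__` and the exception's own repr may raise outcome-like `BaseException`s.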
@@ -46,7 +46,7 @@ def test_change_level_undo(testdir):
     )
     result = testdir.runpytest()
     result.stdout.fnmatch_lines(["*log from test1*", "*2 failed in *"])
-    assert "log from test2" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*log from test2*")
 
 
 def test_with_statement(caplog):
@@ -53,13 +53,77 @@ def test_multiline_message():
     # this is called by logging.Formatter.format
     record.message = record.getMessage()
 
-    style = PercentStyleMultiline(logfmt)
-    output = style.format(record)
+    ai_on_style = PercentStyleMultiline(logfmt, True)
+    output = ai_on_style.format(record)
     assert output == (
         "dummypath 10 INFO Test Message line1\n"
         " line2"
     )
+
+    ai_off_style = PercentStyleMultiline(logfmt, False)
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\nline2"
+    )
+
+    ai_none_style = PercentStyleMultiline(logfmt, None)
+    output = ai_none_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\nline2"
+    )
+
+    record.auto_indent = False
+    output = ai_on_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\nline2"
+    )
+
+    record.auto_indent = True
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\n"
+        " line2"
+    )
+
+    record.auto_indent = "False"
+    output = ai_on_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\nline2"
+    )
+
+    record.auto_indent = "True"
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\n"
+        " line2"
+    )
+
+    # bad string values default to False
+    record.auto_indent = "junk"
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\nline2"
+    )
+
+    # anything other than string or int will default to False
+    record.auto_indent = dict()
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\nline2"
+    )
+
+    record.auto_indent = "5"
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\n line2"
+    )
+
+    record.auto_indent = 5
+    output = ai_off_style.format(record)
+    assert output == (
+        "dummypath 10 INFO Test Message line1\n line2"
+    )
 
 
 def test_colored_short_level():
     logfmt = "%(levelname).1s %(message)s"
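The `PercentStyleMultiline(logfmt, auto_indent)` cases above all follow one rule: continuation lines are indented to the width of the formatted prefix when the setting resolves to true, by N columns for a positive int or numeric string, and not at all for false-y, junk-string, or non-str/int values. A rough sketch of that resolution logic (the helper names below are illustrative, not pytest's internals):

```python
def resolve_indent(auto_indent, prefix_width):
    """Map an auto_indent setting to a number of indent columns."""
    if isinstance(auto_indent, str):
        # "True"/"False" strings act like booleans; numeric strings give a
        # fixed width; anything else ("junk") falls back to no indent.
        if auto_indent.lower() == "true":
            return prefix_width
        try:
            width = int(auto_indent)
        except ValueError:
            return 0
        return max(width, 0)
    if isinstance(auto_indent, bool):  # check bool before int: True is an int
        return prefix_width if auto_indent else 0
    if isinstance(auto_indent, int):
        return max(auto_indent, 0)
    return 0  # dicts and other types default to "off"


def format_multiline(prefix, message, auto_indent):
    indent = " " * resolve_indent(auto_indent, len(prefix))
    return prefix + ("\n" + indent).join(message.split("\n"))


print(format_multiline("INFO ", "line1\nline2", True))
```

The `isinstance(..., bool)` check has to precede the `int` check because `True` and `False` are themselves `int` instances in Python.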
@@ -109,7 +109,7 @@ def test_log_cli_level_log_level_interaction(testdir):
             "=* 1 failed in *=",
         ]
     )
-    assert "DEBUG" not in result.stdout.str()
+    result.stdout.no_re_match_line("DEBUG")
 
 
 def test_setup_logging(testdir):
@@ -282,7 +282,7 @@ def test_log_cli_default_level(testdir):
             "WARNING*test_log_cli_default_level.py* message will be shown*",
         ]
     )
-    assert "INFO message won't be shown" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*INFO message won't be shown*")
     # make sure that that we get a '0' exit code for the testsuite
     assert result.ret == 0
@@ -566,7 +566,7 @@ def test_log_cli_level(testdir):
             "PASSED",  # 'PASSED' on its own line because the log message prints a new line
         ]
     )
-    assert "This log message won't be shown" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*This log message won't be shown*")
 
     # make sure that that we get a '0' exit code for the testsuite
     assert result.ret == 0
@@ -580,7 +580,7 @@ def test_log_cli_level(testdir):
             "PASSED",  # 'PASSED' on its own line because the log message prints a new line
         ]
     )
-    assert "This log message won't be shown" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*This log message won't be shown*")
 
     # make sure that that we get a '0' exit code for the testsuite
     assert result.ret == 0
@@ -616,7 +616,7 @@ def test_log_cli_ini_level(testdir):
             "PASSED",  # 'PASSED' on its own line because the log message prints a new line
         ]
     )
-    assert "This log message won't be shown" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*This log message won't be shown*")
 
     # make sure that that we get a '0' exit code for the testsuite
     assert result.ret == 0
@@ -942,7 +942,7 @@ def test_collection_collect_only_live_logging(testdir, verbose):
             ]
         )
    elif verbose == "-q":
-        assert "collected 1 item*" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*collected 1 item**")
         expected_lines.extend(
             [
                 "*test_collection_collect_only_live_logging.py::test_simple*",
@@ -950,7 +950,7 @@ def test_collection_collect_only_live_logging(testdir, verbose):
             ]
         )
    elif verbose == "-qq":
-        assert "collected 1 item*" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*collected 1 item**")
         expected_lines.extend(["*test_collection_collect_only_live_logging.py: 1*"])
 
     result.stdout.fnmatch_lines(expected_lines)
@@ -983,7 +983,7 @@ def test_collection_logging_to_file(testdir):
 
     result = testdir.runpytest()
 
-    assert "--- live log collection ---" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*--- live log collection ---*")
 
     assert result.ret == 0
     assert os.path.isfile(log_file)
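Most hunks in this commit replace `assert "..." not in result.stdout.str()` with `result.stdout.no_fnmatch_line("*...*")`, which supports fnmatch-style wildcards and reports the offending line on failure instead of dumping the whole output. Its core check can be sketched with the stdlib (`no_fnmatch_line` below is a simplified stand-in, not pytest's implementation):

```python
from fnmatch import fnmatch


def no_fnmatch_line(lines, pat):
    """Fail if any line matches the fnmatch pattern, pinpointing which one."""
    for i, line in enumerate(lines):
        assert not fnmatch(line, pat), "line {}: {!r} matches {!r}".format(
            i, line, pat
        )


out = ["collected 2 items", "test_a.py::test_ok PASSED"]
no_fnmatch_line(out, "*INTERNAL*")  # passes: no line matches
try:
    no_fnmatch_line(out, "*PASSED*")
except AssertionError as exc:
    print(exc)  # names the matching line instead of the full output
```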
@@ -1,4 +1,3 @@
-import doctest
 import operator
 from decimal import Decimal
 from fractions import Fraction
@@ -11,10 +10,26 @@ from pytest import approx
 inf, nan = float("inf"), float("nan")
 
 
-class MyDocTestRunner(doctest.DocTestRunner):
-    def __init__(self):
-        doctest.DocTestRunner.__init__(self)
+@pytest.fixture
+def mocked_doctest_runner(monkeypatch):
+    import doctest
+
+    class MockedPdb:
+        def __init__(self, out):
+            pass
+
+        def set_trace(self):
+            raise NotImplementedError("not used")
+
+        def reset(self):
+            pass
+
+        def set_continue(self):
+            pass
+
+    monkeypatch.setattr("doctest._OutputRedirectingPdb", MockedPdb)
 
-    def report_failure(self, out, test, example, got):
-        raise AssertionError(
-            "'{}' evaluates to '{}', not '{}'".format(
+    class MyDocTestRunner(doctest.DocTestRunner):
+        def report_failure(self, out, test, example, got):
+            raise AssertionError(
+                "'{}' evaluates to '{}', not '{}'".format(
@@ -22,57 +37,54 @@ class MyDocTestRunner(doctest.DocTestRunner):
             )
         )
 
+    return MyDocTestRunner()
+
+
 class TestApprox:
-    @pytest.fixture
-    def plus_minus(self):
-        return "\u00b1"
-
-    def test_repr_string(self, plus_minus):
-        tol1, tol2, infr = "1.0e-06", "2.0e-06", "inf"
-        assert repr(approx(1.0)) == "1.0 {pm} {tol1}".format(pm=plus_minus, tol1=tol1)
-        assert repr(
-            approx([1.0, 2.0])
-        ) == "approx([1.0 {pm} {tol1}, 2.0 {pm} {tol2}])".format(
-            pm=plus_minus, tol1=tol1, tol2=tol2
-        )
-        assert repr(
-            approx((1.0, 2.0))
-        ) == "approx((1.0 {pm} {tol1}, 2.0 {pm} {tol2}))".format(
-            pm=plus_minus, tol1=tol1, tol2=tol2
-        )
+    def test_repr_string(self):
+        assert repr(approx(1.0)) == "1.0 ± 1.0e-06"
+        assert repr(approx([1.0, 2.0])) == "approx([1.0 ± 1.0e-06, 2.0 ± 2.0e-06])"
+        assert repr(approx((1.0, 2.0))) == "approx((1.0 ± 1.0e-06, 2.0 ± 2.0e-06))"
         assert repr(approx(inf)) == "inf"
-        assert repr(approx(1.0, rel=nan)) == "1.0 {pm} ???".format(pm=plus_minus)
-        assert repr(approx(1.0, rel=inf)) == "1.0 {pm} {infr}".format(
-            pm=plus_minus, infr=infr
-        )
-        assert repr(approx(1.0j, rel=inf)) == "1j"
+        assert repr(approx(1.0, rel=nan)) == "1.0 ± ???"
+        assert repr(approx(1.0, rel=inf)) == "1.0 ± inf"
 
         # Dictionaries aren't ordered, so we need to check both orders.
         assert repr(approx({"a": 1.0, "b": 2.0})) in (
-            "approx({{'a': 1.0 {pm} {tol1}, 'b': 2.0 {pm} {tol2}}})".format(
-                pm=plus_minus, tol1=tol1, tol2=tol2
-            ),
-            "approx({{'b': 2.0 {pm} {tol2}, 'a': 1.0 {pm} {tol1}}})".format(
-                pm=plus_minus, tol1=tol1, tol2=tol2
-            ),
+            "approx({'a': 1.0 ± 1.0e-06, 'b': 2.0 ± 2.0e-06})",
+            "approx({'b': 2.0 ± 2.0e-06, 'a': 1.0 ± 1.0e-06})",
         )
 
+    def test_repr_complex_numbers(self):
+        assert repr(approx(inf + 1j)) == "(inf+1j)"
+        assert repr(approx(1.0j, rel=inf)) == "1j ± inf"
+
+        # can't compute a sensible tolerance
+        assert repr(approx(nan + 1j)) == "(nan+1j) ± ???"
+
+        assert repr(approx(1.0j)) == "1j ± 1.0e-06 ∠ ±180°"
+
+        # relative tolerance is scaled to |3+4j| = 5
+        assert repr(approx(3 + 4 * 1j)) == "(3+4j) ± 5.0e-06 ∠ ±180°"
+
+        # absolute tolerance is not scaled
+        assert repr(approx(3.3 + 4.4 * 1j, abs=0.02)) == "(3.3+4.4j) ± 2.0e-02 ∠ ±180°"
+
     @pytest.mark.parametrize(
-        "value, repr_string",
+        "value, expected_repr_string",
         [
-            (5.0, "approx(5.0 {pm} 5.0e-06)"),
-            ([5.0], "approx([5.0 {pm} 5.0e-06])"),
-            ([[5.0]], "approx([[5.0 {pm} 5.0e-06]])"),
-            ([[5.0, 6.0]], "approx([[5.0 {pm} 5.0e-06, 6.0 {pm} 6.0e-06]])"),
-            ([[5.0], [6.0]], "approx([[5.0 {pm} 5.0e-06], [6.0 {pm} 6.0e-06]])"),
+            (5.0, "approx(5.0 ± 5.0e-06)"),
+            ([5.0], "approx([5.0 ± 5.0e-06])"),
+            ([[5.0]], "approx([[5.0 ± 5.0e-06]])"),
+            ([[5.0, 6.0]], "approx([[5.0 ± 5.0e-06, 6.0 ± 6.0e-06]])"),
+            ([[5.0], [6.0]], "approx([[5.0 ± 5.0e-06], [6.0 ± 6.0e-06]])"),
         ],
     )
-    def test_repr_nd_array(self, plus_minus, value, repr_string):
+    def test_repr_nd_array(self, value, expected_repr_string):
         """Make sure that arrays of all different dimensions are repr'd correctly."""
         np = pytest.importorskip("numpy")
         np_array = np.array(value)
-        assert repr(approx(np_array)) == repr_string.format(pm=plus_minus)
+        assert repr(approx(np_array)) == expected_repr_string
 
     def test_operator_overloading(self):
         assert 1 == approx(1, rel=1e-6, abs=1e-12)
@@ -416,13 +428,14 @@ class TestApprox:
         assert a12 != approx(a21)
         assert a21 != approx(a12)
 
-    def test_doctests(self):
+    def test_doctests(self, mocked_doctest_runner):
+        import doctest
+
         parser = doctest.DocTestParser()
         test = parser.get_doctest(
             approx.__doc__, {"approx": approx}, approx.__name__, None, None
         )
-        runner = MyDocTestRunner()
-        runner.run(test)
+        mocked_doctest_runner.run(test)
 
     def test_unicode_plus_minus(self, testdir):
         """
@@ -1139,7 +1139,7 @@ def test_unorderable_types(testdir):
         """
     )
     result = testdir.runpytest()
-    assert "TypeError" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*TypeError*")
     assert result.ret == ExitCode.NO_TESTS_COLLECTED
 
 
@@ -1167,7 +1167,7 @@ def test_dont_collect_non_function_callable(testdir):
         [
             "*collected 1 item*",
            "*test_dont_collect_non_function_callable.py:2: *cannot collect 'test_a' because it is not a function*",
-            "*1 passed, 1 warnings in *",
+            "*1 passed, 1 warning in *",
         ]
     )
@@ -455,7 +455,7 @@ class TestFillFixtures:
                 "*1 error*",
             ]
         )
-        assert "INTERNAL" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*INTERNAL*")
 
     def test_fixture_excinfo_leak(self, testdir):
         # on python2 sys.excinfo would leak into fixture executions
@@ -503,7 +503,7 @@ class TestRequestBasic:
         assert repr(req).find(req.function.__name__) != -1
 
     def test_request_attributes_method(self, testdir):
-        item, = testdir.getitems(
+        (item,) = testdir.getitems(
             """
             import pytest
             class TestB(object):
@@ -531,7 +531,7 @@ class TestRequestBasic:
                 pass
             """
         )
-        item1, = testdir.genitems([modcol])
+        (item1,) = testdir.genitems([modcol])
         assert item1.name == "test_method"
         arg2fixturedefs = fixtures.FixtureRequest(item1)._arg2fixturedefs
         assert len(arg2fixturedefs) == 1
@@ -781,7 +781,7 @@ class TestRequestBasic:
 
     def test_request_getmodulepath(self, testdir):
        modcol = testdir.getmodulecol("def test_somefunc(): pass")
-        item, = testdir.genitems([modcol])
+        (item,) = testdir.genitems([modcol])
        req = fixtures.FixtureRequest(item)
        assert req.fspath == modcol.fspath
 
@@ -2647,7 +2647,7 @@ class TestFixtureMarker:
             *3 passed*
         """
         )
-        assert "error" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*error*")
 
     def test_fixture_finalizer(self, testdir):
         testdir.makeconftest(
@@ -3081,7 +3081,7 @@ class TestErrors:
            *KeyError*
            *ERROR*teardown*test_2*
            *KeyError*
-            *3 pass*2 error*
+            *3 pass*2 errors*
        """
        )
 
@@ -3151,7 +3151,7 @@ class TestShowFixtures:
            *hello world*
        """
        )
-        assert "arg0" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*arg0*")
 
    @pytest.mark.parametrize("testmod", [True, False])
    def test_show_fixtures_conftest(self, testdir, testmod):
@@ -12,7 +12,7 @@ from _pytest import python
 
 
 class TestMetafunc:
-    def Metafunc(self, func, config=None):
+    def Metafunc(self, func, config=None) -> python.Metafunc:
         # the unit tests of this class check if things work correctly
         # on the funcarg level, so we don't need a full blown
         # initialization
@@ -23,7 +23,7 @@ class TestMetafunc:
             self.names_closure = names
 
         @attr.s
-        class DefinitionMock:
+        class DefinitionMock(python.FunctionDefinition):
             obj = attr.ib()
 
         names = fixtures.getfuncargnames(func)
@@ -1323,25 +1323,29 @@ class TestMetafuncFunctional:
         reprec = testdir.runpytest()
         reprec.assert_outcomes(passed=4)
 
-    @pytest.mark.parametrize("attr", ["parametrise", "parameterize", "parameterise"])
-    def test_parametrize_misspelling(self, testdir, attr):
+    def test_parametrize_misspelling(self, testdir):
         """#463"""
         testdir.makepyfile(
             """
             import pytest
 
-            @pytest.mark.{}("x", range(2))
+            @pytest.mark.parametrise("x", range(2))
             def test_foo(x):
                 pass
-            """.format(
-                attr
-            )
+            """
         )
         result = testdir.runpytest("--collectonly")
         result.stdout.fnmatch_lines(
             [
-                "test_foo has '{}' mark, spelling should be 'parametrize'".format(attr),
-                "*1 error in*",
+                "collected 0 items / 1 error",
+                "",
+                "*= ERRORS =*",
+                "*_ ERROR collecting test_parametrize_misspelling.py _*",
+                "test_parametrize_misspelling.py:3: in <module>",
+                ' @pytest.mark.parametrise("x", range(2))',
+                "E Failed: Unknown 'parametrise' mark, did you mean 'parametrize'?",
+                "*! Interrupted: 1 error during collection !*",
+                "*= 1 error in *",
             ]
         )
@@ -1551,27 +1555,6 @@ class TestMarkersWithParametrization:
         assert len(skipped) == 0
         assert len(fail) == 0
 
-    @pytest.mark.xfail(reason="is this important to support??")
-    def test_nested_marks(self, testdir):
-        s = """
-            import pytest
-            mastermark = pytest.mark.foo(pytest.mark.bar)
-
-            @pytest.mark.parametrize(("n", "expected"), [
-                (1, 2),
-                mastermark((1, 3)),
-                (2, 3),
-            ])
-            def test_increment(n, expected):
-                assert n + 1 == expected
-        """
-        items = testdir.getitems(s)
-        assert len(items) == 3
-        for mark in ["foo", "bar"]:
-            assert mark not in items[0].keywords
-            assert mark in items[1].keywords
-            assert mark not in items[2].keywords
-
     def test_simple_xfail(self, testdir):
         s = """
             import pytest
@@ -205,7 +205,7 @@ class TestRaises:
         with pytest.raises(AssertionError) as excinfo:
             with pytest.raises(AssertionError, match="'foo"):
                 raise AssertionError("'bar")
-        msg, = excinfo.value.args
+        (msg,) = excinfo.value.args
         assert msg == 'Pattern "\'foo" not found in "\'bar"'
 
     def test_raises_match_wrong_type(self):
@@ -1,6 +1,6 @@
 def test_no_items_should_not_show_output(testdir):
     result = testdir.runpytest("--fixtures-per-test")
-    assert "fixtures used by" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*fixtures used by*")
     assert result.ret == 0
 
 
@@ -30,7 +30,7 @@ def test_fixtures_in_module(testdir):
             " arg1 docstring",
         ]
     )
-    assert "_arg0" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*_arg0*")
 
 
 def test_fixtures_in_conftest(testdir):
@@ -12,13 +12,11 @@ from _pytest.assertion import util
 from _pytest.compat import ATTRS_EQ_FIELD
 
 
-def mock_config():
+def mock_config(verbose=0):
     class Config:
-        verbose = False
-
         def getoption(self, name):
             if name == "verbose":
-                return self.verbose
+                return verbose
             raise KeyError("Not mocked out: %s" % name)
 
     return Config()
@@ -72,7 +70,14 @@ class TestImportHookInstallation:
             """
         )
         result = testdir.runpytest_subprocess()
-        result.stdout.fnmatch_lines(["*assert 1 == 0*"])
+        result.stdout.fnmatch_lines(
+            [
+                "E * AssertionError: ([[][]], [[][]], [[]<TestReport *>[]])*",
+                "E * assert"
+                " {'failed': 1, 'passed': 0, 'skipped': 0} =="
+                " {'failed': 0, 'passed': 1, 'skipped': 0}",
+            ]
+        )
 
     @pytest.mark.parametrize("mode", ["plain", "rewrite"])
     def test_pytest_plugins_rewrite(self, testdir, mode):
@@ -296,9 +301,8 @@ class TestBinReprIntegration:
     result.stdout.fnmatch_lines(["*test_hello*FAIL*", "*test_check*PASS*"])
 
 
-def callequal(left, right, verbose=False):
-    config = mock_config()
-    config.verbose = verbose
+def callequal(left, right, verbose=0):
+    config = mock_config(verbose=verbose)
     return plugin.pytest_assertrepr_compare(config, "==", left, right)
 
 
@@ -322,7 +326,7 @@ class TestAssert_reprcompare:
         assert "a" * 50 not in line
 
     def test_text_skipping_verbose(self):
-        lines = callequal("a" * 50 + "spam", "a" * 50 + "eggs", verbose=True)
+        lines = callequal("a" * 50 + "spam", "a" * 50 + "eggs", verbose=1)
         assert "- " + "a" * 50 + "spam" in lines
         assert "+ " + "a" * 50 + "eggs" in lines
 
@@ -345,7 +349,7 @@ class TestAssert_reprcompare:
 
     def test_bytes_diff_verbose(self):
         """Check special handling for bytes diff (#5260)"""
-        diff = callequal(b"spam", b"eggs", verbose=True)
+        diff = callequal(b"spam", b"eggs", verbose=1)
         assert diff == [
             "b'spam' == b'eggs'",
             "At index 0 diff: b's' != b'e'",
@@ -361,7 +365,7 @@ class TestAssert_reprcompare:
     @pytest.mark.parametrize(
         ["left", "right", "expected"],
         [
-            (
+            pytest.param(
                 [0, 1],
                 [0, 2],
                 """
@@ -371,8 +375,9 @@ class TestAssert_reprcompare:
                 + [0, 2]
                 ? ^
                 """,
+                id="lists",
             ),
-            (
+            pytest.param(
                 {0: 1},
                 {0: 2},
                 """
@@ -382,8 +387,9 @@ class TestAssert_reprcompare:
                 + {0: 2}
                 ? ^
                 """,
+                id="dicts",
             ),
-            (
+            pytest.param(
                 {0, 1},
                 {0, 2},
                 """
@@ -393,6 +399,7 @@ class TestAssert_reprcompare:
                 + {0, 2}
                 ? ^
                 """,
+                id="sets",
             ),
         ],
     )
@@ -402,9 +409,9 @@ class TestAssert_reprcompare:
         When verbose is False, then just a -v notice to get the diff is rendered,
         when verbose is True, then ndiff of the pprint is returned.
         """
-        expl = callequal(left, right, verbose=False)
+        expl = callequal(left, right, verbose=0)
         assert expl[-1] == "Use -v to get the full diff"
-        expl = "\n".join(callequal(left, right, verbose=True))
+        expl = "\n".join(callequal(left, right, verbose=1))
         assert expl.endswith(textwrap.dedent(expected).strip())
 
     def test_list_different_lengths(self):
@@ -413,6 +420,113 @@ class TestAssert_reprcompare:
         expl = callequal([0, 1, 2], [0, 1])
         assert len(expl) > 1
 
+    def test_list_wrap_for_multiple_lines(self):
+        long_d = "d" * 80
+        l1 = ["a", "b", "c"]
+        l2 = ["a", "b", "c", long_d]
+        diff = callequal(l1, l2, verbose=True)
+        assert diff == [
+            "['a', 'b', 'c'] == ['a', 'b', 'c...dddddddddddd']",
+            "Right contains one more item: '" + long_d + "'",
+            "Full diff:",
+            " [",
+            " 'a',",
+            " 'b',",
+            " 'c',",
+            "+ '" + long_d + "',",
+            " ]",
+        ]
+
+        diff = callequal(l2, l1, verbose=True)
+        assert diff == [
+            "['a', 'b', 'c...dddddddddddd'] == ['a', 'b', 'c']",
+            "Left contains one more item: '" + long_d + "'",
+            "Full diff:",
+            " [",
+            " 'a',",
+            " 'b',",
+            " 'c',",
+            "- '" + long_d + "',",
+            " ]",
+        ]
+
+    def test_list_wrap_for_width_rewrap_same_length(self):
+        long_a = "a" * 30
+        long_b = "b" * 30
+        long_c = "c" * 30
+        l1 = [long_a, long_b, long_c]
+        l2 = [long_b, long_c, long_a]
+        diff = callequal(l1, l2, verbose=True)
+        assert diff == [
+            "['aaaaaaaaaaa...cccccccccccc'] == ['bbbbbbbbbbb...aaaaaaaaaaaa']",
+            "At index 0 diff: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' != 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb'",
+            "Full diff:",
+            " [",
+            "- 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',",
+            " 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb',",
+            " 'cccccccccccccccccccccccccccccc',",
+            "+ 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',",
+            " ]",
+        ]
+
+    def test_list_dont_wrap_strings(self):
+        long_a = "a" * 10
+        l1 = ["a"] + [long_a for _ in range(0, 7)]
+        l2 = ["should not get wrapped"]
+        diff = callequal(l1, l2, verbose=True)
+        assert diff == [
+            "['a', 'aaaaaa...aaaaaaa', ...] == ['should not get wrapped']",
+            "At index 0 diff: 'a' != 'should not get wrapped'",
+            "Left contains 7 more items, first extra item: 'aaaaaaaaaa'",
+            "Full diff:",
+            " [",
+            "+ 'should not get wrapped',",
+            "- 'a',",
+            "- 'aaaaaaaaaa',",
+            "- 'aaaaaaaaaa',",
+            "- 'aaaaaaaaaa',",
+            "- 'aaaaaaaaaa',",
+            "- 'aaaaaaaaaa',",
+            "- 'aaaaaaaaaa',",
+            "- 'aaaaaaaaaa',",
+            " ]",
+        ]
+
+    def test_dict_wrap(self):
+        d1 = {"common": 1, "env": {"env1": 1}}
+        d2 = {"common": 1, "env": {"env1": 1, "env2": 2}}
+
+        diff = callequal(d1, d2, verbose=True)
+        assert diff == [
+            "{'common': 1,...: {'env1': 1}} == {'common': 1,...1, 'env2': 2}}",
+            "Omitting 1 identical items, use -vv to show",
+            "Differing items:",
+            "{'env': {'env1': 1}} != {'env': {'env1': 1, 'env2': 2}}",
+            "Full diff:",
+            "- {'common': 1, 'env': {'env1': 1}}",
+            "+ {'common': 1, 'env': {'env1': 1, 'env2': 2}}",
+            "? +++++++++++",
+        ]
+
+        long_a = "a" * 80
+        sub = {"long_a": long_a, "sub1": {"long_a": "substring that gets wrapped " * 2}}
+        d1 = {"env": {"sub": sub}}
+        d2 = {"env": {"sub": sub}, "new": 1}
+        diff = callequal(d1, d2, verbose=True)
+        assert diff == [
+            "{'env': {'sub... wrapped '}}}} == {'env': {'sub...}}}, 'new': 1}",
+            "Omitting 1 identical items, use -vv to show",
+            "Right contains 1 more item:",
+            "{'new': 1}",
+            "Full diff:",
+            " {",
+            " 'env': {'sub': {'long_a': '" + long_a + "',",
+            " 'sub1': {'long_a': 'substring that gets wrapped substring '",
+            " 'that gets wrapped '}}},",
+            "+ 'new': 1,",
+            " }",
+        ]
+
     def test_dict(self):
         expl = callequal({"a": 0}, {"a": 1})
         assert len(expl) > 1
@@ -1034,7 +1148,7 @@ def test_assertion_options(testdir):
     result = testdir.runpytest()
     assert "3 == 4" in result.stdout.str()
     result = testdir.runpytest_subprocess("--assert=plain")
-    assert "3 == 4" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*3 == 4*")
 
 
 def test_triple_quoted_string_issue113(testdir):
@@ -1046,7 +1160,7 @@ def test_triple_quoted_string_issue113(testdir):
     )
     result = testdir.runpytest("--fulltrace")
     result.stdout.fnmatch_lines(["*1 failed*"])
-    assert "SyntaxError" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*SyntaxError*")
 
 
 def test_traceback_failure(testdir):
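The `verbose=False/True` to `verbose=0/1` migration in `mock_config` and `callequal` reflects that pytest's `-v` option is a count, not a boolean: `-vv` yields verbosity 2, which the assertion-rewrite hunks below start distinguishing from 1. A minimal argparse sketch of that convention:

```python
import argparse

parser = argparse.ArgumentParser()
# action="count" turns each repetition of -v into one more verbosity level
parser.add_argument("-v", "--verbose", action="count", default=0)

print(parser.parse_args([]).verbose)       # 0
print(parser.parse_args(["-v"]).verbose)   # 1
print(parser.parse_args(["-vv"]).verbose)  # 2
```

Treating verbosity as an int keeps comparisons like `verbose > 1` meaningful, which booleans cannot express.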
@ -17,9 +17,12 @@ import pytest
|
|||
from _pytest.assertion import util
|
||||
from _pytest.assertion.rewrite import _get_assertion_exprs
|
||||
from _pytest.assertion.rewrite import AssertionRewritingHook
|
||||
from _pytest.assertion.rewrite import get_cache_dir
|
||||
from _pytest.assertion.rewrite import PYC_TAIL
|
||||
from _pytest.assertion.rewrite import PYTEST_TAG
|
||||
from _pytest.assertion.rewrite import rewrite_asserts
|
||||
from _pytest.main import ExitCode
|
||||
from _pytest.pathlib import Path
|
||||
|
||||
|
||||
def setup_module(mod):
|
||||
|
@ -119,7 +122,7 @@ class TestAssertionRewrite:
|
|||
}
|
||||
testdir.makepyfile(**contents)
|
||||
result = testdir.runpytest_subprocess()
|
||||
assert "warnings" not in "".join(result.outlines)
|
||||
assert "warning" not in "".join(result.outlines)
|
||||
|
||||
def test_rewrites_plugin_as_a_package(self, testdir):
|
||||
pkgdir = testdir.mkpydir("plugin")
|
||||
|
@ -190,11 +193,12 @@ class TestAssertionRewrite:
|
|||
pass
|
||||
|
||||
msg = getmsg(f, {"cls": X}).splitlines()
|
||||
if verbose > 0:
|
||||
|
||||
if verbose > 1:
|
||||
assert msg == ["assert {!r} == 42".format(X), " -{!r}".format(X), " +42"]
|
||||
elif verbose > 0:
|
||||
assert msg == [
|
||||
"assert <class 'test_...e.<locals>.X'> == 42",
|
||||
" -<class 'test_assertrewrite.TestAssertionRewrite.test_name.<locals>.X'>",
|
||||
" -{!r}".format(X),
|
||||
" +42",
|
||||
]
|
||||
else:
|
||||
|
@ -206,8 +210,16 @@ class TestAssertionRewrite:
|
|||
def f():
|
||||
assert "1234567890" * 5 + "A" == "1234567890" * 5 + "B"
|
||||
|
||||
assert getmsg(f).splitlines()[0] == (
|
||||
"assert '123456789012...901234567890A' == '123456789012...901234567890B'"
|
||||
msg = getmsg(f).splitlines()[0]
|
||||
if request.config.getoption("verbose") > 1:
|
||||
assert msg == (
|
||||
"assert '12345678901234567890123456789012345678901234567890A' "
|
||||
"== '12345678901234567890123456789012345678901234567890B'"
|
||||
)
|
||||
else:
|
||||
assert msg == (
|
||||
"assert '123456789012...901234567890A' "
|
||||
"== '123456789012...901234567890B'"
|
||||
)

def test_dont_rewrite_if_hasattr_fails(self, request):

@@ -914,7 +926,7 @@ def test_rewritten():
testdir.chdir()
result = testdir.runpytest_subprocess()
result.stdout.fnmatch_lines(["*= 1 passed in *=*"])
assert "pytest-warning summary" not in result.stdout.str()
result.stdout.no_fnmatch_line("*pytest-warning summary*")

def test_rewrite_warning_using_pytest_plugins_env_var(self, testdir, monkeypatch):
monkeypatch.setenv("PYTEST_PLUGINS", "plugin")

@@ -932,7 +944,7 @@ def test_rewritten():
testdir.chdir()
result = testdir.runpytest_subprocess()
result.stdout.fnmatch_lines(["*= 1 passed in *=*"])
assert "pytest-warning summary" not in result.stdout.str()
result.stdout.no_fnmatch_line("*pytest-warning summary*")
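A recurring change throughout this commit replaces `assert "x" not in result.stdout.str()` with `result.stdout.no_fnmatch_line("*x*")`, which fails with a message naming the offending line instead of a bare substring check. A minimal sketch of what such a helper does (a hypothetical standalone version, not pytest's `LineMatcher` implementation):

```python
import fnmatch

def no_fnmatch_line(lines, pat):
    """Assert that no line matches the fnmatch-style pattern `pat`,
    reporting the matching line in the failure message."""
    for line in lines:
        if fnmatch.fnmatch(line, pat):
            raise AssertionError(
                "fnmatch: {!r} unexpectedly matched line {!r}".format(pat, line)
            )

no_fnmatch_line(["1 passed in 0.01s"], "*pytest-warning summary*")  # no match: passes
```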

class TestAssertionRewriteHookDetails:

@@ -947,14 +959,15 @@ class TestAssertionRewriteHookDetails:
def test_write_pyc(self, testdir, tmpdir, monkeypatch):
from _pytest.assertion.rewrite import _write_pyc
from _pytest.assertion import AssertionState
import atomicwrites
from contextlib import contextmanager

config = testdir.parseconfig([])
state = AssertionState(config, "rewrite")
source_path = tmpdir.ensure("source.py")
source_path = str(tmpdir.ensure("source.py"))
pycpath = tmpdir.join("pyc").strpath
assert _write_pyc(state, [1], os.stat(source_path.strpath), pycpath)
assert _write_pyc(state, [1], os.stat(source_path), pycpath)

if sys.platform == "win32":
from contextlib import contextmanager

@contextmanager
def atomic_write_failed(fn, mode="r", overwrite=False):

@@ -963,8 +976,17 @@ class TestAssertionRewriteHookDetails:
raise e
yield

monkeypatch.setattr(atomicwrites, "atomic_write", atomic_write_failed)
assert not _write_pyc(state, [1], source_path.stat(), pycpath)
monkeypatch.setattr(
_pytest.assertion.rewrite, "atomic_write", atomic_write_failed
)
else:

def raise_ioerror(*args):
raise IOError()

monkeypatch.setattr("os.rename", raise_ioerror)

assert not _write_pyc(state, [1], os.stat(source_path), pycpath)

def test_resources_provider_for_loader(self, testdir):
"""

@@ -1124,7 +1146,7 @@ def test_issue731(testdir):
"""
)
result = testdir.runpytest()
assert "unbalanced braces" not in result.stdout.str()
result.stdout.no_fnmatch_line("*unbalanced braces*")

class TestIssue925:
@@ -1542,41 +1564,97 @@ def test_get_assertion_exprs(src, expected):
assert _get_assertion_exprs(src) == expected

def test_try_mkdir(monkeypatch, tmp_path):
from _pytest.assertion.rewrite import try_mkdir
def test_try_makedirs(monkeypatch, tmp_path):
from _pytest.assertion.rewrite import try_makedirs

p = tmp_path / "foo"

# create
assert try_mkdir(str(p))
assert try_makedirs(str(p))
assert p.is_dir()

# already exist
assert try_mkdir(str(p))
assert try_makedirs(str(p))

# monkeypatch to simulate all error situations
def fake_mkdir(p, *, exc):
def fake_mkdir(p, exist_ok=False, *, exc):
assert isinstance(p, str)
raise exc

monkeypatch.setattr(os, "mkdir", partial(fake_mkdir, exc=FileNotFoundError()))
assert not try_mkdir(str(p))
monkeypatch.setattr(os, "makedirs", partial(fake_mkdir, exc=FileNotFoundError()))
assert not try_makedirs(str(p))

monkeypatch.setattr(os, "mkdir", partial(fake_mkdir, exc=NotADirectoryError()))
assert not try_mkdir(str(p))
monkeypatch.setattr(os, "makedirs", partial(fake_mkdir, exc=NotADirectoryError()))
assert not try_makedirs(str(p))

monkeypatch.setattr(os, "mkdir", partial(fake_mkdir, exc=PermissionError()))
assert not try_mkdir(str(p))
monkeypatch.setattr(os, "makedirs", partial(fake_mkdir, exc=PermissionError()))
assert not try_makedirs(str(p))

err = OSError()
err.errno = errno.EROFS
monkeypatch.setattr(os, "mkdir", partial(fake_mkdir, exc=err))
assert not try_mkdir(str(p))
monkeypatch.setattr(os, "makedirs", partial(fake_mkdir, exc=err))
assert not try_makedirs(str(p))

# unhandled OSError should raise
err = OSError()
err.errno = errno.ECHILD
monkeypatch.setattr(os, "mkdir", partial(fake_mkdir, exc=err))
monkeypatch.setattr(os, "makedirs", partial(fake_mkdir, exc=err))
with pytest.raises(OSError) as exc_info:
try_mkdir(str(p))
try_makedirs(str(p))
assert exc_info.value.errno == errno.ECHILD

class TestPyCacheDir:
@pytest.mark.parametrize(
"prefix, source, expected",
[
("c:/tmp/pycs", "d:/projects/src/foo.py", "c:/tmp/pycs/projects/src"),
(None, "d:/projects/src/foo.py", "d:/projects/src/__pycache__"),
("/tmp/pycs", "/home/projects/src/foo.py", "/tmp/pycs/home/projects/src"),
(None, "/home/projects/src/foo.py", "/home/projects/src/__pycache__"),
],
)
def test_get_cache_dir(self, monkeypatch, prefix, source, expected):
if prefix:
if sys.version_info < (3, 8):
pytest.skip("pycache_prefix not available in py<38")
monkeypatch.setattr(sys, "pycache_prefix", prefix)

assert get_cache_dir(Path(source)) == Path(expected)

@pytest.mark.skipif(
sys.version_info < (3, 8), reason="pycache_prefix not available in py<38"
)
def test_sys_pycache_prefix_integration(self, tmp_path, monkeypatch, testdir):
"""Integration test for sys.pycache_prefix (#4730)."""
pycache_prefix = tmp_path / "my/pycs"
monkeypatch.setattr(sys, "pycache_prefix", str(pycache_prefix))
monkeypatch.setattr(sys, "dont_write_bytecode", False)

testdir.makepyfile(
**{
"src/test_foo.py": """
import bar
def test_foo():
pass
""",
"src/bar/__init__.py": "",
}
)
result = testdir.runpytest()
assert result.ret == 0

test_foo = Path(testdir.tmpdir) / "src/test_foo.py"
bar_init = Path(testdir.tmpdir) / "src/bar/__init__.py"
assert test_foo.is_file()
assert bar_init.is_file()

# test file: rewritten, custom pytest cache tag
test_foo_pyc = get_cache_dir(test_foo) / ("test_foo" + PYC_TAIL)
assert test_foo_pyc.is_file()

# normal file: not touched by pytest, normal cache tag
bar_init_pyc = get_cache_dir(bar_init) / "__init__.{cache_tag}.pyc".format(
cache_tag=sys.implementation.cache_tag
)
assert bar_init_pyc.is_file()
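The parametrized cases above pin down the mapping: with a pycache prefix set, the source file's drive-less parent path is appended to the prefix; otherwise the classic `__pycache__` sibling directory is used. A sketch matching those expected values (my reconstruction; the real `get_cache_dir` consults `sys.pycache_prefix` itself rather than taking it as a parameter):

```python
from pathlib import PurePosixPath

def get_cache_dir(file_path, pycache_prefix=None):
    """Map a source file to the directory its .pyc should live in."""
    if pycache_prefix:
        # mirror the source's parent path under the prefix, minus the root
        rel = file_path.parent.relative_to(file_path.anchor)
        return type(file_path)(pycache_prefix) / rel
    # default: __pycache__ next to the source file
    return file_path.parent / "__pycache__"
```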

@@ -1,7 +1,7 @@
import os
import shutil
import stat
import sys
import textwrap

import py

@@ -45,26 +45,35 @@ class TestNewAPI:
)
def test_cache_writefail_permissions(self, testdir):
testdir.makeini("[pytest]")
cache_dir = str(testdir.tmpdir.ensure_dir(".pytest_cache"))
mode = os.stat(cache_dir)[stat.ST_MODE]
testdir.tmpdir.ensure_dir(".pytest_cache").chmod(0)
try:
config = testdir.parseconfigure()
cache = config.cache
cache.set("test/broken", [])
finally:
testdir.tmpdir.ensure_dir(".pytest_cache").chmod(mode)

@pytest.mark.skipif(sys.platform.startswith("win"), reason="no chmod on windows")
@pytest.mark.filterwarnings("default")
def test_cache_failure_warns(self, testdir):
testdir.tmpdir.ensure_dir(".pytest_cache").chmod(0)
testdir.makepyfile(
"""
def test_error():
raise Exception

"""
@pytest.mark.filterwarnings(
"ignore:could not create cache path:pytest.PytestWarning"
)
def test_cache_failure_warns(self, testdir, monkeypatch):
monkeypatch.setenv("PYTEST_DISABLE_PLUGIN_AUTOLOAD", "1")
cache_dir = str(testdir.tmpdir.ensure_dir(".pytest_cache"))
mode = os.stat(cache_dir)[stat.ST_MODE]
testdir.tmpdir.ensure_dir(".pytest_cache").chmod(0)
try:
testdir.makepyfile("def test_error(): raise Exception")
result = testdir.runpytest("-rw")
assert result.ret == 1
# warnings from nodeids, lastfailed, and stepwise
result.stdout.fnmatch_lines(["*could not create cache path*", "*3 warnings*"])
result.stdout.fnmatch_lines(
["*could not create cache path*", "*3 warnings*"]
)
finally:
testdir.tmpdir.ensure_dir(".pytest_cache").chmod(mode)

def test_config_cache(self, testdir):
testdir.makeconftest(
@@ -163,12 +172,7 @@ def test_cache_reportheader_external_abspath(testdir, tmpdir_factory):
"test_cache_reportheader_external_abspath_abs"
)

testdir.makepyfile(
"""
def test_hello():
pass
"""
)
testdir.makepyfile("def test_hello(): pass")
testdir.makeini(
"""
[pytest]

@@ -177,7 +181,6 @@ def test_cache_reportheader_external_abspath(testdir, tmpdir_factory):
abscache=external_cache
)
)

result = testdir.runpytest("-v")
result.stdout.fnmatch_lines(
["cachedir: {abscache}".format(abscache=external_cache)]
@@ -238,36 +241,26 @@ def test_cache_show(testdir):

class TestLastFailed:
def test_lastfailed_usecase(self, testdir, monkeypatch):
monkeypatch.setenv("PYTHONDONTWRITEBYTECODE", "1")
monkeypatch.setattr("sys.dont_write_bytecode", True)
p = testdir.makepyfile(
"""
def test_1():
assert 0
def test_2():
assert 0
def test_3():
assert 1
def test_1(): assert 0
def test_2(): assert 0
def test_3(): assert 1
"""
)
result = testdir.runpytest()
result = testdir.runpytest(str(p))
result.stdout.fnmatch_lines(["*2 failed*"])
p.write(
textwrap.dedent(
"""\
def test_1():
assert 1

def test_2():
assert 1

def test_3():
assert 0
p = testdir.makepyfile(
"""
def test_1(): assert 1
def test_2(): assert 1
def test_3(): assert 0
"""
)
)
result = testdir.runpytest("--lf")
result = testdir.runpytest(str(p), "--lf")
result.stdout.fnmatch_lines(["*2 passed*1 desel*"])
result = testdir.runpytest("--lf")
result = testdir.runpytest(str(p), "--lf")
result.stdout.fnmatch_lines(
[
"collected 3 items",

@@ -275,7 +268,7 @@ class TestLastFailed:
"*1 failed*2 passed*",
]
)
result = testdir.runpytest("--lf", "--cache-clear")
result = testdir.runpytest(str(p), "--lf", "--cache-clear")
result.stdout.fnmatch_lines(["*1 failed*2 passed*"])

# Run this again to make sure clear-cache is robust
@@ -285,21 +278,9 @@ class TestLastFailed:
result.stdout.fnmatch_lines(["*1 failed*2 passed*"])

def test_failedfirst_order(self, testdir):
testdir.tmpdir.join("test_a.py").write(
textwrap.dedent(
"""\
def test_always_passes():
assert 1
"""
)
)
testdir.tmpdir.join("test_b.py").write(
textwrap.dedent(
"""\
def test_always_fails():
assert 0
"""
)
testdir.makepyfile(
test_a="def test_always_passes(): pass",
test_b="def test_always_fails(): assert 0",
)
result = testdir.runpytest()
# Test order will be collection order; alphabetical

@@ -310,16 +291,8 @@ class TestLastFailed:

def test_lastfailed_failedfirst_order(self, testdir):
testdir.makepyfile(
**{
"test_a.py": """\
def test_always_passes():
assert 1
""",
"test_b.py": """\
def test_always_fails():
assert 0
""",
}
test_a="def test_always_passes(): assert 1",
test_b="def test_always_fails(): assert 0",
)
result = testdir.runpytest()
# Test order will be collection order; alphabetical
@@ -327,21 +300,16 @@ class TestLastFailed:
result = testdir.runpytest("--lf", "--ff")
# Test order will be failing tests first
result.stdout.fnmatch_lines(["test_b.py*"])
assert "test_a.py" not in result.stdout.str()
result.stdout.no_fnmatch_line("*test_a.py*")

def test_lastfailed_difference_invocations(self, testdir, monkeypatch):
monkeypatch.setenv("PYTHONDONTWRITEBYTECODE", "1")
monkeypatch.setattr("sys.dont_write_bytecode", True)
testdir.makepyfile(
test_a="""\
def test_a1():
assert 0
def test_a2():
assert 1
""",
test_b="""\
def test_b1():
assert 0
test_a="""
def test_a1(): assert 0
def test_a2(): assert 1
""",
test_b="def test_b1(): assert 0",
)
p = testdir.tmpdir.join("test_a.py")
p2 = testdir.tmpdir.join("test_b.py")
@@ -350,36 +318,19 @@ class TestLastFailed:
result.stdout.fnmatch_lines(["*2 failed*"])
result = testdir.runpytest("--lf", p2)
result.stdout.fnmatch_lines(["*1 failed*"])
p2.write(
textwrap.dedent(
"""\
def test_b1():
assert 1
"""
)
)

testdir.makepyfile(test_b="def test_b1(): assert 1")
result = testdir.runpytest("--lf", p2)
result.stdout.fnmatch_lines(["*1 passed*"])
result = testdir.runpytest("--lf", p)
result.stdout.fnmatch_lines(["*1 failed*1 desel*"])

def test_lastfailed_usecase_splice(self, testdir, monkeypatch):
monkeypatch.setenv("PYTHONDONTWRITEBYTECODE", "1")
monkeypatch.setattr("sys.dont_write_bytecode", True)
testdir.makepyfile(
"""\
def test_1():
assert 0
"""
"def test_1(): assert 0", test_something="def test_2(): assert 0"
)
p2 = testdir.tmpdir.join("test_something.py")
p2.write(
textwrap.dedent(
"""\
def test_2():
assert 0
"""
)
)
result = testdir.runpytest()
result.stdout.fnmatch_lines(["*2 failed*"])
result = testdir.runpytest("--lf", p2)
@@ -421,18 +372,14 @@ class TestLastFailed:
def test_terminal_report_lastfailed(self, testdir):
test_a = testdir.makepyfile(
test_a="""
def test_a1():
pass
def test_a2():
pass
def test_a1(): pass
def test_a2(): pass
"""
)
test_b = testdir.makepyfile(
test_b="""
def test_b1():
assert 0
def test_b2():
assert 0
def test_b1(): assert 0
def test_b2(): assert 0
"""
)
result = testdir.runpytest()

@@ -477,10 +424,8 @@ class TestLastFailed:
def test_terminal_report_failedfirst(self, testdir):
testdir.makepyfile(
test_a="""
def test_a1():
assert 0
def test_a2():
pass
def test_a1(): assert 0
def test_a2(): pass
"""
)
result = testdir.runpytest()

@@ -527,7 +472,6 @@ class TestLastFailed:
assert list(lastfailed) == ["test_maybe.py::test_hello"]

def test_lastfailed_failure_subset(self, testdir, monkeypatch):

testdir.makepyfile(
test_maybe="""
import os

@@ -545,6 +489,7 @@ class TestLastFailed:
env = os.environ
if '1' == env['FAILIMPORT']:
raise ImportError('fail')

def test_hello():
assert '0' == env['FAILTEST']
@@ -598,8 +543,7 @@ class TestLastFailed:
"""
import pytest
@pytest.mark.xfail
def test():
assert 0
def test(): assert 0
"""
)
result = testdir.runpytest()

@@ -611,8 +555,7 @@ class TestLastFailed:
"""
import pytest
@pytest.mark.xfail(strict=True)
def test():
pass
def test(): pass
"""
)
result = testdir.runpytest()

@@ -626,8 +569,7 @@ class TestLastFailed:
testdir.makepyfile(
"""
import pytest
def test():
assert 0
def test(): assert 0
"""
)
result = testdir.runpytest()

@@ -640,8 +582,7 @@ class TestLastFailed:
"""
import pytest
@pytest.{mark}
def test():
assert 0
def test(): assert 0
""".format(
mark=mark
)
@@ -660,11 +601,11 @@ class TestLastFailed:
if quiet:
args.append("-q")
result = testdir.runpytest(*args)
assert "run all" not in result.stdout.str()
result.stdout.no_fnmatch_line("*run all*")

result = testdir.runpytest(*args)
if quiet:
assert "run all" not in result.stdout.str()
result.stdout.no_fnmatch_line("*run all*")
else:
assert "rerun previous" in result.stdout.str()

@@ -679,18 +620,14 @@ class TestLastFailed:
# 1. initial run
test_bar = testdir.makepyfile(
test_bar="""
def test_bar_1():
pass
def test_bar_2():
assert 0
def test_bar_1(): pass
def test_bar_2(): assert 0
"""
)
test_foo = testdir.makepyfile(
test_foo="""
def test_foo_3():
pass
def test_foo_4():
assert 0
def test_foo_3(): pass
def test_foo_4(): assert 0
"""
)
testdir.runpytest()
@@ -702,10 +639,8 @@ class TestLastFailed:
# 2. fix test_bar_2, run only test_bar.py
testdir.makepyfile(
test_bar="""
def test_bar_1():
pass
def test_bar_2():
pass
def test_bar_1(): pass
def test_bar_2(): pass
"""
)
result = testdir.runpytest(test_bar)

@@ -720,10 +655,8 @@ class TestLastFailed:
# 3. fix test_foo_4, run only test_foo.py
test_foo = testdir.makepyfile(
test_foo="""
def test_foo_3():
pass
def test_foo_4():
pass
def test_foo_3(): pass
def test_foo_4(): pass
"""
)
result = testdir.runpytest(test_foo, "--last-failed")

@@ -737,10 +670,8 @@ class TestLastFailed:
def test_lastfailed_no_failures_behavior_all_passed(self, testdir):
testdir.makepyfile(
"""
def test_1():
assert True
def test_2():
assert True
def test_1(): pass
def test_2(): pass
"""
)
result = testdir.runpytest()

@@ -762,10 +693,8 @@ class TestLastFailed:
def test_lastfailed_no_failures_behavior_empty_cache(self, testdir):
testdir.makepyfile(
"""
def test_1():
assert True
def test_2():
assert False
def test_1(): pass
def test_2(): assert 0
"""
)
result = testdir.runpytest("--lf", "--cache-clear")
@@ -1007,22 +936,12 @@ class TestReadme:
return readme.is_file()

def test_readme_passed(self, testdir):
testdir.makepyfile(
"""
def test_always_passes():
assert 1
"""
)
testdir.makepyfile("def test_always_passes(): pass")
testdir.runpytest()
assert self.check_readme(testdir) is True

def test_readme_failed(self, testdir):
testdir.makepyfile(
"""
def test_always_fails():
assert 0
"""
)
testdir.makepyfile("def test_always_fails(): assert 0")
testdir.runpytest()
assert self.check_readme(testdir) is True
@@ -7,6 +7,8 @@ import sys
import textwrap
from io import StringIO
from io import UnsupportedOperation
from typing import List
from typing import TextIO

import pytest
from _pytest import capture

@@ -90,8 +92,6 @@ class TestCaptureManager:

@pytest.mark.parametrize("method", ["fd", "sys"])
def test_capturing_unicode(testdir, method):
if hasattr(sys, "pypy_version_info") and sys.pypy_version_info < (2, 2):
pytest.xfail("does not work on pypy < 2.2")
obj = "'b\u00f6y'"
testdir.makepyfile(
"""\

@@ -451,7 +451,7 @@ class TestCaptureFixture:
"E*capfd*capsys*same*time*",
"*ERROR*setup*test_two*",
"E*capsys*capfd*same*time*",
"*2 error*",
"*2 errors*",
]
)
@@ -603,17 +603,13 @@ class TestCaptureFixture:
)
args = ("-s",) if no_capture else ()
result = testdir.runpytest_subprocess(*args)
result.stdout.fnmatch_lines(
"""
*while capture is disabled*
"""
)
assert "captured before" not in result.stdout.str()
assert "captured after" not in result.stdout.str()
result.stdout.fnmatch_lines(["*while capture is disabled*", "*= 2 passed in *"])
result.stdout.no_fnmatch_line("*captured before*")
result.stdout.no_fnmatch_line("*captured after*")
if no_capture:
assert "test_normal executed" in result.stdout.str()
else:
assert "test_normal executed" not in result.stdout.str()
result.stdout.no_fnmatch_line("*test_normal executed*")

@pytest.mark.parametrize("fixture", ["capsys", "capfd"])
def test_fixture_use_by_other_fixtures(self, testdir, fixture):

@@ -649,8 +645,8 @@ class TestCaptureFixture:
)
result = testdir.runpytest_subprocess()
result.stdout.fnmatch_lines(["*1 passed*"])
assert "stdout contents begin" not in result.stdout.str()
assert "stderr contents begin" not in result.stdout.str()
result.stdout.no_fnmatch_line("*stdout contents begin*")
result.stdout.no_fnmatch_line("*stderr contents begin*")

@pytest.mark.parametrize("cap", ["capsys", "capfd"])
def test_fixture_use_by_other_fixtures_teardown(self, testdir, cap):
@@ -720,7 +716,7 @@ def test_capture_conftest_runtest_setup(testdir):
testdir.makepyfile("def test_func(): pass")
result = testdir.runpytest()
assert result.ret == 0
assert "hello19" not in result.stdout.str()
result.stdout.no_fnmatch_line("*hello19*")

def test_capture_badoutput_issue412(testdir):

@@ -824,6 +820,7 @@ def test_dontreadfrominput():
from _pytest.capture import DontReadFromInput

f = DontReadFromInput()
assert f.buffer is f
assert not f.isatty()
pytest.raises(IOError, f.read)
pytest.raises(IOError, f.readlines)

@@ -833,20 +830,6 @@ def test_dontreadfrominput():
f.close() # just for completeness

def test_dontreadfrominput_buffer_python3():
from _pytest.capture import DontReadFromInput

f = DontReadFromInput()
fb = f.buffer
assert not fb.isatty()
pytest.raises(IOError, fb.read)
pytest.raises(IOError, fb.readlines)
iter_f = iter(f)
pytest.raises(IOError, next, iter_f)
pytest.raises(ValueError, fb.fileno)
f.close() # just for completeness
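The hunk above folds the separate buffer test into `test_dontreadfrominput` by making the object serve as its own `buffer`. A stripped-down stand-in illustrating that shape (my sketch, not pytest's actual `DontReadFromInput` class):

```python
class DontReadFromInput:
    """Stand-in for stdin while output is captured: any read is an error."""

    def read(self, *args):
        raise IOError("reading from stdin while output is captured")

    readline = readlines = __next__ = read

    def __iter__(self):
        return self

    def isatty(self):
        return False

    @property
    def buffer(self):
        # binary-mode access goes through the same guard object,
        # so one test can cover both text and binary reads
        return self

    def close(self):
        pass
```

Returning `self` from `buffer` is what lets the test assert `f.buffer is f` and drop the dedicated `test_dontreadfrominput_buffer_python3`.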

@pytest.fixture
def tmpfile(testdir):
f = testdir.makepyfile("").open("wb+")

@@ -856,8 +839,8 @@ def tmpfile(testdir):

@needsosdup
def test_dupfile(tmpfile):
flist = []
def test_dupfile(tmpfile) -> None:
flist = [] # type: List[TextIO]
for i in range(5):
nf = capture.safe_text_dupfile(tmpfile, "wb")
assert nf != tmpfile

@@ -903,9 +886,9 @@ def lsof_check():
pid = os.getpid()
try:
out = subprocess.check_output(("lsof", "-p", str(pid))).decode()
except (OSError, subprocess.CalledProcessError, UnicodeDecodeError):
except (OSError, subprocess.CalledProcessError, UnicodeDecodeError) as exc:
# about UnicodeDecodeError, see note on pytester
pytest.skip("could not run 'lsof'")
pytest.skip("could not run 'lsof' ({!r})".format(exc))
yield
out2 = subprocess.check_output(("lsof", "-p", str(pid))).decode()
len1 = len([x for x in out.split("\n") if "REG" in x])
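`lsof_check` compares the count of open regular files before and after the wrapped block. A self-contained sketch of that comparison logic (the helper names here are hypothetical; the real fixture shells out to `lsof` as shown above):

```python
from contextlib import contextmanager

def count_reg_lines(lsof_output: str) -> int:
    # regular-file entries in `lsof` output carry the "REG" file type
    return len([x for x in lsof_output.split("\n") if "REG" in x])

@contextmanager
def fd_leak_check(snapshot):
    """`snapshot` is a zero-arg callable returning current lsof output."""
    before = count_reg_lines(snapshot())
    yield
    after = count_reg_lines(snapshot())
    assert after <= before, "leaked {} file descriptor(s)".format(after - before)
```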

@@ -1387,7 +1370,7 @@ def test_crash_on_closing_tmpfile_py27(testdir):
result = testdir.runpytest_subprocess(str(p))
assert result.ret == 0
assert result.stderr.str() == ""
assert "IOError" not in result.stdout.str()
result.stdout.no_fnmatch_line("*IOError*")

def test_pickling_and_unpickling_encoded_file():

@@ -1501,11 +1484,9 @@ def test_typeerror_encodedfile_write(testdir):
"""
)
result_without_capture = testdir.runpytest("-s", str(p))
result_with_capture = testdir.runpytest(str(p))
assert result_with_capture.ret == result_without_capture.ret
result_with_capture.stdout.fnmatch_lines(
["E TypeError: write() argument must be str, not bytes"]
["E * TypeError: write() argument must be str, not bytes"]
)
@@ -139,7 +139,7 @@ class TestCollectFS:

# by default, ignore tests inside a virtualenv
result = testdir.runpytest()
assert "test_invenv" not in result.stdout.str()
result.stdout.no_fnmatch_line("*test_invenv*")
# allow test collection if user insists
result = testdir.runpytest("--collect-in-virtualenv")
assert "test_invenv" in result.stdout.str()

@@ -165,7 +165,7 @@ class TestCollectFS:
testfile = testdir.tmpdir.ensure(".virtual", "test_invenv.py")
testfile.write("def test_hello(): pass")
result = testdir.runpytest("--collect-in-virtualenv")
assert "test_invenv" not in result.stdout.str()
result.stdout.no_fnmatch_line("*test_invenv*")
# ...unless the virtualenv is explicitly given on the CLI
result = testdir.runpytest("--collect-in-virtualenv", ".virtual")
assert "test_invenv" in result.stdout.str()

@@ -364,7 +364,7 @@ class TestCustomConftests:
testdir.makepyfile(test_world="def test_hello(): pass")
result = testdir.runpytest()
assert result.ret == ExitCode.NO_TESTS_COLLECTED
assert "passed" not in result.stdout.str()
result.stdout.no_fnmatch_line("*passed*")
result = testdir.runpytest("--XX")
assert result.ret == 0
assert "passed" in result.stdout.str()

@@ -402,7 +402,7 @@ class TestCustomConftests:
)
testdir.mkdir("sub")
testdir.makepyfile("def test_x(): pass")
result = testdir.runpytest("--collect-only")
result = testdir.runpytest("--co")
result.stdout.fnmatch_lines(["*MyModule*", "*test_x*"])

def test_pytest_collect_file_from_sister_dir(self, testdir):

@@ -433,7 +433,7 @@ class TestCustomConftests:
p = testdir.makepyfile("def test_x(): pass")
p.copy(sub1.join(p.basename))
p.copy(sub2.join(p.basename))
result = testdir.runpytest("--collect-only")
result = testdir.runpytest("--co")
result.stdout.fnmatch_lines(["*MyModule1*", "*MyModule2*", "*test_x*"])
@@ -486,7 +486,7 @@ class TestSession:
p = testdir.makepyfile("def test_func(): pass")
id = "::".join([p.basename, "test_func"])
items, hookrec = testdir.inline_genitems(id)
item, = items
(item,) = items
assert item.name == "test_func"
newid = item.nodeid
assert newid == id

@@ -605,9 +605,9 @@ class TestSession:
testdir.makepyfile("def test_func(): pass")
items, hookrec = testdir.inline_genitems()
assert len(items) == 1
item, = items
(item,) = items
items2, hookrec = testdir.inline_genitems(item.nodeid)
item2, = items2
(item2,) = items2
assert item2.name == item.name
assert item2.fspath == item.fspath

@@ -622,7 +622,7 @@ class TestSession:
arg = p.basename + "::TestClass::test_method"
items, hookrec = testdir.inline_genitems(arg)
assert len(items) == 1
item, = items
(item,) = items
assert item.nodeid.endswith("TestClass::test_method")
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ["test_method"]
@@ -859,12 +859,16 @@ def test_exit_on_collection_with_maxfail_smaller_than_n_errors(testdir):

res = testdir.runpytest("--maxfail=1")
assert res.ret == 1

res.stdout.fnmatch_lines(
["*ERROR collecting test_02_import_error.py*", "*No module named *asdfa*"]
[
"collected 1 item / 1 error",
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
"*! stopping after 1 failures !*",
"*= 1 error in *",
]
)

assert "test_03" not in res.stdout.str()
res.stdout.no_fnmatch_line("*test_03*")

def test_exit_on_collection_with_maxfail_bigger_than_n_errors(testdir):

@@ -876,7 +880,6 @@ def test_exit_on_collection_with_maxfail_bigger_than_n_errors(testdir):

res = testdir.runpytest("--maxfail=4")
assert res.ret == 2

res.stdout.fnmatch_lines(
[
"collected 2 items / 2 errors",

@@ -884,6 +887,8 @@ def test_exit_on_collection_with_maxfail_bigger_than_n_errors(testdir):
"*No module named *asdfa*",
"*ERROR collecting test_03_import_error.py*",
"*No module named *asdfa*",
"*! Interrupted: 2 errors during collection !*",
"*= 2 errors in *",
]
)
@@ -899,7 +904,7 @@ def test_continue_on_collection_errors(testdir):
assert res.ret == 1

res.stdout.fnmatch_lines(
["collected 2 items / 2 errors", "*1 failed, 1 passed, 2 error*"]
["collected 2 items / 2 errors", "*1 failed, 1 passed, 2 errors*"]
)

@@ -916,7 +921,7 @@ def test_continue_on_collection_errors_maxfail(testdir):
res = testdir.runpytest("--continue-on-collection-errors", "--maxfail=3")
assert res.ret == 1

res.stdout.fnmatch_lines(["collected 2 items / 2 errors", "*1 failed, 2 error*"])
res.stdout.fnmatch_lines(["collected 2 items / 2 errors", "*1 failed, 2 errors*"])

def test_fixture_scope_sibling_conftests(testdir):

@@ -1003,12 +1008,12 @@ def test_collect_init_tests(testdir):
result.stdout.fnmatch_lines(
["<Package */tests>", " <Module test_foo.py>", " <Function test_foo>"]
)
assert "test_init" not in result.stdout.str()
result.stdout.no_fnmatch_line("*test_init*")
result = testdir.runpytest("./tests/__init__.py", "--collect-only")
result.stdout.fnmatch_lines(
["<Package */tests>", " <Module __init__.py>", " <Function test_init>"]
)
assert "test_foo" not in result.stdout.str()
result.stdout.no_fnmatch_line("*test_foo*")

def test_collect_invalid_signature_message(testdir):

@@ -1260,7 +1265,7 @@ def test_collector_respects_tbstyle(testdir):
' File "*/test_collector_respects_tbstyle.py", line 1, in <module>',
" assert 0",
"AssertionError: assert 0",
"*! Interrupted: 1 errors during collection !*",
"*! Interrupted: 1 error during collection !*",
"*= 1 error in *",
]
)
@@ -4,6 +4,7 @@ from functools import wraps

import pytest
from _pytest.compat import _PytestWrapper
from _pytest.compat import cached_property
from _pytest.compat import get_real_func
from _pytest.compat import is_generator
from _pytest.compat import safe_getattr

@@ -178,3 +179,23 @@ def test_safe_isclass():
assert False, "Should be ignored"

assert safe_isclass(CrappyClass()) is False

def test_cached_property() -> None:
ncalls = 0

class Class:
@cached_property
def prop(self) -> int:
nonlocal ncalls
ncalls += 1
return ncalls

c1 = Class()
assert ncalls == 0
assert c1.prop == 1
assert c1.prop == 1
c2 = Class()
assert ncalls == 1
assert c2.prop == 2
assert c1.prop == 1
|
||||
|
|
|
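The `test_cached_property` hunk above exercises `_pytest.compat.cached_property`. As a rough illustration of why the function body runs only once per instance, here is a minimal standalone sketch of such a non-data descriptor (my own toy version, not pytest's actual implementation): the computed value is stored in the instance `__dict__`, so later attribute lookups bypass the descriptor entirely.

```python
class cached_property:
    """Non-data descriptor: compute once, then cache in the instance dict."""

    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        # Store the computed value under the attribute name; future lookups
        # find it in instance.__dict__ and never call __get__ again.
        value = instance.__dict__[self.name] = self.func(instance)
        return value


class Demo:
    ncalls = 0

    @cached_property
    def prop(self):
        Demo.ncalls += 1
        return Demo.ncalls


d = Demo()
print(d.prop, d.prop, Demo.ncalls)  # the property body runs only once per instance
```

This mirrors the shape of the test in the diff: repeated reads of `prop` on one instance return the same value, while a second instance triggers one more computation.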
@@ -1,17 +1,18 @@
import os
import sys
import textwrap
-from pathlib import Path

import _pytest._code
import pytest
from _pytest.compat import importlib_metadata
from _pytest.config import _iter_rewritable_modules
from _pytest.config import Config
from _pytest.config.exceptions import UsageError
from _pytest.config.findpaths import determine_setup
from _pytest.config.findpaths import get_common_ancestor
from _pytest.config.findpaths import getcfg
from _pytest.main import ExitCode
+from _pytest.pathlib import Path


class TestParseIni:

@@ -456,7 +457,7 @@ class TestConfigFromdictargs:

        config = Config.fromdictargs(option_dict, args)
        assert config.args == ["a", "b"]
-        assert config.invocation_params.args == args
+        assert config.invocation_params.args == tuple(args)
        assert config.option.verbose == 4
        assert config.option.capture == "no"


@@ -1235,7 +1236,7 @@ def test_invocation_args(testdir):
    call = calls[0]
    config = call.item.config

-    assert config.invocation_params.args == [p, "-v"]
+    assert config.invocation_params.args == (p, "-v")
    assert config.invocation_params.dir == Path(str(testdir.tmpdir))

    plugins = config.invocation_params.plugins

@@ -1243,6 +1244,10 @@ def test_invocation_args(testdir):
    assert plugins[0] is plugin
    assert type(plugins[1]).__name__ == "Collect"  # installed by testdir.inline_run()

+    # args cannot be None
+    with pytest.raises(TypeError):
+        Config.InvocationParams(args=None, plugins=None, dir=Path())
+

@pytest.mark.parametrize(
    "spec",
    [

@@ -1286,7 +1291,7 @@ def test_config_blocked_default_plugins(testdir, plugin):
    if plugin != "terminal":
        result.stdout.fnmatch_lines(["* 1 failed in *"])
    else:
-        assert result.stdout.lines == [""]
+        assert result.stdout.lines == []


class TestSetupCfg:
@@ -1,12 +1,12 @@
import os
import textwrap
-from pathlib import Path

import py

import pytest
from _pytest.config import PytestPluginManager
from _pytest.main import ExitCode
+from _pytest.pathlib import Path


def ConftestWithSetinitial(path):

@@ -187,7 +187,7 @@ def test_conftest_confcutdir(testdir):
    )
    result = testdir.runpytest("-h", "--confcutdir=%s" % x, x)
    result.stdout.fnmatch_lines(["*--xyz*"])
-    assert "warning: could not load initial" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*warning: could not load initial*")


@pytest.mark.skipif(

@@ -648,5 +648,5 @@ def test_required_option_help(testdir):
        )
    )
    result = testdir.runpytest("-h", x)
-    assert "argument --xyz is required" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*argument --xyz is required*")
    assert "general:" in result.stdout.str()
@@ -239,8 +239,8 @@ class TestDoctests:
            ]
        )
        # lines below should be trimmed out
-        assert "text-line-2" not in result.stdout.str()
-        assert "text-line-after" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*text-line-2*")
+        result.stdout.no_fnmatch_line("*text-line-after*")

    def test_docstring_full_context_around_error(self, testdir):
        """Test that we show the whole context before the actual line of a failing

@@ -334,7 +334,7 @@ class TestDoctests:
            [
                "*ERROR collecting hello.py*",
                "*{e}: No module named *asdals*".format(e=MODULE_NOT_FOUND_ERROR),
-                "*Interrupted: 1 errors during collection*",
+                "*Interrupted: 1 error during collection*",
            ]
        )

@@ -839,7 +839,8 @@ class TestLiterals:
        reprec = testdir.inline_run()
        reprec.assertoutcome(failed=1)

-    def test_number_re(self):
+    def test_number_re(self) -> None:
+        _number_re = _get_checker()._number_re  # type: ignore
        for s in [
            "1.",
            "+1.",

@@ -861,12 +862,12 @@ class TestLiterals:
            "-1.2e-3",
        ]:
            print(s)
-            m = _get_checker()._number_re.match(s)
+            m = _number_re.match(s)
            assert m is not None
            assert float(m.group()) == pytest.approx(float(s))
        for s in ["1", "abc"]:
            print(s)
-            assert _get_checker()._number_re.match(s) is None
+            assert _number_re.match(s) is None

    @pytest.mark.parametrize("config_mode", ["ini", "comment"])
    def test_number_precision(self, testdir, config_mode):

@@ -1177,7 +1178,7 @@ class TestDoctestAutoUseFixtures:
        """
        )
        result = testdir.runpytest("--doctest-modules")
-        assert "FAILURES" not in str(result.stdout.str())
+        result.stdout.no_fnmatch_line("*FAILURES*")
        result.stdout.fnmatch_lines(["*=== 1 passed in *"])

    @pytest.mark.parametrize("scope", SCOPES)

@@ -1209,7 +1210,7 @@ class TestDoctestAutoUseFixtures:
        """
        )
        result = testdir.runpytest("--doctest-modules")
-        assert "FAILURES" not in str(result.stdout.str())
+        str(result.stdout.no_fnmatch_line("*FAILURES*"))
        result.stdout.fnmatch_lines(["*=== 1 passed in *"])
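The `test_number_re` hunks above check the regex the doctest NUMBER checker uses to find floating-point literals in expected output. As a rough sketch of what such a pattern has to accept and reject (an illustrative regex of my own, not pytest's actual `_number_re`): a literal counts as a float only if it carries a decimal point or an exponent, so bare integers like `1` are left alone.

```python
import re

# Optional sign, then digits with a required '.' (or a bare mantissa that is
# immediately followed by an exponent), then an optional exponent part.
# Illustrative only -- not the regex pytest ships.
NUMBER = re.compile(
    r"""
    [+-]?
    (?: \d+\.\d* | \.\d+ | \d+(?=[eE]) )
    (?: [eE][+-]?\d+ )?
    """,
    re.VERBOSE,
)

for s in ["1.", "+1.", "-1.2e-3", ".5"]:
    m = NUMBER.fullmatch(s)
    assert m is not None and float(m.group()) == float(s)

for s in ["1", "abc"]:
    assert NUMBER.fullmatch(s) is None
```

The assertions mirror the structure of the test in the diff: accepted strings must round-trip through `float()`, and plain integers must not match.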
@@ -58,13 +58,13 @@ def test_timeout(testdir, enabled):
        """
        import time
        def test_timeout():
-            time.sleep(2.0)
+            time.sleep(0.1)
        """
    )
    testdir.makeini(
        """
        [pytest]
-        faulthandler_timeout = 1
+        faulthandler_timeout = 0.01
        """
    )
    args = ["-p", "no:faulthandler"] if not enabled else []
@@ -1,7 +1,6 @@
import os
import platform
from datetime import datetime
-from pathlib import Path
from xml.dom import minidom

import py

@@ -9,6 +8,7 @@ import xmlschema

import pytest
from _pytest.junitxml import LogXML
+from _pytest.pathlib import Path
from _pytest.reports import BaseReport


@@ -477,22 +477,25 @@ class TestPython:
        assert "ValueError" in fnode.toxml()
        systemout = fnode.next_sibling
        assert systemout.tag == "system-out"
-        assert "hello-stdout" in systemout.toxml()
-        assert "info msg" not in systemout.toxml()
+        systemout_xml = systemout.toxml()
+        assert "hello-stdout" in systemout_xml
+        assert "info msg" not in systemout_xml
        systemerr = systemout.next_sibling
        assert systemerr.tag == "system-err"
-        assert "hello-stderr" in systemerr.toxml()
-        assert "info msg" not in systemerr.toxml()
+        systemerr_xml = systemerr.toxml()
+        assert "hello-stderr" in systemerr_xml
+        assert "info msg" not in systemerr_xml

        if junit_logging == "system-out":
-            assert "warning msg" in systemout.toxml()
-            assert "warning msg" not in systemerr.toxml()
+            assert "warning msg" in systemout_xml
+            assert "warning msg" not in systemerr_xml
        elif junit_logging == "system-err":
-            assert "warning msg" not in systemout.toxml()
-            assert "warning msg" in systemerr.toxml()
-        elif junit_logging == "no":
-            assert "warning msg" not in systemout.toxml()
-            assert "warning msg" not in systemerr.toxml()
+            assert "warning msg" not in systemout_xml
+            assert "warning msg" in systemerr_xml
+        else:
+            assert junit_logging == "no"
+            assert "warning msg" not in systemout_xml
+            assert "warning msg" not in systemerr_xml

    @parametrize_families
    def test_failure_verbose_message(self, testdir, run_and_parse, xunit_family):

@@ -1216,7 +1219,7 @@ def test_runs_twice(testdir, run_and_parse):
    )

    result, dom = run_and_parse(f, f)
-    assert "INTERNALERROR" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*INTERNALERROR*")
    first, second = [x["classname"] for x in dom.find_by_tag("testcase")]
    assert first == second

@@ -1231,7 +1234,7 @@ def test_runs_twice_xdist(testdir, run_and_parse):
    )

    result, dom = run_and_parse(f, "--dist", "each", "--tx", "2*popen")
-    assert "INTERNALERROR" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*INTERNALERROR*")
    first, second = [x["classname"] for x in dom.find_by_tag("testcase")]
    assert first == second

@@ -1271,7 +1274,7 @@ def test_fancy_items_regression(testdir, run_and_parse):

    result, dom = run_and_parse()

-    assert "INTERNALERROR" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*INTERNALERROR*")

    items = sorted("%(classname)s %(name)s" % x for x in dom.find_by_tag("testcase"))
    import pprint
|
@ -314,6 +314,21 @@ def test_keyword_option_parametrize(spec, testdir):
|
|||
assert list(passed) == list(passed_result)
|
||||
|
||||
|
||||
def test_parametrize_with_module(testdir):
|
||||
testdir.makepyfile(
|
||||
"""
|
||||
import pytest
|
||||
@pytest.mark.parametrize("arg", [pytest,])
|
||||
def test_func(arg):
|
||||
pass
|
||||
"""
|
||||
)
|
||||
rec = testdir.inline_run()
|
||||
passed, skipped, fail = rec.listoutcomes()
|
||||
expected_id = "test_func[" + pytest.__name__ + "]"
|
||||
assert passed[0].nodeid.split("::")[-1] == expected_id
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"spec",
|
||||
[
|
||||
|
@ -831,6 +846,12 @@ class TestMarkDecorator:
|
|||
def test__eq__(self, lhs, rhs, expected):
|
||||
assert (lhs == rhs) == expected
|
||||
|
||||
def test_aliases(self) -> None:
|
||||
md = pytest.mark.foo(1, "2", three=3)
|
||||
assert md.name == "foo"
|
||||
assert md.args == (1, "2")
|
||||
assert md.kwargs == {"three": 3}
|
||||
|
||||
|
||||
@pytest.mark.parametrize("mark", [None, "", "skip", "xfail"])
|
||||
def test_parameterset_for_parametrize_marks(testdir, mark):
|
||||
|
@ -891,7 +912,7 @@ def test_parameterset_for_fail_at_collect(testdir):
|
|||
result = testdir.runpytest(str(p1))
|
||||
result.stdout.fnmatch_lines(
|
||||
[
|
||||
"collected 0 items / 1 errors",
|
||||
"collected 0 items / 1 error",
|
||||
"* ERROR collecting test_parameterset_for_fail_at_collect.py *",
|
||||
"Empty parameter set in 'test' at line 3",
|
||||
"*= 1 error in *",
|
||||
|
@ -990,7 +1011,7 @@ def test_markers_from_parametrize(testdir):
|
|||
def test_pytest_param_id_requires_string():
|
||||
with pytest.raises(TypeError) as excinfo:
|
||||
pytest.param(id=True)
|
||||
msg, = excinfo.value.args
|
||||
(msg,) = excinfo.value.args
|
||||
assert msg == "Expected id to be a string, got <class 'bool'>: True"
|
||||
|
||||
|
||||
|
|
|
@@ -15,6 +15,7 @@ def _modules():
    )


+@pytest.mark.slow
@pytest.mark.parametrize("module", _modules())
def test_no_warnings(module):
    # fmt: off
@@ -193,7 +193,7 @@ class TestPDB:
        )
        child = testdir.spawn_pytest("-rs --pdb %s" % p1)
        child.expect("Skipping also with pdb active")
-        child.expect("1 skipped in")
+        child.expect_exact("= \x1b[33m\x1b[1m1 skipped\x1b[0m\x1b[33m in")
        child.sendeof()
        self.flush(child)

@@ -221,7 +221,7 @@ class TestPDB:
        child.sendeof()
        rest = child.read().decode("utf8")
        assert "Exit: Quitting debugger" in rest
-        assert "= 1 failed in" in rest
+        assert "= \x1b[31m\x1b[1m1 failed\x1b[0m\x1b[31m in" in rest
        assert "def test_1" not in rest
        assert "get rekt" not in rest
        self.flush(child)

@@ -466,7 +466,6 @@ class TestPDB:
    def test_pdb_interaction_doctest(self, testdir, monkeypatch):
        p1 = testdir.makepyfile(
            """
-            import pytest
            def function_1():
                '''
                >>> i = 0

@@ -485,9 +484,32 @@ class TestPDB:

        child.sendeof()
        rest = child.read().decode("utf8")
        assert "! _pytest.outcomes.Exit: Quitting debugger !" in rest
        assert "BdbQuit" not in rest
        assert "1 failed" in rest
        self.flush(child)

+    def test_doctest_set_trace_quit(self, testdir, monkeypatch):
+        p1 = testdir.makepyfile(
+            """
+            def function_1():
+                '''
+                >>> __import__('pdb').set_trace()
+                '''
+            """
+        )
+        # NOTE: does not use pytest.set_trace, but Python's patched pdb,
+        # therefore "-s" is required.
+        child = testdir.spawn_pytest("--doctest-modules --pdb -s %s" % p1)
+        child.expect("Pdb")
+        child.sendline("q")
+        rest = child.read().decode("utf8")
+
+        assert "! _pytest.outcomes.Exit: Quitting debugger !" in rest
+        assert "= \x1b[33mno tests ran\x1b[0m\x1b[33m in" in rest
+        assert "BdbQuit" not in rest
+        assert "UNEXPECTED EXCEPTION" not in rest
+
    def test_pdb_interaction_capturing_twice(self, testdir):
        p1 = testdir.makepyfile(
            """

@@ -703,7 +725,7 @@ class TestPDB:
            assert "> PDB continue (IO-capturing resumed) >" in rest
        else:
            assert "> PDB continue >" in rest
-        assert "1 passed in" in rest
+        assert "= \x1b[32m\x1b[1m1 passed\x1b[0m\x1b[32m in" in rest

    def test_pdb_used_outside_test(self, testdir):
        p1 = testdir.makepyfile(

@@ -1019,7 +1041,7 @@ class TestTraceOption:
        child.sendline("q")
        child.expect_exact("Exit: Quitting debugger")
        rest = child.read().decode("utf8")
-        assert "2 passed in" in rest
+        assert "= \x1b[32m\x1b[1m2 passed\x1b[0m\x1b[32m in" in rest
        assert "reading from stdin while output" not in rest
        # Only printed once - not on stderr.
        assert "Exit: Quitting debugger" not in child.before.decode("utf8")

@@ -1064,7 +1086,7 @@ class TestTraceOption:
        child.sendline("c")
        child.expect_exact("> PDB continue (IO-capturing resumed) >")
        rest = child.read().decode("utf8")
-        assert "6 passed in" in rest
+        assert "= \x1b[32m\x1b[1m6 passed\x1b[0m\x1b[32m in" in rest
        assert "reading from stdin while output" not in rest
        # Only printed once - not on stderr.
        assert "Exit: Quitting debugger" not in child.before.decode("utf8")

@@ -1175,7 +1197,7 @@ def test_pdb_suspends_fixture_capturing(testdir, fixture):

    TestPDB.flush(child)
    assert child.exitstatus == 0
-    assert "= 1 passed in " in rest
+    assert "= \x1b[32m\x1b[1m1 passed\x1b[0m\x1b[32m in" in rest
    assert "> PDB continue (IO-capturing resumed for fixture %s) >" % (fixture) in rest
@@ -135,6 +135,36 @@ class TestPytestPluginInteractions:
        ihook_b = session.gethookproxy(testdir.tmpdir.join("tests"))
        assert ihook_a is not ihook_b

+    def test_hook_with_addoption(self, testdir):
+        """Test that hooks can be used in a call to pytest_addoption"""
+        testdir.makepyfile(
+            newhooks="""
+            import pytest
+            @pytest.hookspec(firstresult=True)
+            def pytest_default_value():
+                pass
+            """
+        )
+        testdir.makepyfile(
+            myplugin="""
+            import newhooks
+            def pytest_addhooks(pluginmanager):
+                pluginmanager.add_hookspecs(newhooks)
+            def pytest_addoption(parser, pluginmanager):
+                default_value = pluginmanager.hook.pytest_default_value()
+                parser.addoption("--config", help="Config, defaults to %(default)s", default=default_value)
+            """
+        )
+        testdir.makeconftest(
+            """
+            pytest_plugins=("myplugin",)
+            def pytest_default_value():
+                return "default_value"
+            """
+        )
+        res = testdir.runpytest("--help")
+        res.stdout.fnmatch_lines(["*--config=CONFIG*default_value*"])
+

def test_default_markers(testdir):
    result = testdir.runpytest("--markers")
@@ -121,17 +121,6 @@ def test_runresult_assertion_on_xpassed(testdir):
    assert result.ret == 0


-def test_runresult_repr():
-    from _pytest.pytester import RunResult
-
-    assert (
-        repr(
-            RunResult(ret="ret", outlines=[""], errlines=["some", "errors"], duration=1)
-        )
-        == "<RunResult ret='ret' len(stdout.lines)=1 len(stderr.lines)=2 duration=1.00s>"
-    )
-
-
def test_xpassed_with_strict_is_considered_a_failure(testdir):
    testdir.makepyfile(
        """

@@ -406,6 +395,27 @@ def test_testdir_subprocess(testdir):
    assert testdir.runpytest_subprocess(testfile).ret == 0


+def test_testdir_subprocess_via_runpytest_arg(testdir) -> None:
+    testfile = testdir.makepyfile(
+        """
+        def test_testdir_subprocess(testdir):
+            import os
+            testfile = testdir.makepyfile(
+                \"""
+                import os
+                def test_one():
+                    assert {} != os.getpid()
+                \""".format(os.getpid())
+            )
+            assert testdir.runpytest(testfile).ret == 0
+        """
+    )
+    result = testdir.runpytest_subprocess(
+        "-p", "pytester", "--runpytest", "subprocess", testfile
+    )
+    assert result.ret == 0
+
+
def test_unicode_args(testdir):
    result = testdir.runpytest("-k", "💩")
    assert result.ret == ExitCode.NO_TESTS_COLLECTED

@@ -457,6 +467,81 @@ def test_linematcher_with_nonlist():
    assert lm._getlines(set()) == set()


+def test_linematcher_match_failure():
+    lm = LineMatcher(["foo", "foo", "bar"])
+    with pytest.raises(pytest.fail.Exception) as e:
+        lm.fnmatch_lines(["foo", "f*", "baz"])
+    assert e.value.msg.splitlines() == [
+        "exact match: 'foo'",
+        "fnmatch: 'f*'",
+        "   with: 'foo'",
+        "nomatch: 'baz'",
+        "    and: 'bar'",
+        "remains unmatched: 'baz'",
+    ]
+
+    lm = LineMatcher(["foo", "foo", "bar"])
+    with pytest.raises(pytest.fail.Exception) as e:
+        lm.re_match_lines(["foo", "^f.*", "baz"])
+    assert e.value.msg.splitlines() == [
+        "exact match: 'foo'",
+        "re.match: '^f.*'",
+        "    with: 'foo'",
+        " nomatch: 'baz'",
+        "     and: 'bar'",
+        "remains unmatched: 'baz'",
+    ]
+
+
+@pytest.mark.parametrize("function", ["no_fnmatch_line", "no_re_match_line"])
+def test_no_matching(function):
+    if function == "no_fnmatch_line":
+        good_pattern = "*.py OK*"
+        bad_pattern = "*X.py OK*"
+    else:
+        assert function == "no_re_match_line"
+        good_pattern = r".*py OK"
+        bad_pattern = r".*Xpy OK"
+
+    lm = LineMatcher(
+        [
+            "cachedir: .pytest_cache",
+            "collecting ... collected 1 item",
+            "",
+            "show_fixtures_per_test.py OK",
+            "=== elapsed 1s ===",
+        ]
+    )
+
+    # check the function twice to ensure we don't accumulate the internal buffer
+    for i in range(2):
+        with pytest.raises(pytest.fail.Exception) as e:
+            func = getattr(lm, function)
+            func(good_pattern)
+        obtained = str(e.value).splitlines()
+        if function == "no_fnmatch_line":
+            assert obtained == [
+                "nomatch: '{}'".format(good_pattern),
+                "    and: 'cachedir: .pytest_cache'",
+                "    and: 'collecting ... collected 1 item'",
+                "    and: ''",
+                "fnmatch: '{}'".format(good_pattern),
+                "   with: 'show_fixtures_per_test.py OK'",
+            ]
+        else:
+            assert obtained == [
+                "nomatch: '{}'".format(good_pattern),
+                "     and: 'cachedir: .pytest_cache'",
+                "     and: 'collecting ... collected 1 item'",
+                "     and: ''",
+                "re.match: '{}'".format(good_pattern),
+                "    with: 'show_fixtures_per_test.py OK'",
+            ]
+
+    func = getattr(lm, function)
+    func(bad_pattern)  # bad pattern does not match any line: passes
+
+
def test_pytester_addopts(request, monkeypatch):
    monkeypatch.setenv("PYTEST_ADDOPTS", "--orig-unused")

@@ -570,3 +655,22 @@ def test_spawn_uses_tmphome(testdir):
    child = testdir.spawn_pytest(str(p1))
    out = child.read()
    assert child.wait() == 0, out.decode("utf8")
+
+
+def test_run_result_repr():
+    outlines = ["some", "normal", "output"]
+    errlines = ["some", "nasty", "errors", "happened"]
+
+    # known exit code
+    r = pytester.RunResult(1, outlines, errlines, duration=0.5)
+    assert (
+        repr(r) == "<RunResult ret=ExitCode.TESTS_FAILED len(stdout.lines)=3"
+        " len(stderr.lines)=4 duration=0.50s>"
+    )
+
+    # unknown exit code: just the number
+    r = pytester.RunResult(99, outlines, errlines, duration=0.5)
+    assert (
+        repr(r) == "<RunResult ret=99 len(stdout.lines)=3"
+        " len(stderr.lines)=4 duration=0.50s>"
+    )
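The new `no_fnmatch_line` / `no_re_match_line` helpers tested above assert that *no* output line matches a glob or regex pattern. A tiny self-contained sketch of the fnmatch variant's semantics (a hypothetical helper of my own, not pytest's `LineMatcher`):

```python
from fnmatch import fnmatch


def no_fnmatch_line(lines, pattern):
    """Fail if any line matches the fnmatch-style pattern (illustrative)."""
    for line in lines:
        if fnmatch(line, pattern):
            raise AssertionError(
                "unexpected match: {!r} matched line {!r}".format(pattern, line)
            )


lines = ["cachedir: .pytest_cache", "show_fixtures_per_test.py OK"]
no_fnmatch_line(lines, "*X.py OK*")  # no line matches: passes silently
try:
    no_fnmatch_line(lines, "*.py OK*")  # second line matches: fails
except AssertionError as exc:
    print(exc)
```

This is why the diff replaces `assert "..." not in result.stdout.str()` with `no_fnmatch_line(...)` throughout: the wildcard form matches whole lines and reports *which* line matched, instead of a bare substring check.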
@@ -330,7 +330,7 @@ class TestHooks:
        data = pytestconfig.hook.pytest_report_to_serializable(
            config=pytestconfig, report=rep
        )
-        assert data["_report_type"] == "TestReport"
+        assert data["$report_type"] == "TestReport"
        new_rep = pytestconfig.hook.pytest_report_from_serializable(
            config=pytestconfig, data=data
        )

@@ -352,7 +352,7 @@ class TestHooks:
        data = pytestconfig.hook.pytest_report_to_serializable(
            config=pytestconfig, report=rep
        )
-        assert data["_report_type"] == "CollectReport"
+        assert data["$report_type"] == "CollectReport"
        new_rep = pytestconfig.hook.pytest_report_from_serializable(
            config=pytestconfig, data=data
        )

@@ -376,7 +376,7 @@ class TestHooks:
        data = pytestconfig.hook.pytest_report_to_serializable(
            config=pytestconfig, report=rep
        )
-        data["_report_type"] = "Unknown"
+        data["$report_type"] = "Unknown"
        with pytest.raises(AssertionError):
            _ = pytestconfig.hook.pytest_report_from_serializable(
                config=pytestconfig, data=data
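The hunks above rename the serialization discriminator from `_report_type` to `$report_type`. A minimal sketch of the round-trip pattern these tests exercise (hypothetical classes and helper names, not pytest's actual serializer): the dict is tagged with the concrete report type on the way out, and that tag selects the class to rebuild on the way in, with unknown tags rejected.

```python
class TestReport:
    def __init__(self, outcome):
        self.outcome = outcome


class CollectReport:
    def __init__(self, outcome):
        self.outcome = outcome


REPORT_TYPES = {"TestReport": TestReport, "CollectReport": CollectReport}


def to_serializable(report):
    # Tag the payload with its concrete type so it can be rebuilt later.
    data = vars(report).copy()
    data["$report_type"] = type(report).__name__
    return data


def from_serializable(data):
    cls = REPORT_TYPES.get(data["$report_type"])
    assert cls is not None, "unknown report type: %r" % data["$report_type"]
    kwargs = {k: v for k, v in data.items() if k != "$report_type"}
    return cls(**kwargs)


data = to_serializable(TestReport("passed"))
rebuilt = from_serializable(data)
```

A `$`-prefixed key cannot collide with a Python attribute name, which is one plausible reason for preferring it over a leading underscore.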
@@ -483,13 +483,22 @@ def test_callinfo():
    assert ci.result == 0
    assert "result" in repr(ci)
+    assert repr(ci) == "<CallInfo when='123' result: 0>"
+    assert str(ci) == "<CallInfo when='123' result: 0>"

    ci = runner.CallInfo.from_call(lambda: 0 / 0, "123")
    assert ci.when == "123"
    assert not hasattr(ci, "result")
-    assert repr(ci) == "<CallInfo when='123' exception: division by zero>"
+    assert repr(ci) == "<CallInfo when='123' excinfo={!r}>".format(ci.excinfo)
+    assert str(ci) == repr(ci)
    assert ci.excinfo
    assert "exc" in repr(ci)

+    # Newlines are escaped.
+    def raise_assertion():
+        assert 0, "assert_msg"
+
+    ci = runner.CallInfo.from_call(raise_assertion, "call")
+    assert repr(ci) == "<CallInfo when='call' excinfo={!r}>".format(ci.excinfo)
+    assert "\n" not in repr(ci)
+

# design question: do we want general hooks in python files?

@@ -588,7 +597,7 @@ def test_pytest_exit_returncode(testdir):
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*! *Exit: some exit msg !*"])

-    assert _strip_resource_warnings(result.stderr.lines) == [""]
+    assert _strip_resource_warnings(result.stderr.lines) == []
    assert result.ret == 99

    # It prints to stderr also in case of exit during pytest_sessionstart.

@@ -603,8 +612,7 @@ def test_pytest_exit_returncode(testdir):
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*! *Exit: during_sessionstart !*"])
    assert _strip_resource_warnings(result.stderr.lines) == [
-        "Exit: during_sessionstart",
-        "",
+        "Exit: during_sessionstart"
    ]
    assert result.ret == 98

@@ -622,7 +630,7 @@ def test_pytest_fail_notrace_runtest(testdir):
    )
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["world", "hello"])
-    assert "def teardown_function" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*def teardown_function*")


def test_pytest_fail_notrace_collection(testdir):

@@ -637,7 +645,7 @@ def test_pytest_fail_notrace_collection(testdir):
    )
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["hello"])
-    assert "def some_internal_function()" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*def some_internal_function()*")


def test_pytest_fail_notrace_non_ascii(testdir):

@@ -655,7 +663,7 @@ def test_pytest_fail_notrace_non_ascii(testdir):
    )
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*test_hello*", "oh oh: ☺"])
-    assert "def test_hello" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*def test_hello*")


def test_pytest_no_tests_collected_exit_status(testdir):

@@ -820,7 +828,7 @@ def test_failure_in_setup(testdir):
    """
    )
    result = testdir.runpytest("--tb=line")
-    assert "def setup_module" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*def setup_module*")


def test_makereport_getsource(testdir):

@@ -832,7 +840,7 @@ def test_makereport_getsource(testdir):
    """
    )
    result = testdir.runpytest()
-    assert "INTERNALERROR" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*INTERNALERROR*")
    result.stdout.fnmatch_lines(["*else: assert False*"])


@@ -863,7 +871,7 @@ def test_makereport_getsource_dynamic_code(testdir, monkeypatch):
    """
    )
    result = testdir.runpytest("-vv")
-    assert "INTERNALERROR" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*INTERNALERROR*")
    result.stdout.fnmatch_lines(["*test_fix*", "*fixture*'missing'*not found*"])
@@ -234,10 +234,10 @@ def test_setup_funcarg_setup_when_outer_scope_fails(testdir):
            "*ValueError*42*",
            "*function2*",
            "*ValueError*42*",
-            "*2 error*",
+            "*2 errors*",
        ]
    )
-    assert "xyz43" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*xyz43*")


@pytest.mark.parametrize("arg", ["", "arg"])
@@ -102,15 +102,20 @@ class SessionTests:
        p = testdir.makepyfile(
            """
            import pytest

+            class reprexc(BaseException):
+                def __str__(self):
+                    return "Ha Ha fooled you, I'm a broken repr()."
+
            class BrokenRepr1(object):
                foo=0
                def __repr__(self):
-                    raise Exception("Ha Ha fooled you, I'm a broken repr().")
+                    raise reprexc

            class TestBrokenClass(object):
                def test_explicit_bad_repr(self):
                    t = BrokenRepr1()
-                    with pytest.raises(Exception, match="I'm a broken repr"):
+                    with pytest.raises(BaseException, match="broken repr"):
                        repr(t)

                def test_implicit_bad_repr1(self):

@@ -123,12 +128,7 @@ class SessionTests:
        passed, skipped, failed = reprec.listoutcomes()
        assert (len(passed), len(skipped), len(failed)) == (1, 0, 1)
        out = failed[0].longrepr.reprcrash.message
-        assert (
-            out.find(
-                """[Exception("Ha Ha fooled you, I'm a broken repr().") raised in repr()]"""
-            )
-            != -1
-        )
+        assert out.find("<[reprexc() raised in repr()] BrokenRepr1") != -1

    def test_broken_repr_with_showlocals_verbose(self, testdir):
        p = testdir.makepyfile(

@@ -151,7 +151,7 @@ class SessionTests:
        assert repr_locals.lines
        assert len(repr_locals.lines) == 1
        assert repr_locals.lines[0].startswith(
-            'x = <[NotImplementedError("") raised in repr()] ObjWithErrorInRepr'
+            "x = <[NotImplementedError() raised in repr()] ObjWithErrorInRepr"
        )

    def test_skip_file_by_conftest(self, testdir):
@@ -28,7 +28,7 @@ def test_show_only_active_fixtures(testdir, mode, dummy_yaml_custom_test):
    result.stdout.fnmatch_lines(
        ["*SETUP    F arg1*", "*test_arg1 (fixtures used: arg1)*", "*TEARDOWN F arg1*"]
    )
-    assert "_arg0" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*_arg0*")


def test_show_different_scopes(testdir, mode):
@@ -115,7 +115,7 @@ class TestEvaluator:
        )

    def test_skipif_class(self, testdir):
-        item, = testdir.getitems(
+        (item,) = testdir.getitems(
            """
            import pytest
            class TestClass(object):

@@ -731,23 +731,37 @@ def test_skipif_class(testdir):
def test_skipped_reasons_functional(testdir):
    testdir.makepyfile(
        test_one="""
+            import pytest
            from conftest import doskip

            def setup_function(func):
                doskip()

            def test_func():
                pass

            class TestClass(object):
                def test_method(self):
                    doskip()
+
+                @pytest.mark.skip("via_decorator")
+                def test_deco(self):
+                    assert 0
        """,
        conftest="""
-            import pytest
+            import pytest, sys
            def doskip():
+                assert sys._getframe().f_lineno == 3
                pytest.skip('test')
        """,
    )
    result = testdir.runpytest("-rs")
-    result.stdout.fnmatch_lines(["*SKIP*2*conftest.py:4: test"])
+    result.stdout.fnmatch_lines_random(
+        [
+            "SKIPPED [[]2[]] */conftest.py:4: test",
+            "SKIPPED [[]1[]] test_one.py:14: via_decorator",
+        ]
+    )
    assert result.ret == 0


@@ -886,7 +900,7 @@ def test_errors_in_xfail_skip_expressions(testdir):
            "    syntax error",
            markline,
            "SyntaxError: invalid syntax",
-            "*1 pass*2 error*",
+            "*1 pass*2 errors*",
        ]
    )

@@ -949,7 +963,7 @@ def test_xfail_test_setup_exception(testdir):
    result = testdir.runpytest(p)
    assert result.ret == 0
    assert "xfailed" in result.stdout.str()
-    assert "xpassed" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*xpassed*")


def test_imperativeskip_on_xfail_test(testdir):
@@ -164,7 +164,7 @@ def test_stop_on_collection_errors(broken_testdir, broken_first):
    if broken_first:
        files.reverse()
    result = broken_testdir.runpytest("-v", "--strict-markers", "--stepwise", *files)
-    result.stdout.fnmatch_lines("*errors during collection*")
+    result.stdout.fnmatch_lines("*error during collection*")


def test_xfail_handling(testdir):
@@ -21,30 +21,26 @@ from _pytest.terminal import getreportopt
 from _pytest.terminal import TerminalReporter
 
 DistInfo = collections.namedtuple("DistInfo", ["project_name", "version"])
+RED = r"\x1b\[31m"
+GREEN = r"\x1b\[32m"
+YELLOW = r"\x1b\[33m"
+RESET = r"\x1b\[0m"
 
 
 class Option:
-    def __init__(self, verbosity=0, fulltrace=False):
+    def __init__(self, verbosity=0):
         self.verbosity = verbosity
-        self.fulltrace = fulltrace
 
     @property
     def args(self):
         values = []
         values.append("--verbosity=%d" % self.verbosity)
-        if self.fulltrace:
-            values.append("--fulltrace")
         return values
 
 
 @pytest.fixture(
-    params=[
-        Option(verbosity=0),
-        Option(verbosity=1),
-        Option(verbosity=-1),
-        Option(fulltrace=True),
-    ],
-    ids=["default", "verbose", "quiet", "fulltrace"],
+    params=[Option(verbosity=0), Option(verbosity=1), Option(verbosity=-1)],
+    ids=["default", "verbose", "quiet"],
 )
 def option(request):
     return request.param
@@ -165,7 +161,7 @@ class TestTerminal:
         child.expect(r"collecting 2 items")
         child.expect(r"collected 2 items")
         rest = child.read().decode("utf8")
-        assert "2 passed in" in rest
+        assert "= \x1b[32m\x1b[1m2 passed\x1b[0m\x1b[32m in" in rest
 
     def test_itemreport_subclasses_show_subclassed_file(self, testdir):
         testdir.makepyfile(
@@ -205,9 +201,10 @@ class TestTerminal:
         result = testdir.runpytest("-vv")
         assert result.ret == 0
         result.stdout.fnmatch_lines(["*a123/test_hello123.py*PASS*"])
-        assert " <- " not in result.stdout.str()
+        result.stdout.no_fnmatch_line("* <- *")
 
-    def test_keyboard_interrupt(self, testdir, option):
+    @pytest.mark.parametrize("fulltrace", ("", "--fulltrace"))
+    def test_keyboard_interrupt(self, testdir, fulltrace):
         testdir.makepyfile(
             """
             def test_foobar():
@@ -219,7 +216,7 @@ class TestTerminal:
             """
         )
 
-        result = testdir.runpytest(*option.args, no_reraise_ctrlc=True)
+        result = testdir.runpytest(fulltrace, no_reraise_ctrlc=True)
         result.stdout.fnmatch_lines(
             [
                 "    def test_foobar():",
@@ -228,7 +225,7 @@ class TestTerminal:
                 "*_keyboard_interrupt.py:6: KeyboardInterrupt*",
             ]
         )
-        if option.fulltrace:
+        if fulltrace:
             result.stdout.fnmatch_lines(
                 ["*raise KeyboardInterrupt # simulating the user*"]
             )
@@ -560,7 +557,7 @@ class TestTerminalFunctional:
                 "*= 2 passed, 1 deselected in * =*",
             ]
         )
-        assert "= 1 deselected =" not in result.stdout.str()
+        result.stdout.no_fnmatch_line("*= 1 deselected =*")
         assert result.ret == 0
 
     def test_no_skip_summary_if_failure(self, testdir):
@@ -760,7 +757,7 @@ def test_fail_extra_reporting(testdir, monkeypatch):
     monkeypatch.setenv("COLUMNS", "80")
     testdir.makepyfile("def test_this(): assert 0, 'this_failed' * 100")
     result = testdir.runpytest()
-    assert "short test summary" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*short test summary*")
     result = testdir.runpytest("-rf")
     result.stdout.fnmatch_lines(
         [
@@ -773,13 +770,13 @@ def test_fail_extra_reporting(testdir, monkeypatch):
 def test_fail_reporting_on_pass(testdir):
     testdir.makepyfile("def test_this(): assert 1")
     result = testdir.runpytest("-rf")
-    assert "short test summary" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*short test summary*")
 
 
 def test_pass_extra_reporting(testdir):
     testdir.makepyfile("def test_this(): assert 1")
     result = testdir.runpytest()
-    assert "short test summary" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*short test summary*")
     result = testdir.runpytest("-rp")
     result.stdout.fnmatch_lines(["*test summary*", "PASS*test_pass_extra_reporting*"])
 
@@ -787,7 +784,7 @@ def test_pass_extra_reporting(testdir):
 def test_pass_reporting_on_fail(testdir):
     testdir.makepyfile("def test_this(): assert 0")
     result = testdir.runpytest("-rp")
-    assert "short test summary" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*short test summary*")
 
 
 def test_pass_output_reporting(testdir):
@@ -830,7 +827,7 @@ def test_color_no(testdir):
     testdir.makepyfile("def test_this(): assert 1")
     result = testdir.runpytest("--color=no")
     assert "test session starts" in result.stdout.str()
-    assert "\x1b[1m" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*\x1b[1m*")
 
 
 @pytest.mark.parametrize("verbose", [True, False])
@@ -852,7 +849,7 @@ def test_color_yes_collection_on_non_atty(testdir, verbose):
     result = testdir.runpytest(*args)
     assert "test session starts" in result.stdout.str()
     assert "\x1b[1m" in result.stdout.str()
-    assert "collecting 10 items" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*collecting 10 items*")
     if verbose:
         assert "collecting ..." in result.stdout.str()
     assert "collected 10 items" in result.stdout.str()
@@ -966,7 +963,31 @@ class TestGenericReporting:
         )
         result = testdir.runpytest("--maxfail=2", *option.args)
         result.stdout.fnmatch_lines(
-            ["*def test_1():*", "*def test_2():*", "*2 failed*"]
+            [
+                "*def test_1():*",
+                "*def test_2():*",
+                "*! stopping after 2 failures !*",
+                "*2 failed*",
+            ]
         )
 
+    def test_maxfailures_with_interrupted(self, testdir):
+        testdir.makepyfile(
+            """
+            def test(request):
+                request.session.shouldstop = "session_interrupted"
+                assert 0
+            """
+        )
+        result = testdir.runpytest("--maxfail=1", "-ra")
+        result.stdout.fnmatch_lines(
+            [
+                "*= short test summary info =*",
+                "FAILED *",
+                "*! stopping after 1 failures !*",
+                "*! session_interrupted !*",
+                "*= 1 failed in*",
+            ]
+        )
+
     def test_tb_option(self, testdir, option):
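The new `test_maxfailures_with_interrupted` exercises how `--maxfail` interacts with `session.shouldstop`: counting failures trips the "stopping after N failures" message, while a test can independently set `shouldstop` to its own reason string. A toy sketch of that bookkeeping (a simplification for illustration, not pytest's actual `Session` code):

```python
class Session:
    """Toy model of the stop-after-N-failures bookkeeping exercised above."""

    def __init__(self, maxfail=0):
        self.maxfail = maxfail
        self.testsfailed = 0
        self.shouldstop = False  # a test may also set this to a reason string

    def report_failure(self):
        # count the failure; once the threshold is reached, record a reason
        self.testsfailed += 1
        if self.maxfail and self.testsfailed >= self.maxfail:
            self.shouldstop = "stopping after %d failures" % self.testsfailed


session = Session(maxfail=2)
session.report_failure()
assert not session.shouldstop  # one failure: keep going
session.report_failure()
assert session.shouldstop == "stopping after 2 failures"
```

The message string deliberately mirrors the ungrammatical "stopping after 1 failures" form the test expects in the output.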
@@ -1215,7 +1236,7 @@ def test_terminal_summary_warnings_are_displayed(testdir):
             "*== 1 failed, 2 warnings in *",
         ]
     )
-    assert "None" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*None*")
     stdout = result.stdout.str()
     assert stdout.count("warning_from_test") == 1
     assert stdout.count("=== warnings summary ") == 2
@@ -1237,10 +1258,10 @@ def test_terminal_summary_warnings_header_once(testdir):
             "*= warnings summary =*",
             "*warning_from_test*",
             "*= short test summary info =*",
-            "*== 1 failed, 1 warnings in *",
+            "*== 1 failed, 1 warning in *",
         ]
     )
-    assert "None" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*None*")
     stdout = result.stdout.str()
     assert stdout.count("warning_from_test") == 1
     assert stdout.count("=== warnings summary ") == 1
@@ -1253,42 +1274,120 @@ def test_terminal_summary_warnings_header_once(testdir):
         # dict value, not the actual contents, so tuples of anything
         # suffice
         # Important statuses -- the highest priority of these always wins
-        ("red", "1 failed", {"failed": (1,)}),
-        ("red", "1 failed, 1 passed", {"failed": (1,), "passed": (1,)}),
-        ("red", "1 error", {"error": (1,)}),
-        ("red", "1 passed, 1 error", {"error": (1,), "passed": (1,)}),
+        ("red", [("1 failed", {"bold": True, "red": True})], {"failed": (1,)}),
+        (
+            "red",
+            [
+                ("1 failed", {"bold": True, "red": True}),
+                ("1 passed", {"bold": False, "green": True}),
+            ],
+            {"failed": (1,), "passed": (1,)},
+        ),
+        ("red", [("1 error", {"bold": True, "red": True})], {"error": (1,)}),
+        ("red", [("2 errors", {"bold": True, "red": True})], {"error": (1, 2)}),
+        (
+            "red",
+            [
+                ("1 passed", {"bold": False, "green": True}),
+                ("1 error", {"bold": True, "red": True}),
+            ],
+            {"error": (1,), "passed": (1,)},
+        ),
         # (a status that's not known to the code)
-        ("yellow", "1 weird", {"weird": (1,)}),
-        ("yellow", "1 passed, 1 weird", {"weird": (1,), "passed": (1,)}),
-        ("yellow", "1 warnings", {"warnings": (1,)}),
-        ("yellow", "1 passed, 1 warnings", {"warnings": (1,), "passed": (1,)}),
-        ("green", "5 passed", {"passed": (1, 2, 3, 4, 5)}),
+        ("yellow", [("1 weird", {"bold": True, "yellow": True})], {"weird": (1,)}),
+        (
+            "yellow",
+            [
+                ("1 passed", {"bold": False, "green": True}),
+                ("1 weird", {"bold": True, "yellow": True}),
+            ],
+            {"weird": (1,), "passed": (1,)},
+        ),
+        ("yellow", [("1 warning", {"bold": True, "yellow": True})], {"warnings": (1,)}),
+        (
+            "yellow",
+            [
+                ("1 passed", {"bold": False, "green": True}),
+                ("1 warning", {"bold": True, "yellow": True}),
+            ],
+            {"warnings": (1,), "passed": (1,)},
+        ),
+        (
+            "green",
+            [("5 passed", {"bold": True, "green": True})],
+            {"passed": (1, 2, 3, 4, 5)},
+        ),
         # "Boring" statuses.  These have no effect on the color of the summary
         # line.  Thus, if *every* test has a boring status, the summary line stays
         # at its default color, i.e. yellow, to warn the user that the test run
         # produced no useful information
-        ("yellow", "1 skipped", {"skipped": (1,)}),
-        ("green", "1 passed, 1 skipped", {"skipped": (1,), "passed": (1,)}),
-        ("yellow", "1 deselected", {"deselected": (1,)}),
-        ("green", "1 passed, 1 deselected", {"deselected": (1,), "passed": (1,)}),
-        ("yellow", "1 xfailed", {"xfailed": (1,)}),
-        ("green", "1 passed, 1 xfailed", {"xfailed": (1,), "passed": (1,)}),
-        ("yellow", "1 xpassed", {"xpassed": (1,)}),
-        ("green", "1 passed, 1 xpassed", {"xpassed": (1,), "passed": (1,)}),
+        ("yellow", [("1 skipped", {"bold": True, "yellow": True})], {"skipped": (1,)}),
+        (
+            "green",
+            [
+                ("1 passed", {"bold": True, "green": True}),
+                ("1 skipped", {"bold": False, "yellow": True}),
+            ],
+            {"skipped": (1,), "passed": (1,)},
+        ),
+        (
+            "yellow",
+            [("1 deselected", {"bold": True, "yellow": True})],
+            {"deselected": (1,)},
+        ),
+        (
+            "green",
+            [
+                ("1 passed", {"bold": True, "green": True}),
+                ("1 deselected", {"bold": False, "yellow": True}),
+            ],
+            {"deselected": (1,), "passed": (1,)},
+        ),
+        ("yellow", [("1 xfailed", {"bold": True, "yellow": True})], {"xfailed": (1,)}),
+        (
+            "green",
+            [
+                ("1 passed", {"bold": True, "green": True}),
+                ("1 xfailed", {"bold": False, "yellow": True}),
+            ],
+            {"xfailed": (1,), "passed": (1,)},
+        ),
+        ("yellow", [("1 xpassed", {"bold": True, "yellow": True})], {"xpassed": (1,)}),
+        (
+            "green",
+            [
+                ("1 passed", {"bold": True, "green": True}),
+                ("1 xpassed", {"bold": False, "yellow": True}),
+            ],
+            {"xpassed": (1,), "passed": (1,)},
+        ),
         # Likewise if no tests were found at all
-        ("yellow", "no tests ran", {}),
+        ("yellow", [("no tests ran", {"yellow": True})], {}),
         # Test the empty-key special case
-        ("yellow", "no tests ran", {"": (1,)}),
-        ("green", "1 passed", {"": (1,), "passed": (1,)}),
+        ("yellow", [("no tests ran", {"yellow": True})], {"": (1,)}),
+        (
+            "green",
+            [("1 passed", {"bold": True, "green": True})],
+            {"": (1,), "passed": (1,)},
+        ),
         # A couple more complex combinations
         (
             "red",
-            "1 failed, 2 passed, 3 xfailed",
+            [
+                ("1 failed", {"bold": True, "red": True}),
+                ("2 passed", {"bold": False, "green": True}),
+                ("3 xfailed", {"bold": False, "yellow": True}),
+            ],
             {"passed": (1, 2), "failed": (1,), "xfailed": (1, 2, 3)},
         ),
         (
             "green",
-            "1 passed, 2 skipped, 3 deselected, 2 xfailed",
+            [
+                ("1 passed", {"bold": True, "green": True}),
+                ("2 skipped", {"bold": False, "yellow": True}),
+                ("3 deselected", {"bold": False, "yellow": True}),
+                ("2 xfailed", {"bold": False, "yellow": True}),
+            ],
             {
                 "passed": (1,),
                 "skipped": (1, 2),
@@ -1314,11 +1413,11 @@ def test_skip_counting_towards_summary():
     r1 = DummyReport()
     r2 = DummyReport()
     res = build_summary_stats_line({"failed": (r1, r2)})
-    assert res == ("2 failed", "red")
+    assert res == ([("2 failed", {"bold": True, "red": True})], "red")
 
     r1.count_towards_summary = False
     res = build_summary_stats_line({"failed": (r1, r2)})
-    assert res == ("1 failed", "red")
+    assert res == ([("1 failed", {"bold": True, "red": True})], "red")
 
 
 class TestClassicOutputStyle:
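The parametrized cases above encode a color priority for the summary line: failures or errors force red; a status the code does not know about forces yellow; otherwise any pass makes the line green; and "boring" statuses alone (skipped, deselected, xfailed, xpassed) leave it at the default yellow. A condensed sketch of that decision, derived from the expected values in the table (not pytest's actual implementation):

```python
def summary_color(stats):
    """Pick the summary-line color from a dict mapping status -> reports."""
    known_boring = {"passed", "skipped", "deselected", "xfailed", "xpassed", ""}
    if "failed" in stats or "error" in stats:
        return "red"  # highest priority: something went wrong
    if any(key not in known_boring for key in stats):
        return "yellow"  # unknown outcome (e.g. "weird", "warnings"): draw attention
    if stats.get("passed"):
        return "green"
    return "yellow"  # only boring statuses, or no tests ran at all


assert summary_color({"failed": (1,), "passed": (1,)}) == "red"
assert summary_color({"passed": (1,), "weird": (1,)}) == "yellow"
assert summary_color({"passed": (1,), "skipped": (1,)}) == "green"
assert summary_color({}) == "yellow"
```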
@@ -1403,7 +1502,7 @@ class TestProgressOutputStyle:
             """
         )
         output = testdir.runpytest()
-        assert "ZeroDivisionError" not in output.stdout.str()
+        output.stdout.no_fnmatch_line("*ZeroDivisionError*")
         output.stdout.fnmatch_lines(["=* 2 passed in *="])
 
     def test_normal(self, many_tests_files, testdir):
@@ -1416,6 +1515,43 @@ class TestProgressOutputStyle:
             ]
         )
 
+    def test_colored_progress(self, testdir, monkeypatch):
+        monkeypatch.setenv("PY_COLORS", "1")
+        testdir.makepyfile(
+            test_bar="""
+                import pytest
+                @pytest.mark.parametrize('i', range(10))
+                def test_bar(i): pass
+            """,
+            test_foo="""
+                import pytest
+                import warnings
+                @pytest.mark.parametrize('i', range(5))
+                def test_foo(i):
+                    warnings.warn(DeprecationWarning("collection"))
+                    pass
+            """,
+            test_foobar="""
+                import pytest
+                @pytest.mark.parametrize('i', range(5))
+                def test_foobar(i): raise ValueError()
+            """,
+        )
+        output = testdir.runpytest()
+        output.stdout.re_match_lines(
+            [
+                r"test_bar.py ({green}\.{reset}){{10}}{green} \s+ \[ 50%\]{reset}".format(
+                    green=GREEN, reset=RESET
+                ),
+                r"test_foo.py ({green}\.{reset}){{5}}{yellow} \s+ \[ 75%\]{reset}".format(
+                    green=GREEN, reset=RESET, yellow=YELLOW
+                ),
+                r"test_foobar.py ({red}F{reset}){{5}}{red} \s+ \[100%\]{reset}".format(
+                    reset=RESET, red=RED
+                ),
+            ]
+        )
+
     def test_count(self, many_tests_files, testdir):
         testdir.makeini(
             """
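The new `test_colored_progress` asserts that each progress character is wrapped in an ANSI escape sequence: a green dot renders as `\x1b[32m.\x1b[0m`, which is what the `RED`/`GREEN`/`YELLOW`/`RESET` regex constants added at the top of the file match. A small illustration of producing and matching such output (illustrative helper, not pytest's rendering code):

```python
import re

GREEN = "\x1b[32m"
RESET = "\x1b[0m"


def colorize(text, color):
    """Wrap text in an ANSI color start/reset pair."""
    return color + text + RESET


# ten green dots, as a passing 10-test file would render them
line = "test_bar.py " + colorize(".", GREEN) * 10
assert re.search(r"(\x1b\[32m\.\x1b\[0m){10}", line)
```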
@@ -1495,7 +1631,7 @@ class TestProgressOutputStyle:
         )
 
         output = testdir.runpytest("--capture=no")
-        assert "%]" not in output.stdout.str()
+        output.stdout.no_fnmatch_line("*%]*")
 
 
 class TestProgressWithTeardown:
@@ -1696,3 +1832,20 @@ def test_format_session_duration(seconds, expected):
     from _pytest.terminal import format_session_duration
 
     assert format_session_duration(seconds) == expected
+
+
+def test_collecterror(testdir):
+    p1 = testdir.makepyfile("raise SyntaxError()")
+    result = testdir.runpytest("-ra", str(p1))
+    result.stdout.fnmatch_lines(
+        [
+            "collected 0 items / 1 error",
+            "*= ERRORS =*",
+            "*_ ERROR collecting test_collecterror.py _*",
+            "E   SyntaxError: *",
+            "*= short test summary info =*",
+            "ERROR test_collecterror.py",
+            "*! Interrupted: 1 error during collection !*",
+            "*= 1 error in *",
+        ]
+    )
 
@@ -258,7 +258,7 @@ class TestNumberedDir:
         registry = []
         register_cleanup_lock_removal(lock, register=registry.append)
 
-        cleanup_func, = registry
+        (cleanup_func,) = registry
 
         assert lock.is_file()
 
@@ -388,11 +388,21 @@ class TestRmRf:
         assert not on_rm_rf_error(None, str(fn), exc_info, start_path=tmp_path)
 
         # unknown function
-        with pytest.warns(pytest.PytestWarning):
+        with pytest.warns(
+            pytest.PytestWarning,
+            match=r"^\(rm_rf\) unknown function None when removing .*foo.txt:\nNone: ",
+        ):
             exc_info = (None, PermissionError(), None)
             on_rm_rf_error(None, str(fn), exc_info, start_path=tmp_path)
             assert fn.is_file()
 
+        # ignored function
+        with pytest.warns(None) as warninfo:
+            exc_info = (None, PermissionError(), None)
+            on_rm_rf_error(os.open, str(fn), exc_info, start_path=tmp_path)
+            assert fn.is_file()
+        assert not [x.message for x in warninfo]
+
         exc_info = (None, PermissionError(), None)
         on_rm_rf_error(os.unlink, str(fn), exc_info, start_path=tmp_path)
         assert not fn.is_file()
 
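The "ignored function" branch added above uses `pytest.warns(None) as warninfo` to record warnings and then asserts that none were captured. The same record-and-assert pattern can be written with only the standard library (illustrative helper names, not the code under test):

```python
import warnings


def might_warn(trigger):
    """Hypothetical function that warns only when asked to."""
    if trigger:
        warnings.warn("boom", UserWarning)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # make sure nothing is filtered out
    might_warn(False)

assert not [w.message for w in caught]  # nothing was warned
```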
@@ -233,7 +233,7 @@ def test_unittest_skip_issue148(testdir):
 def test_method_and_teardown_failing_reporting(testdir):
     testdir.makepyfile(
         """
-        import unittest, pytest
+        import unittest
         class TC(unittest.TestCase):
             def tearDown(self):
                 assert 0, "down1"
@@ -270,7 +270,7 @@ def test_setup_failure_is_shown(testdir):
     result = testdir.runpytest("-s")
     assert result.ret == 1
     result.stdout.fnmatch_lines(["*setUp*", "*assert 0*down1*", "*1 failed*"])
-    assert "never42" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*never42*")
 
 
 def test_setup_setUpClass(testdir):
@@ -342,7 +342,7 @@ def test_testcase_adderrorandfailure_defers(testdir, type):
         % (type, type)
     )
     result = testdir.runpytest()
-    assert "should not raise" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*should not raise*")
 
 
 @pytest.mark.parametrize("type", ["Error", "Failure"])
@@ -383,7 +383,7 @@ def test_testcase_custom_exception_info(testdir, type):
 
 
 def test_testcase_totally_incompatible_exception_info(testdir):
-    item, = testdir.getitems(
+    (item,) = testdir.getitems(
         """
         from unittest import TestCase
         class MyTestCase(TestCase):
@@ -530,19 +530,31 @@ class TestTrialUnittest:
             # will crash both at test time and at teardown
             """
         )
-        result = testdir.runpytest()
+        # Ignore DeprecationWarning (for `cmp`) from attrs through twisted,
+        # for stable test results.
+        result = testdir.runpytest(
+            "-vv", "-oconsole_output_style=classic", "-W", "ignore::DeprecationWarning"
+        )
         result.stdout.fnmatch_lines(
             [
+                "test_trial_error.py::TC::test_four FAILED",
+                "test_trial_error.py::TC::test_four ERROR",
+                "test_trial_error.py::TC::test_one FAILED",
+                "test_trial_error.py::TC::test_three FAILED",
+                "test_trial_error.py::TC::test_two FAILED",
                 "*ERRORS*",
+                "*_ ERROR at teardown of TC.test_four _*",
                 "*DelayedCalls*",
-                "*test_four*",
+                "*= FAILURES =*",
+                "*_ TC.test_four _*",
                 "*NameError*crash*",
-                "*test_one*",
+                "*_ TC.test_one _*",
                 "*NameError*crash*",
-                "*test_three*",
+                "*_ TC.test_three _*",
                 "*DelayedCalls*",
-                "*test_two*",
-                "*crash*",
+                "*_ TC.test_two _*",
+                "*NameError*crash*",
+                "*= 4 failed, 1 error in *",
             ]
         )
 
@@ -684,7 +696,7 @@ def test_unittest_not_shown_in_traceback(testdir):
         """
     )
     res = testdir.runpytest()
-    assert "failUnlessEqual" not in res.stdout.str()
+    res.stdout.no_fnmatch_line("*failUnlessEqual*")
 
 
 def test_unorderable_types(testdir):
@@ -703,7 +715,7 @@ def test_unorderable_types(testdir):
         """
     )
     result = testdir.runpytest()
-    assert "TypeError" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*TypeError*")
     assert result.ret == ExitCode.NO_TESTS_COLLECTED
 
 
@@ -1020,7 +1032,7 @@ def test_testcase_handles_init_exceptions(testdir):
     )
     result = testdir.runpytest()
     assert "should raise this exception" in result.stdout.str()
-    assert "ERROR at teardown of MyTestCase.test_hello" not in result.stdout.str()
+    result.stdout.no_fnmatch_line("*ERROR at teardown of MyTestCase.test_hello*")
 
 
 def test_error_message_with_parametrized_fixtures(testdir):
 
@@ -142,7 +142,7 @@ def test_unicode(testdir, pyfile_with_warnings):
         [
             "*== %s ==*" % WARNINGS_SUMMARY_HEADER,
             "*test_unicode.py:7: UserWarning: \u6d4b\u8bd5*",
-            "* 1 passed, 1 warnings*",
+            "* 1 passed, 1 warning*",
         ]
     )
 
@@ -201,7 +201,7 @@ def test_filterwarnings_mark(testdir, default_config):
         """
     )
     result = testdir.runpytest("-W always" if default_config == "cmdline" else "")
-    result.stdout.fnmatch_lines(["*= 1 failed, 2 passed, 1 warnings in *"])
+    result.stdout.fnmatch_lines(["*= 1 failed, 2 passed, 1 warning in *"])
 
 
 def test_non_string_warning_argument(testdir):
@@ -216,7 +216,7 @@ def test_non_string_warning_argument(testdir):
         """
     )
    result = testdir.runpytest("-W", "always")
-    result.stdout.fnmatch_lines(["*= 1 passed, 1 warnings in *"])
+    result.stdout.fnmatch_lines(["*= 1 passed, 1 warning in *"])
 
 
 def test_filterwarnings_mark_registration(testdir):
@@ -302,7 +302,7 @@ def test_collection_warnings(testdir):
             "*== %s ==*" % WARNINGS_SUMMARY_HEADER,
             "  *collection_warnings.py:3: UserWarning: collection warning",
             '    warnings.warn(UserWarning("collection warning"))',
-            "* 1 passed, 1 warnings*",
+            "* 1 passed, 1 warning*",
         ]
     )
 
@@ -358,7 +358,7 @@ def test_hide_pytest_internal_warnings(testdir, ignore_pytest_warnings):
         [
             "*== %s ==*" % WARNINGS_SUMMARY_HEADER,
             "*test_hide_pytest_internal_warnings.py:4: PytestWarning: some internal warning",
-            "* 1 passed, 1 warnings *",
+            "* 1 passed, 1 warning *",
         ]
     )
 
@@ -476,7 +476,7 @@ class TestDeprecationWarningsByDefault:
             [
                 "*== %s ==*" % WARNINGS_SUMMARY_HEADER,
                 "*test_hidden_by_mark.py:3: DeprecationWarning: collection",
-                "* 1 passed, 1 warnings*",
+                "* 1 passed, 1 warning*",
             ]
         )
 
@@ -605,6 +605,7 @@ def test_warnings_checker_twice():
         warnings.warn("Message B", UserWarning)
 
 
+@pytest.mark.filterwarnings("ignore::pytest.PytestExperimentalApiWarning")
 @pytest.mark.filterwarnings("always")
 def test_group_warnings_by_message(testdir):
     testdir.copy_example("warnings/test_group_warnings_by_message.py")
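The added `@pytest.mark.filterwarnings("ignore::pytest.PytestExperimentalApiWarning")` mark installs a warning filter for just that test. The underlying mechanism is the standard `warnings` filter chain, where the most recently inserted filter wins; an illustrative stdlib-only equivalent of "record everything except one ignored category":

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")                                 # record everything...
    warnings.filterwarnings("ignore", category=DeprecationWarning)  # ...except this category
    warnings.warn("old api", DeprecationWarning)  # silently dropped
    warnings.warn("heads up", UserWarning)        # recorded

assert [type(w.message) for w in caught] == [UserWarning]
```

`filterwarnings` prepends its filter by default, so the `ignore` takes precedence over the earlier `always`, mirroring how the mark's filter overrides the session-wide configuration.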