Merge pull request #5522 from nicoddemus/merge-master-into-features
Merge master into features
This commit is contained in: commit 4f9bf028f5
@@ -2,3 +2,4 @@
 # * https://help.github.com/en/articles/displaying-a-sponsor-button-in-your-repository
 # * https://tidelift.com/subscription/how-to-connect-tidelift-with-github
 tidelift: pypi/pytest
+open_collective: pytest
.travis.yml (19 changes)
@@ -20,9 +20,6 @@ jobs:
   include:
     # OSX tests - first (in test stage), since they are the slower ones.
     - os: osx
-      # NOTE: (tests with) pexpect appear to be buggy on Travis,
-      # at least with coverage.
-      # Log: https://travis-ci.org/pytest-dev/pytest/jobs/500358864
       osx_image: xcode10.1
       language: generic
       env: TOXENV=py37-xdist PYTEST_COVERAGE=1
@@ -34,7 +31,7 @@ jobs:
       - test $(python -c 'import sys; print("%d%d" % sys.version_info[0:2])') = 37

     # Full run of latest supported version, without xdist.
-    - env: TOXENV=py37 PYTEST_COVERAGE=1
+    - env: TOXENV=py37
       python: '3.7'

     # Coverage tracking is slow with pypy, skip it.
@@ -49,12 +46,12 @@ jobs:
     # - TestArgComplete (linux only)
     # - numpy
     # Empty PYTEST_ADDOPTS to run this non-verbose.
-    - env: TOXENV=py37-lsof-numpy-xdist PYTEST_COVERAGE=1 PYTEST_ADDOPTS=
+    - env: TOXENV=py37-lsof-numpy-twisted-xdist PYTEST_COVERAGE=1 PYTEST_ADDOPTS=

     # Specialized factors for py37.
     # Coverage for:
     # - test_sys_breakpoint_interception (via pexpect).
-    - env: TOXENV=py37-pexpect,py37-twisted PYTEST_COVERAGE=1
+    - env: TOXENV=py37-pexpect PYTEST_COVERAGE=1
     - env: TOXENV=py37-pluggymaster-xdist
     - env: TOXENV=py37-freeze
@@ -105,18 +102,12 @@ before_script:
       export _PYTEST_TOX_EXTRA_DEP=coverage-enable-subprocess
     fi

-script: tox --recreate
+script: tox

 after_success:
   - |
     if [[ "$PYTEST_COVERAGE" = 1 ]]; then
-      set -e
-      # Add last TOXENV to $PATH.
-      PATH="$PWD/.tox/${TOXENV##*,}/bin:$PATH"
-      coverage combine
-      coverage xml
-      coverage report -m
-      bash <(curl -s https://codecov.io/bash) -Z -X gcov -X coveragepy -X search -X xcode -X gcovout -X fix -f coverage.xml -n $TOXENV-$TRAVIS_OS_NAME
+      env CODECOV_NAME="$TOXENV-$TRAVIS_OS_NAME" scripts/report-coverage.sh
     fi

 notifications:
AUTHORS (2 changes)
@@ -135,6 +135,7 @@ Kale Kundert
 Katarzyna Jachim
 Katerina Koukiou
 Kevin Cox
+Kevin J. Foley
 Kodi B. Arfer
 Kostis Anagnostopoulos
 Kristoffer Nordström
@@ -200,6 +201,7 @@ Pulkit Goyal
 Punyashloka Biswal
 Quentin Pradet
 Ralf Schmitt
+Ralph Giles
 Ran Benita
 Raphael Castaneda
 Raphael Pierzina
CHANGELOG.rst (236 changes)
@@ -18,6 +18,242 @@ with advance notice in the **Deprecations** section of releases.

.. towncrier release notes start

pytest 5.0.0 (2019-06-28)
=========================

Important
---------

This release is a Python 3.5+ only release.

For more details, see our `Python 2.7 and 3.4 support plan <https://docs.pytest.org/en/latest/py27-py34-deprecation.html>`__.
Removals
--------

- `#1149 <https://github.com/pytest-dev/pytest/issues/1149>`_: Pytest no longer accepts prefixes of command-line arguments, for example
  typing ``pytest --doctest-mod`` in place of ``--doctest-modules``.
  This was previously allowed where the ``ArgumentParser`` thought it was unambiguous,
  but this could be incorrect due to delayed parsing of options for plugins.
  See for example issues `#1149 <https://github.com/pytest-dev/pytest/issues/1149>`__,
  `#3413 <https://github.com/pytest-dev/pytest/issues/3413>`__, and
  `#4009 <https://github.com/pytest-dev/pytest/issues/4009>`__.


- `#5402 <https://github.com/pytest-dev/pytest/issues/5402>`_: **PytestDeprecationWarning warnings are now errors by default.**

  Following our plan to remove deprecated features with as little disruption as
  possible, all warnings of type ``PytestDeprecationWarning`` now generate errors
  instead of warning messages.

  **The affected features will be effectively removed in pytest 5.1**, so please consult the
  `Deprecations and Removals <https://docs.pytest.org/en/latest/deprecations.html>`__
  section in the docs for directions on how to update existing code.

  In the pytest ``5.0.X`` series, it is possible to change the errors back into warnings as a
  stopgap measure by adding this to your ``pytest.ini`` file:

  .. code-block:: ini

      [pytest]
      filterwarnings =
          ignore::pytest.PytestDeprecationWarning

  But this will stop working when pytest ``5.1`` is released.

  **If you have concerns** about the removal of a specific feature, please add a
  comment to `#5402 <https://github.com/pytest-dev/pytest/issues/5402>`__.


- `#5412 <https://github.com/pytest-dev/pytest/issues/5412>`_: ``ExceptionInfo`` objects (returned by ``pytest.raises``) now have the same ``str`` representation as ``repr``, which
  avoids some confusion when users use ``print(e)`` to inspect the object.

Deprecations
------------

- `#4488 <https://github.com/pytest-dev/pytest/issues/4488>`_: The removal of the ``--result-log`` option and module has been postponed to (tentatively) pytest 6.0, as
  the team has not yet gotten around to implementing a good alternative for it.


- `#466 <https://github.com/pytest-dev/pytest/issues/466>`_: The ``funcargnames`` attribute has been an alias for ``fixturenames`` since
  pytest 2.3, and is now deprecated in code too.

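The entry above describes an alias kept for backward compatibility that now warns on use. A minimal sketch (an illustrative class, not pytest's actual implementation) of how such a deprecated attribute alias is typically wired up:

```python
import warnings

class FixtureRequest:
    """Illustrative stand-in: a deprecated alias that keeps working
    while emitting a DeprecationWarning (hypothetical class)."""

    def __init__(self, fixturenames):
        self.fixturenames = fixturenames

    @property
    def funcargnames(self):
        # Alias kept since pytest 2.3; deprecated as of pytest 5.0.
        warnings.warn(
            "funcargnames is deprecated, use fixturenames",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.fixturenames

req = FixtureRequest(["tmp_path", "monkeypatch"])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    names = req.funcargnames  # warns, but still returns the same list
print(names)
```

Callers that switch to ``fixturenames`` see identical data and no warning; the property exists only so old code keeps running during the deprecation window.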
Features
--------

- `#3457 <https://github.com/pytest-dev/pytest/issues/3457>`_: New `pytest_assertion_pass <https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_assertion_pass>`__
  hook, called with context information when an assertion *passes*.

  This hook is still **experimental** so use it with caution.

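A conftest.py implementation of the new hook might look like the sketch below (argument names follow the published hookspec; note that pytest only calls this hook when the ``enable_assertion_pass_hook`` ini option is set). Here it is invoked by hand, with made-up values, just to show the shape of the data:

```python
# Sketch of a conftest.py hook implementation for pytest >= 5.0.
def pytest_assertion_pass(item, lineno, orig, expl):
    # item: the test item; lineno: line of the assert statement;
    # orig: the original assertion expression as a string;
    # expl: pytest's explanation of the passing comparison.
    print("PASSED {}:{}: {} ({})".format(item, lineno, orig, expl))

# Manual call with hypothetical values, only to illustrate the signature:
result = pytest_assertion_pass(
    item="test_sample", lineno=3, orig="x == 1", expl="1 == 1"
)
```

In a real run pytest would pass the actual ``Item`` object, not a string.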
- `#5440 <https://github.com/pytest-dev/pytest/issues/5440>`_: The `faulthandler <https://docs.python.org/3/library/faulthandler.html>`__ standard library
  module is now enabled by default to help users diagnose crashes in C modules.

  This functionality was provided by integrating the external
  `pytest-faulthandler <https://github.com/pytest-dev/pytest-faulthandler>`__ plugin into the core,
  so users should remove that plugin from their requirements if used.

  For more information see the docs: https://docs.pytest.org/en/latest/usage.html#fault-handler

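The standard-library module this feature builds on can be exercised directly; a quick stdlib-only illustration of what pytest now enables for you during a test run:

```python
import sys
import faulthandler

# Enable dumping of Python tracebacks on hard crashes (SIGSEGV, SIGFPE, ...).
# pytest >= 5.0 does the equivalent of this automatically.
faulthandler.enable()
print(faulthandler.is_enabled())  # True

# Tracebacks of all threads can also be dumped on demand:
faulthandler.dump_traceback(file=sys.stderr)

faulthandler.disable()
```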
- `#5452 <https://github.com/pytest-dev/pytest/issues/5452>`_: When warnings are configured as errors, pytest warnings now appear as originating from ``pytest.`` instead of the internal ``_pytest.warning_types.`` module.


- `#5125 <https://github.com/pytest-dev/pytest/issues/5125>`_: ``Session.exitcode`` values are now coded in ``pytest.ExitCode``, an ``IntEnum``. This makes the exit codes available to consumer code and more explicit than documentation alone. User-defined exit codes are still valid, but should be used with caution.

  The team doesn't expect this change to break test suites or plugins in general, except in esoteric/specific scenarios.

  **pytest-xdist** users should upgrade to ``1.29.0`` or later, as ``pytest-xdist`` required a compatibility fix because of this change.

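An ``IntEnum`` is what keeps this change backward compatible: members compare equal to plain integers, so code checking ``retcode == 0`` keeps working. A stdlib-only sketch mirroring the documented ``pytest.ExitCode`` values (the values below are taken from the pytest 5.0 docs; this is a stand-in, not the real class):

```python
import enum

class ExitCode(enum.IntEnum):
    """Stand-in mirroring pytest.ExitCode (values per the pytest 5.0 docs)."""
    OK = 0
    TESTS_FAILED = 1
    INTERRUPTED = 2
    INTERNAL_ERROR = 3
    USAGE_ERROR = 4
    NO_TESTS_COLLECTED = 5

# IntEnum members are ints, so existing integer comparisons keep working:
print(ExitCode.OK == 0)            # True
print(ExitCode.TESTS_FAILED == 1)  # True
print(ExitCode(5).name)            # NO_TESTS_COLLECTED
```

This is also why user-defined integer exit codes remain valid: anything outside the enum is simply a plain ``int``.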
Bug Fixes
---------

- `#1403 <https://github.com/pytest-dev/pytest/issues/1403>`_: Switch from ``imp`` to ``importlib``.


- `#1671 <https://github.com/pytest-dev/pytest/issues/1671>`_: The name of the ``.pyc`` files cached by the assertion writer now includes the pytest version
  to avoid stale caches.


- `#2761 <https://github.com/pytest-dev/pytest/issues/2761>`_: Honor PEP 235 on case-insensitive file systems.


- `#5078 <https://github.com/pytest-dev/pytest/issues/5078>`_: Test module is no longer double-imported when using ``--pyargs``.


- `#5260 <https://github.com/pytest-dev/pytest/issues/5260>`_: Improved comparison of byte strings.

  When comparing bytes, the assertion message used to show the byte numeric value when showing the differences::

      def test():
      >   assert b'spam' == b'eggs'
      E   AssertionError: assert b'spam' == b'eggs'
      E     At index 0 diff: 115 != 101
      E     Use -v to get the full diff

  It now shows the actual ASCII representation instead, which is often more useful::

      def test():
      >   assert b'spam' == b'eggs'
      E   AssertionError: assert b'spam' == b'eggs'
      E     At index 0 diff: b's' != b'e'
      E     Use -v to get the full diff


- `#5335 <https://github.com/pytest-dev/pytest/issues/5335>`_: Colorize level names when the level in the logging format is formatted using
  ``'%(levelname).Xs'`` (truncated fixed width alignment), where X is an integer.

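The ``'%(levelname).Xs'`` form in the #5335 entry uses printf-style precision to truncate the level name to X characters, producing fixed-width aligned level columns. A stdlib-only illustration:

```python
import logging

# ".4s" truncates the level name to at most 4 characters ("WARNING" -> "WARN"),
# the fixed-width form that pytest's log colorization now recognizes.
formatter = logging.Formatter("%(levelname).4s %(message)s")

record = logging.LogRecord(
    "demo", logging.WARNING, __file__, 1, "disk almost full", None, None
)
print(formatter.format(record))  # WARN disk almost full
```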
- `#5354 <https://github.com/pytest-dev/pytest/issues/5354>`_: Fix ``pytest.mark.parametrize`` when the argvalues is an iterator.


- `#5370 <https://github.com/pytest-dev/pytest/issues/5370>`_: Revert unrolling of ``all()`` to fix ``NameError`` on nested comprehensions.


- `#5371 <https://github.com/pytest-dev/pytest/issues/5371>`_: Revert unrolling of ``all()`` to fix incorrect handling of generators with ``if``.


- `#5372 <https://github.com/pytest-dev/pytest/issues/5372>`_: Revert unrolling of ``all()`` to fix incorrect assertion when using ``all()`` in an expression.


- `#5383 <https://github.com/pytest-dev/pytest/issues/5383>`_: ``-q`` has again an impact on the style of the collected items
  (``--collect-only``) when ``--log-cli-level`` is used.


- `#5389 <https://github.com/pytest-dev/pytest/issues/5389>`_: Fix regressions of `#5063 <https://github.com/pytest-dev/pytest/pull/5063>`__ for ``importlib_metadata.PathDistribution`` which have their ``files`` attribute being ``None``.


- `#5390 <https://github.com/pytest-dev/pytest/issues/5390>`_: Fix regression where the ``obj`` attribute of ``TestCase`` items was no longer bound to methods.


- `#5404 <https://github.com/pytest-dev/pytest/issues/5404>`_: Emit a warning when attempting to unwrap a broken object raises an exception,
  for easier debugging (`#5080 <https://github.com/pytest-dev/pytest/issues/5080>`__).


- `#5432 <https://github.com/pytest-dev/pytest/issues/5432>`_: Prevent "already imported" warnings from assertion rewriter when invoking pytest in-process multiple times.


- `#5433 <https://github.com/pytest-dev/pytest/issues/5433>`_: Fix assertion rewriting in packages (``__init__.py``).


- `#5444 <https://github.com/pytest-dev/pytest/issues/5444>`_: Fix ``--stepwise`` mode when the first file passed on the command-line fails to collect.


- `#5482 <https://github.com/pytest-dev/pytest/issues/5482>`_: Fix bug introduced in 4.6.0 causing collection errors when passing
  more than 2 positional arguments to ``pytest.mark.parametrize``.


- `#5505 <https://github.com/pytest-dev/pytest/issues/5505>`_: Fix crash when discovery fails while using ``-p no:terminal``.

Improved Documentation
----------------------

- `#5315 <https://github.com/pytest-dev/pytest/issues/5315>`_: Expand docs on mocking classes and dictionaries with ``monkeypatch``.


- `#5416 <https://github.com/pytest-dev/pytest/issues/5416>`_: Fix PytestUnknownMarkWarning in run/skip example.


pytest 4.6.4 (2019-06-28)
=========================

Bug Fixes
---------

- `#5404 <https://github.com/pytest-dev/pytest/issues/5404>`_: Emit a warning when attempting to unwrap a broken object raises an exception,
  for easier debugging (`#5080 <https://github.com/pytest-dev/pytest/issues/5080>`__).


- `#5444 <https://github.com/pytest-dev/pytest/issues/5444>`_: Fix ``--stepwise`` mode when the first file passed on the command-line fails to collect.


- `#5482 <https://github.com/pytest-dev/pytest/issues/5482>`_: Fix bug introduced in 4.6.0 causing collection errors when passing
  more than 2 positional arguments to ``pytest.mark.parametrize``.


- `#5505 <https://github.com/pytest-dev/pytest/issues/5505>`_: Fix crash when discovery fails while using ``-p no:terminal``.


pytest 4.6.3 (2019-06-11)
=========================

Bug Fixes
---------

- `#5383 <https://github.com/pytest-dev/pytest/issues/5383>`_: ``-q`` has again an impact on the style of the collected items
  (``--collect-only``) when ``--log-cli-level`` is used.


- `#5389 <https://github.com/pytest-dev/pytest/issues/5389>`_: Fix regressions of `#5063 <https://github.com/pytest-dev/pytest/pull/5063>`__ for ``importlib_metadata.PathDistribution`` which have their ``files`` attribute being ``None``.


- `#5390 <https://github.com/pytest-dev/pytest/issues/5390>`_: Fix regression where the ``obj`` attribute of ``TestCase`` items was no longer bound to methods.


pytest 4.6.2 (2019-06-03)
=========================

Bug Fixes
---------

- `#5370 <https://github.com/pytest-dev/pytest/issues/5370>`_: Revert unrolling of ``all()`` to fix ``NameError`` on nested comprehensions.


- `#5371 <https://github.com/pytest-dev/pytest/issues/5371>`_: Revert unrolling of ``all()`` to fix incorrect handling of generators with ``if``.


- `#5372 <https://github.com/pytest-dev/pytest/issues/5372>`_: Revert unrolling of ``all()`` to fix incorrect assertion when using ``all()`` in an expression.


pytest 4.6.1 (2019-06-02)
=========================
@@ -173,7 +173,7 @@ Short version

 The test environments above are usually enough to cover most cases locally.

-#. Write a ``changelog`` entry: ``changelog/2574.bugfix``, use issue id number
+#. Write a ``changelog`` entry: ``changelog/2574.bugfix.rst``, use issue id number
    and one of ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or
    ``trivial`` for the issue type.
 #. Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please
@@ -264,7 +264,7 @@ Here is a simple overview, with pytest-specific bits:
     $ git commit -a -m "<commit message>"
     $ git push -u

-#. Create a new changelog entry in ``changelog``. The file should be named ``<issueid>.<type>``,
+#. Create a new changelog entry in ``changelog``. The file should be named ``<issueid>.<type>.rst``,
    where *issueid* is the number of the issue related to the change and *type* is one of
    ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or ``trivial``.
@@ -4,8 +4,6 @@ trigger:

 variables:
   PYTEST_ADDOPTS: "--junitxml=build/test-results/$(tox.env).xml -vv"
-  COVERAGE_FILE: "$(Build.Repository.LocalPath)/.coverage"
-  COVERAGE_PROCESS_START: "$(Build.Repository.LocalPath)/.coveragerc"
   PYTEST_COVERAGE: '0'

 jobs:
@@ -30,7 +28,7 @@ jobs:
       tox.env: 'py36-xdist'
     py37:
       python.version: '3.7'
-      tox.env: 'py37'
+      tox.env: 'py37-twisted-numpy'
       # Coverage for:
       # - _py36_windowsconsoleio_workaround (with py36+)
       # - test_request_garbage (no xdist)
@@ -38,9 +36,6 @@ jobs:
     py37-linting/docs/doctesting:
       python.version: '3.7'
       tox.env: 'linting,docs,doctesting'
-    py37-twisted/numpy:
-      python.version: '3.7'
-      tox.env: 'py37-twisted,py37-numpy'
     py37-pluggymaster-xdist:
       python.version: '3.7'
       tox.env: 'py37-pluggymaster-xdist'
@@ -55,8 +50,13 @@ jobs:
   - script: python -m pip install --upgrade pip && python -m pip install tox
     displayName: 'Install tox'

-  - script: |
-      call scripts/setup-coverage-vars.bat || goto :eof
+  - bash: |
+      if [[ "$PYTEST_COVERAGE" == "1" ]]; then
+        export _PYTEST_TOX_COVERAGE_RUN="coverage run -m"
+        export _PYTEST_TOX_EXTRA_DEP=coverage-enable-subprocess
+        export COVERAGE_FILE="$PWD/.coverage"
+        export COVERAGE_PROCESS_START="$PWD/.coveragerc"
+      fi
       python -m tox -e $(tox.env)
     displayName: 'Run tests'
@@ -66,9 +66,12 @@ jobs:
       testRunTitle: '$(tox.env)'
     condition: succeededOrFailed()

-  - script: call scripts\upload-coverage.bat
-    displayName: 'Report and upload coverage'
-    condition: eq(variables['PYTEST_COVERAGE'], '1')
+  - bash: |
+      if [[ "$PYTEST_COVERAGE" == 1 ]]; then
+        scripts/report-coverage.sh
+      fi
+    env:
+      CODECOV_NAME: $(tox.env)
+      CODECOV_TOKEN: $(CODECOV_TOKEN)
+      PYTEST_CODECOV_NAME: $(tox.env)
+    displayName: Report and upload coverage
+    condition: eq(variables['PYTEST_COVERAGE'], '1')
@@ -1,7 +0,0 @@
-Pytest no longer accepts prefixes of command-line arguments, for example
-typing ``pytest --doctest-mod`` inplace of ``--doctest-modules``.
-This was previously allowed where the ``ArgumentParser`` thought it was unambiguous,
-but this could be incorrect due to delayed parsing of options for plugins.
-See for example issues `#1149 <https://github.com/pytest-dev/pytest/issues/1149>`__,
-`#3413 <https://github.com/pytest-dev/pytest/issues/3413>`__, and
-`#4009 <https://github.com/pytest-dev/pytest/issues/4009>`__.

@@ -1,2 +0,0 @@
-Colorize level names when the level in the logging format is formatted using
-'%(levelname).Xs' (truncated fixed width alignment), where X is an integer.

@@ -1 +0,0 @@
-Fix ``pytest.mark.parametrize`` when the argvalues is an iterator.

@@ -1 +0,0 @@
-Fix assertion rewriting of ``all()`` calls to deal with non-generators.
@@ -6,6 +6,10 @@ Release announcements
    :maxdepth: 2


+   release-5.0.0
+   release-4.6.4
+   release-4.6.3
+   release-4.6.2
    release-4.6.1
    release-4.6.0
    release-4.5.0
@@ -20,7 +20,7 @@ Thanks to all who contributed to this release, among them:
 * Jeffrey Rackauckas
 * Jose Carlos Menezes
 * Ronny Pfannschmidt
-* Zac-HD
+* Zac Hatfield-Dodds
 * iwanb
@@ -21,7 +21,6 @@ Thanks to all who contributed to this release, among them:
 * Kyle Altendorf
 * Stephan Hoyer
 * Zac Hatfield-Dodds
-* Zac-HD
 * songbowen
@@ -28,7 +28,6 @@ Thanks to all who contributed to this release, among them:
 * Pulkit Goyal
 * Samuel Searles-Bryant
 * Zac Hatfield-Dodds
-* Zac-HD

 Happy testing,
@@ -0,0 +1,18 @@
pytest-4.6.2
=======================================

pytest 4.6.2 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile


Happy testing,
The pytest Development Team
@@ -0,0 +1,21 @@
pytest-4.6.3
=======================================

pytest 4.6.3 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Dirk Thomas


Happy testing,
The pytest Development Team
@@ -0,0 +1,22 @@
pytest-4.6.4
=======================================

pytest 4.6.4 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Thomas Grainger
* Zac Hatfield-Dodds


Happy testing,
The pytest Development Team
@@ -0,0 +1,46 @@
pytest-5.0.0
=======================================

The pytest team is proud to announce the 5.0.0 release!

pytest is a mature Python testing tool with more than 2000 tests
against itself, passing on many different interpreters and platforms.

This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:

    https://docs.pytest.org/en/latest/changelog.html

For complete documentation, please visit:

    https://docs.pytest.org/en/latest/

As usual, you can upgrade from PyPI via:

    pip install -U pytest

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Dirk Thomas
* Evan Kepner
* Florian Bruhin
* Hugo
* Kevin J. Foley
* Pulkit Goyal
* Ralph Giles
* Ronny Pfannschmidt
* Thomas Grainger
* Thomas Hisch
* Tim Gates
* Victor Maryama
* Yuri Apollov
* Zac Hatfield-Dodds
* curiousjazz77
* patriksevallius


Happy testing,
The Pytest Development Team
@@ -113,8 +113,8 @@ If you then run it with ``--lf``:
     test_50.py:6: Failed
     ================= 2 failed, 48 deselected in 0.12 seconds ==================

-You have run only the two failing test from the last run, while 48 tests have
-not been run ("deselected").
+You have run only the two failing tests from the last run, while the 48 passing
+tests have not been run ("deselected").

 Now, if you run with the ``--ff`` option, all tests will be run but the first
 previous failures will be executed first (as can be seen from the series
@@ -224,7 +224,7 @@ If you run this command for the first time, you can see the print statement:
     running expensive computation...
     1 failed in 0.12 seconds

-If you run it a second time the value will be retrieved from
+If you run it a second time, the value will be retrieved from
 the cache and nothing will be printed:

 .. code-block:: pytest
@@ -19,6 +19,21 @@ Below is a complete list of all pytest features which are considered deprecated.
 :class:`_pytest.warning_types.PytestWarning` or subclasses, which can be filtered using
 :ref:`standard warning filters <warnings>`.

+Removal of ``funcargnames`` alias for ``fixturenames``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. deprecated:: 5.0
+
+The ``FixtureRequest``, ``Metafunc``, and ``Function`` classes track the names of
+their associated fixtures, with the aptly-named ``fixturenames`` attribute.
+
+Prior to pytest 2.3, this attribute was named ``funcargnames``, and we have kept
+that as an alias since. It is finally due for removal, as it is often confusing
+in places where we or plugin authors must distinguish between fixture names and
+names supplied by non-fixture things such as ``pytest.mark.parametrize``.
+
+
 .. _`raises message deprecated`:

 ``"message"`` parameter of ``pytest.raises``
@@ -101,20 +116,21 @@ Becomes:



 Result log (``--result-log``)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 .. deprecated:: 4.0

-The ``--resultlog`` command line option has been deprecated: it is little used
-and there are more modern and better alternatives, for example `pytest-tap <https://tappy.readthedocs.io/en/latest/>`_.
+The ``--result-log`` option produces a stream of test reports which can be
+analysed at runtime. It uses a custom format which requires users to implement their own
+parser, but the team believes using a line-based format that can be parsed using standard
+tools would provide a suitable and better alternative.

-This feature will be effectively removed in pytest 4.0 as the team intends to include a better alternative in the core.
+The current plan is to provide an alternative in the pytest 5.0 series and remove the ``--result-log``
+option in pytest 6.0 after the new implementation proves satisfactory to all users and is deemed
+stable.

-If you have any concerns, please don't hesitate to `open an issue <https://github.com/pytest-dev/pytest/issues>`__.
+The actual alternative is still being discussed in issue `#4488 <https://github.com/pytest-dev/pytest/issues/4488>`__.

 Removed Features
 ----------------
@@ -190,12 +190,13 @@ class TestRaises:

 # thanks to Matthew Scott for this test
 def test_dynamic_compile_shows_nicely():
-    import imp
+    import importlib.util
     import sys

     src = "def foo():\n assert 1 == 0\n"
     name = "abc-123"
-    module = imp.new_module(name)
+    spec = importlib.util.spec_from_loader(name, loader=None)
+    module = importlib.util.module_from_spec(spec)
     code = _pytest._code.compile(src, name, "exec")
     exec(code, module.__dict__)
     sys.modules[name] = module
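The replacement pattern in this hunk can be exercised on its own; a self-contained sketch of creating a module dynamically without the removed ``imp`` module (hypothetical module name and body):

```python
import importlib.util
import sys

name = "dynamically_built"

# spec_from_loader(..., loader=None) describes a module with no file behind it,
# and module_from_spec creates the empty module object (replaces imp.new_module).
spec = importlib.util.spec_from_loader(name, loader=None)
module = importlib.util.module_from_spec(spec)

# Populate it by executing source in its namespace, then register it.
exec(compile("def foo():\n    return 41 + 1\n", name, "exec"), module.__dict__)
sys.modules[name] = module

import dynamically_built  # resolved via sys.modules
print(dynamically_built.foo())  # 42
```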
@@ -2,7 +2,7 @@
 module containing a parametrized tests testing cross-python
 serialization via the pickle module.
 """
-import distutils.spawn
+import shutil
 import subprocess
 import textwrap
@@ -24,7 +24,7 @@ def python2(request, python1):

 class Python:
     def __init__(self, version, picklefile):
-        self.pythonpath = distutils.spawn.find_executable(version)
+        self.pythonpath = shutil.which(version)
         if not self.pythonpath:
             pytest.skip("{!r} not found".format(version))
         self.picklefile = picklefile
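``shutil.which`` is the stdlib replacement for ``distutils.spawn.find_executable``: it returns the full path of an executable found on ``PATH``, or ``None``, which is exactly what the fixture above uses to decide whether to skip. A quick illustration (the missing command name is made up):

```python
import shutil
import sys

# A name not on PATH yields None -- the skip condition in the fixture.
missing = shutil.which("no-such-interpreter-xyz")
print(missing)  # None

# An absolute path to an existing executable is returned as-is:
print(shutil.which(sys.executable) is not None)  # True
```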
@@ -434,9 +434,9 @@ Running it results in some skips if we don't have all the python interpreters installed
 .. code-block:: pytest

     . $ pytest -rs -q multipython.py
-    ...sss...sssssssss...sss... [100%]
+    ssssssssssss......sss...... [100%]
     ========================= short test summary info ==========================
-    SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:31: 'python3.4' not found
+    SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
     12 passed, 15 skipped in 0.12 seconds

 Indirect parametrization of optional implementations/imports
@@ -26,7 +26,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     > assert param1 * 2 < param2
     E assert (3 * 2) < 6

-    failure_demo.py:21: AssertionError
+    failure_demo.py:20: AssertionError
     _________________________ TestFailing.test_simple __________________________

     self = <failure_demo.TestFailing object at 0xdeadbeef>

@@ -43,7 +43,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E + where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
     E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()

-    failure_demo.py:32: AssertionError
+    failure_demo.py:31: AssertionError
     ____________________ TestFailing.test_simple_multiline _____________________

     self = <failure_demo.TestFailing object at 0xdeadbeef>

@@ -51,7 +51,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     def test_simple_multiline(self):
     > otherfunc_multi(42, 6 * 9)

-    failure_demo.py:35:
+    failure_demo.py:34:
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

     a = 42, b = 54

@@ -60,7 +60,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     > assert a == b
     E assert 42 == 54

-    failure_demo.py:16: AssertionError
+    failure_demo.py:15: AssertionError
     ___________________________ TestFailing.test_not ___________________________

     self = <failure_demo.TestFailing object at 0xdeadbeef>

@@ -73,7 +73,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E assert not 42
     E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()

-    failure_demo.py:41: AssertionError
+    failure_demo.py:40: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_text _________________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -84,7 +84,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E - spam
     E + eggs

-    failure_demo.py:46: AssertionError
+    failure_demo.py:45: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_similar_text _____________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -97,7 +97,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E + foo 2 bar
     E ?     ^

-    failure_demo.py:49: AssertionError
+    failure_demo.py:48: AssertionError
     ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -110,7 +110,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E + eggs
     E   bar

-    failure_demo.py:52: AssertionError
+    failure_demo.py:51: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_long_text _______________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -127,7 +127,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E + 1111111111b222222222
     E ?           ^

-    failure_demo.py:57: AssertionError
+    failure_demo.py:56: AssertionError
     _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -147,7 +147,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E
     E ...Full output truncated (7 lines hidden), use '-vv' to show

-    failure_demo.py:62: AssertionError
+    failure_demo.py:61: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_list _________________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -158,7 +158,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E At index 2 diff: 2 != 3
     E Use -v to get the full diff

-    failure_demo.py:65: AssertionError
+    failure_demo.py:64: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_list_long _______________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -171,7 +171,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E At index 100 diff: 1 != 2
     E Use -v to get the full diff

-    failure_demo.py:70: AssertionError
+    failure_demo.py:69: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_dict _________________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -189,7 +189,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E
     E ...Full output truncated (2 lines hidden), use '-vv' to show

-    failure_demo.py:73: AssertionError
+    failure_demo.py:72: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_set __________________

     self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

@@ -207,7 +207,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
     E
     E ...Full output truncated (2 lines hidden), use '-vv' to show

-    failure_demo.py:76: AssertionError
+    failure_demo.py:75: AssertionError
|
||||
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -218,7 +218,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E Right contains one more item: 3
|
||||
E Use -v to get the full diff
|
||||
|
||||
failure_demo.py:79: AssertionError
|
||||
failure_demo.py:78: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_in_list _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -227,7 +227,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> assert 1 in [0, 2, 3, 4, 5]
|
||||
E assert 1 in [0, 2, 3, 4, 5]
|
||||
|
||||
failure_demo.py:82: AssertionError
|
||||
failure_demo.py:81: AssertionError
|
||||
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -246,7 +246,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E
|
||||
E ...Full output truncated (2 lines hidden), use '-vv' to show
|
||||
|
||||
failure_demo.py:86: AssertionError
|
||||
failure_demo.py:85: AssertionError
|
||||
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -259,7 +259,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E single foo line
|
||||
E ? +++
|
||||
|
||||
failure_demo.py:90: AssertionError
|
||||
failure_demo.py:89: AssertionError
|
||||
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -272,7 +272,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
|
||||
E ? +++
|
||||
|
||||
failure_demo.py:94: AssertionError
|
||||
failure_demo.py:93: AssertionError
|
||||
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -285,7 +285,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
|
||||
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
failure_demo.py:98: AssertionError
|
||||
failure_demo.py:97: AssertionError
|
||||
______________ TestSpecialisedExplanations.test_eq_dataclass _______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -294,7 +294,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
from dataclasses import dataclass
|
||||
|
||||
@dataclass
|
||||
class Foo(object):
|
||||
class Foo:
|
||||
a: int
|
||||
b: str
|
||||
|
||||
|
@ -306,7 +306,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E Differing attributes:
|
||||
E b: 'b' != 'c'
|
||||
|
||||
failure_demo.py:110: AssertionError
|
||||
failure_demo.py:109: AssertionError
|
||||
________________ TestSpecialisedExplanations.test_eq_attrs _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
|
||||
|
@ -315,7 +315,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
import attr
|
||||
|
||||
@attr.s
|
||||
class Foo(object):
|
||||
class Foo:
|
||||
a = attr.ib()
|
||||
b = attr.ib()
|
||||
|
||||
|
@ -327,11 +327,11 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E Differing attributes:
|
||||
E b: 'b' != 'c'
|
||||
|
||||
failure_demo.py:122: AssertionError
|
||||
failure_demo.py:121: AssertionError
|
||||
______________________________ test_attribute ______________________________
|
||||
|
||||
def test_attribute():
|
||||
class Foo(object):
|
||||
class Foo:
|
||||
b = 1
|
||||
|
||||
i = Foo()
|
||||
|
@ -339,11 +339,11 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
|
||||
|
||||
failure_demo.py:130: AssertionError
|
||||
failure_demo.py:129: AssertionError
|
||||
_________________________ test_attribute_instance __________________________
|
||||
|
||||
def test_attribute_instance():
|
||||
class Foo(object):
|
||||
class Foo:
|
||||
b = 1
|
||||
|
||||
> assert Foo().b == 2
|
||||
|
@ -351,11 +351,11 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
|
||||
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
|
||||
|
||||
failure_demo.py:137: AssertionError
|
||||
failure_demo.py:136: AssertionError
|
||||
__________________________ test_attribute_failure __________________________
|
||||
|
||||
def test_attribute_failure():
|
||||
class Foo(object):
|
||||
class Foo:
|
||||
def _get_b(self):
|
||||
raise Exception("Failed to get attrib")
|
||||
|
||||
|
@ -364,7 +364,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
i = Foo()
|
||||
> assert i.b == 2
|
||||
|
||||
failure_demo.py:148:
|
||||
failure_demo.py:147:
|
||||
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
|
||||
|
||||
self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>
|
||||
|
@ -373,14 +373,14 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> raise Exception("Failed to get attrib")
|
||||
E Exception: Failed to get attrib
|
||||
|
||||
failure_demo.py:143: Exception
|
||||
failure_demo.py:142: Exception
|
||||
_________________________ test_attribute_multiple __________________________
|
||||
|
||||
def test_attribute_multiple():
|
||||
class Foo(object):
|
||||
class Foo:
|
||||
b = 1
|
||||
|
||||
class Bar(object):
|
||||
class Bar:
|
||||
b = 2
|
||||
|
||||
> assert Foo().b == Bar().b
|
||||
|
@ -390,7 +390,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
|
||||
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
|
||||
|
||||
failure_demo.py:158: AssertionError
|
||||
failure_demo.py:157: AssertionError
|
||||
__________________________ TestRaises.test_raises __________________________
|
||||
|
||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||
|
@ -400,7 +400,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> raises(TypeError, int, s)
|
||||
E ValueError: invalid literal for int() with base 10: 'qwe'
|
||||
|
||||
failure_demo.py:168: ValueError
|
||||
failure_demo.py:167: ValueError
|
||||
______________________ TestRaises.test_raises_doesnt _______________________
|
||||
|
||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||
|
@ -409,7 +409,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> raises(IOError, int, "3")
|
||||
E Failed: DID NOT RAISE <class 'OSError'>
|
||||
|
||||
failure_demo.py:171: Failed
|
||||
failure_demo.py:170: Failed
|
||||
__________________________ TestRaises.test_raise ___________________________
|
||||
|
||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||
|
@ -418,7 +418,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> raise ValueError("demo error")
|
||||
E ValueError: demo error
|
||||
|
||||
failure_demo.py:174: ValueError
|
||||
failure_demo.py:173: ValueError
|
||||
________________________ TestRaises.test_tupleerror ________________________
|
||||
|
||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||
|
@ -427,7 +427,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> a, b = [1] # NOQA
|
||||
E ValueError: not enough values to unpack (expected 2, got 1)
|
||||
|
||||
failure_demo.py:177: ValueError
|
||||
failure_demo.py:176: ValueError
|
||||
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
|
||||
|
||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||
|
@ -438,7 +438,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> a, b = items.pop()
|
||||
E TypeError: 'int' object is not iterable
|
||||
|
||||
failure_demo.py:182: TypeError
|
||||
failure_demo.py:181: TypeError
|
||||
--------------------------- Captured stdout call ---------------------------
|
||||
items is [1, 2, 3]
|
||||
________________________ TestRaises.test_some_error ________________________
|
||||
|
@ -449,16 +449,17 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> if namenotexi: # NOQA
|
||||
E NameError: name 'namenotexi' is not defined
|
||||
|
||||
failure_demo.py:185: NameError
|
||||
failure_demo.py:184: NameError
|
||||
____________________ test_dynamic_compile_shows_nicely _____________________
|
||||
|
||||
def test_dynamic_compile_shows_nicely():
|
||||
import imp
|
||||
import importlib.util
|
||||
import sys
|
||||
|
||||
src = "def foo():\n assert 1 == 0\n"
|
||||
name = "abc-123"
|
||||
module = imp.new_module(name)
|
||||
spec = importlib.util.spec_from_loader(name, loader=None)
|
||||
module = importlib.util.module_from_spec(spec)
|
||||
code = _pytest._code.compile(src, name, "exec")
|
||||
exec(code, module.__dict__)
|
||||
sys.modules[name] = module
|
||||
|
@ -487,7 +488,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
|
||||
failure_demo.py:214:
|
||||
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
|
||||
failure_demo.py:12: in somefunc
|
||||
failure_demo.py:11: in somefunc
|
||||
otherfunc(x, y)
|
||||
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
|
||||
|
||||
|
@ -497,7 +498,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
> assert a == b
|
||||
E assert 44 == 43
|
||||
|
||||
failure_demo.py:8: AssertionError
|
||||
failure_demo.py:7: AssertionError
|
||||
___________________ TestMoreErrors.test_z1_unpack_error ____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
|
||||
|
@ -598,7 +599,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||
|
||||
def test_single_line(self):
|
||||
class A(object):
|
||||
class A:
|
||||
a = 1
|
||||
|
||||
b = 2
|
||||
|
@ -613,7 +614,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||
|
||||
def test_multiline(self):
|
||||
class A(object):
|
||||
class A:
|
||||
a = 1
|
||||
|
||||
b = 2
|
||||
|
@ -632,7 +633,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
|
|||
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
|
||||
|
||||
def test_custom_repr(self):
|
||||
class JSON(object):
|
||||
class JSON:
|
||||
a = 1
|
||||
|
||||
def __repr__(self):
|
||||
|
|
|
@@ -157,6 +157,10 @@ line option to control skipping of ``pytest.mark.slow`` marked tests:
     )
 
 
+def pytest_configure(config):
+    config.addinivalue_line("markers", "slow: mark test as slow to run")
+
+
 def pytest_collection_modifyitems(config, items):
     if config.getoption("--runslow"):
         # --runslow given in cli: do not skip slow tests
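The hook added in the hunk above decides, per collected item, whether a skip marker should be attached. A simplified, dependency-free model of that decision (the item representation here is hypothetical; only the ``--runslow`` / ``slow``-keyword logic mirrors the docs example):

```python
def names_to_skip(runslow, items):
    """Return the names of 'slow'-marked tests that would be skipped.

    `items` is a list of (name, keywords) pairs standing in for pytest's
    collected items; the real hook calls item.add_marker(skip_slow) instead.
    """
    if runslow:
        # --runslow given on the command line: do not skip slow tests
        return []
    return [name for name, keywords in items if "slow" in keywords]


items = [("test_fast", []), ("test_big_io", ["slow"])]
assert names_to_skip(False, items) == ["test_big_io"]
assert names_to_skip(True, items) == []
```

In the real hook the same condition drives `item.add_marker(pytest.mark.skip(reason="need --runslow option to run"))`.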
@@ -441,7 +445,7 @@ Now we can profile which test functions execute the slowest:
 
 ========================= slowest 3 test durations =========================
 0.30s call test_some_are_slow.py::test_funcslow2
-0.21s call test_some_are_slow.py::test_funcslow1
+0.20s call test_some_are_slow.py::test_funcslow1
 0.10s call test_some_are_slow.py::test_funcfast
 ========================= 3 passed in 0.12 seconds =========================
 

@@ -8,46 +8,215 @@ Sometimes tests need to invoke functionality which depends
 on global settings or which invokes code which cannot be easily
 tested such as network access. The ``monkeypatch`` fixture
 helps you to safely set/delete an attribute, dictionary item or
-environment variable or to modify ``sys.path`` for importing.
+environment variable, or to modify ``sys.path`` for importing.
 
+The ``monkeypatch`` fixture provides these helper methods for safely patching and mocking
+functionality in tests:
+
+.. code-block:: python
+
+    monkeypatch.setattr(obj, name, value, raising=True)
+    monkeypatch.delattr(obj, name, raising=True)
+    monkeypatch.setitem(mapping, name, value)
+    monkeypatch.delitem(obj, name, raising=True)
+    monkeypatch.setenv(name, value, prepend=False)
+    monkeypatch.delenv(name, raising=True)
+    monkeypatch.syspath_prepend(path)
+    monkeypatch.chdir(path)
+
 All modifications will be undone after the requesting
 test function or fixture has finished. The ``raising``
 parameter determines if a ``KeyError`` or ``AttributeError``
 will be raised if the target of the set/deletion operation does not exist.
 
+Consider the following scenarios:
+
+1. Modifying the behavior of a function or the property of a class for a test e.g.
+there is an API call or database connection you will not make for a test but you know
+what the expected output should be. Use :py:meth:`monkeypatch.setattr` to patch the
+function or property with your desired testing behavior. This can include your own functions.
+Use :py:meth:`monkeypatch.delattr` to remove the function or property for the test.
+
+2. Modifying the values of dictionaries e.g. you have a global configuration that
+you want to modify for certain test cases. Use :py:meth:`monkeypatch.setitem` to patch the
+dictionary for the test. :py:meth:`monkeypatch.delitem` can be used to remove items.
+
+3. Modifying environment variables for a test e.g. to test program behavior if an
+environment variable is missing, or to set multiple values to a known variable.
+:py:meth:`monkeypatch.setenv` and :py:meth:`monkeypatch.delenv` can be used for
+these patches.
+
+4. Use :py:meth:`monkeypatch.syspath_prepend` to modify the system ``$PATH`` safely, and
+:py:meth:`monkeypatch.chdir` to change the context of the current working directory
+during a test.
+
 See the `monkeypatch blog post`_ for some introduction material
 and a discussion of its motivation.
 
 .. _`monkeypatch blog post`: http://tetamap.wordpress.com/2009/03/03/monkeypatching-in-unit-tests-done-right/
 
 
 Simple example: monkeypatching functions
 ----------------------------------------
 
-If you want to pretend that ``os.expanduser`` returns a certain
-directory, you can use the :py:meth:`monkeypatch.setattr` method to
-patch this function before calling into a function which uses it::
+Consider a scenario where you are working with user directories. In the context of
+testing, you do not want your test to depend on the running user. ``monkeypatch``
+can be used to patch functions dependent on the user to always return a
+specific value.
 
-    # content of test_module.py
-    import os.path
-    def getssh(): # pseudo application code
-        return os.path.join(os.path.expanduser("~admin"), '.ssh')
+In this example, :py:meth:`monkeypatch.setattr` is used to patch ``Path.home``
+so that the known testing path ``Path("/abc")`` is always used when the test is run.
+This removes any dependency on the running user for testing purposes.
+:py:meth:`monkeypatch.setattr` must be called before the function which will use
+the patched function is called.
+After the test function finishes the ``Path.home`` modification will be undone.
 
-    def test_mytest(monkeypatch):
-        def mockreturn(path):
-            return '/abc'
-        monkeypatch.setattr(os.path, 'expanduser', mockreturn)
+.. code-block:: python
+
+    # contents of test_module.py with source code and the test
+    from pathlib import Path
+
+
+    def getssh():
+        """Simple function to return expanded homedir ssh path."""
+        return Path.home() / ".ssh"
+
+
+    def test_getssh(monkeypatch):
+        # mocked return function to replace Path.home
+        # always return '/abc'
+        def mockreturn():
+            return Path("/abc")
+
+        # Application of the monkeypatch to replace Path.home
+        # with the behavior of mockreturn defined above.
+        monkeypatch.setattr(Path, "home", mockreturn)
+
+        # Calling getssh() will use mockreturn in place of Path.home
+        # for this test with the monkeypatch.
         x = getssh()
-        assert x == '/abc/.ssh'
+        assert x == Path("/abc/.ssh")
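The patched example above can be exercised without pytest at all; a toy stand-in for the fixture makes the set-then-undo mechanics visible (``MiniMonkeyPatch`` is illustrative only, not pytest's real implementation):

```python
from pathlib import Path


def getssh():
    """Function under test, as in the docs example above."""
    return Path.home() / ".ssh"


class MiniMonkeyPatch:
    """Toy stand-in for the monkeypatch fixture: record, set, undo."""

    def __init__(self):
        self._saved = []

    def setattr(self, target, name, value):
        # remember the original so it can be restored later
        self._saved.append((target, name, getattr(target, name)))
        setattr(target, name, value)

    def undo(self):
        for target, name, original in reversed(self._saved):
            setattr(target, name, original)


mp = MiniMonkeyPatch()
mp.setattr(Path, "home", lambda: Path("/abc"))
assert getssh() == Path("/abc/.ssh")
mp.undo()  # the real fixture performs this automatically at test teardown
assert getssh() == Path.home() / ".ssh"
```

The real fixture calls the equivalent of `undo()` for you when the test finishes, which is why the patch never leaks into other tests.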
+Monkeypatching returned objects: building mock classes
+------------------------------------------------------
+
+:py:meth:`monkeypatch.setattr` can be used in conjunction with classes to mock returned
+objects from functions instead of values.
+Imagine a simple function to take an API url and return the json response.
+
+.. code-block:: python
+
+    # contents of app.py, a simple API retrieval example
+    import requests
+
+
+    def get_json(url):
+        """Takes a URL, and returns the JSON."""
+        r = requests.get(url)
+        return r.json()
+
+We need to mock ``r``, the returned response object for testing purposes.
+The mock of ``r`` needs a ``.json()`` method which returns a dictionary.
+This can be done in our test file by defining a class to represent ``r``.
+
+.. code-block:: python
+
+    # contents of test_app.py, a simple test for our API retrieval
+    # import requests for the purposes of monkeypatching
+    import requests
+
+    # our app.py that includes the get_json() function
+    # this is the previous code block example
+    import app
+
+    # custom class to be the mock return value
+    # will override the requests.Response returned from requests.get
+    class MockResponse:
+
+        # mock json() method always returns a specific testing dictionary
+        @staticmethod
+        def json():
+            return {"mock_key": "mock_response"}
+
+
+    def test_get_json(monkeypatch):
+
+        # Any arguments may be passed and mock_get() will always return our
+        # mocked object, which only has the .json() method.
+        def mock_get(*args, **kwargs):
+            return MockResponse()
+
+        # apply the monkeypatch for requests.get to mock_get
+        monkeypatch.setattr(requests, "get", mock_get)
+
+        # app.get_json, which contains requests.get, uses the monkeypatch
+        result = app.get_json("https://fakeurl")
+        assert result["mock_key"] == "mock_response"
+
+
+``monkeypatch`` applies the mock for ``requests.get`` with our ``mock_get`` function.
+The ``mock_get`` function returns an instance of the ``MockResponse`` class, which
+has a ``json()`` method defined to return a known testing dictionary and does not
+require any outside API connection.
+
+You can build the ``MockResponse`` class with the appropriate degree of complexity for
+the scenario you are testing. For instance, it could include an ``ok`` property that
+always returns ``True``, or return different values from the ``json()`` mocked method
+based on input strings.
+
+This mock can be shared across tests using a ``fixture``:
+
+.. code-block:: python
+
+    # contents of test_app.py, a simple test for our API retrieval
+    import pytest
+    import requests
+
+    # app.py that includes the get_json() function
+    import app
+
+    # custom class to be the mock return value of requests.get()
+    class MockResponse:
+        @staticmethod
+        def json():
+            return {"mock_key": "mock_response"}
+
+
+    # monkeypatched requests.get moved to a fixture
+    @pytest.fixture
+    def mock_response(monkeypatch):
+        """Requests.get() mocked to return {'mock_key':'mock_response'}."""
+
+        def mock_get(*args, **kwargs):
+            return MockResponse()
+
+        monkeypatch.setattr(requests, "get", mock_get)
+
+
+    # notice our test uses the custom fixture instead of monkeypatch directly
+    def test_get_json(mock_response):
+        result = app.get_json("https://fakeurl")
+        assert result["mock_key"] == "mock_response"
+
+
+Furthermore, if the mock was designed to be applied to all tests, the ``fixture`` could
+be moved to a ``conftest.py`` file and use the ``autouse=True`` option.
-Here our test function monkeypatches ``os.path.expanduser`` and
-then calls into a function that calls it. After the test function
-finishes the ``os.path.expanduser`` modification will be undone.
 
 Global patch example: preventing "requests" from remote operations
 ------------------------------------------------------------------
 
 If you want to prevent the "requests" library from performing http
-requests in all your tests, you can do::
+requests in all your tests, you can do:
 
-    # content of conftest.py
+.. code-block:: python
+
+    # contents of conftest.py
     import pytest
 
 
     @pytest.fixture(autouse=True)
     def no_requests(monkeypatch):
         """Remove requests.sessions.Session.request for all tests."""
         monkeypatch.delattr("requests.sessions.Session.request")
 
 This autouse fixture will be executed for each test function and it
@@ -85,7 +254,7 @@ Monkeypatching environment variables
 ------------------------------------
 
 If you are working with environment variables you often need to safely change the values
-or delete them from the system for testing purposes. ``Monkeypatch`` provides a mechanism
+or delete them from the system for testing purposes. ``monkeypatch`` provides a mechanism
 to do this using the ``setenv`` and ``delenv`` methods. Our example code to test:
 
 .. code-block:: python
@@ -131,6 +300,7 @@ This behavior can be moved into ``fixture`` structures and shared across tests:
 
 .. code-block:: python
 
+    # contents of our test file e.g. test_code.py
    import pytest
 
 
@@ -144,7 +314,7 @@ This behavior can be moved into ``fixture`` structures and shared across tests:
        monkeypatch.delenv("USER", raising=False)
 
 
-    # Notice the tests reference the fixtures for mocks
+    # notice the tests reference the fixtures for mocks
    def test_upper_to_lower(mock_env_user):
        assert get_os_user_lower() == "testinguser"
 
@@ -154,6 +324,112 @@ This behavior can be moved into ``fixture`` structures and shared across tests:
        _ = get_os_user_lower()
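The ``setenv``/``delenv`` behaviour in the hunks above can be sketched with ``os.environ`` directly; the shape of ``get_os_user_lower`` is assumed from the docs example, and the save/restore bookkeeping spelled out here is what ``monkeypatch`` performs automatically:

```python
import os


def get_os_user_lower():
    """Assumed shape of the function under test in the docs example."""
    username = os.environ.get("USER")
    if username is None:
        raise OSError("USER environment variable is not set.")
    return username.lower()


saved = os.environ.get("USER")      # what monkeypatch records for undo
os.environ["USER"] = "TestingUser"  # monkeypatch.setenv("USER", "TestingUser")
assert get_os_user_lower() == "testinguser"

del os.environ["USER"]              # monkeypatch.delenv("USER")
raised = False
try:
    get_os_user_lower()
except OSError:
    raised = True
assert raised

# manual restore; the real fixture does this automatically after each test
if saved is not None:
    os.environ["USER"] = saved
```

Because `os.environ` is process-global, the automatic restore is what makes `monkeypatch.setenv` safe to use across many tests.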
+Monkeypatching dictionaries
+---------------------------
+
+:py:meth:`monkeypatch.setitem` can be used to safely set the values of dictionaries
+to specific values during tests. Take this simplified connection string example:
+
+.. code-block:: python
+
+    # contents of app.py to generate a simple connection string
+    DEFAULT_CONFIG = {"user": "user1", "database": "db1"}
+
+
+    def create_connection_string(config=None):
+        """Creates a connection string from input or defaults."""
+        config = config or DEFAULT_CONFIG
+        return f"User Id={config['user']}; Location={config['database']};"
+
+For testing purposes we can patch the ``DEFAULT_CONFIG`` dictionary to specific values.
+
+.. code-block:: python
+
+    # contents of test_app.py
+    # app.py with the connection string function (prior code block)
+    import app
+
+
+    def test_connection(monkeypatch):
+
+        # Patch the values of DEFAULT_CONFIG to specific
+        # testing values only for this test.
+        monkeypatch.setitem(app.DEFAULT_CONFIG, "user", "test_user")
+        monkeypatch.setitem(app.DEFAULT_CONFIG, "database", "test_db")
+
+        # expected result based on the mocks
+        expected = "User Id=test_user; Location=test_db;"
+
+        # the test uses the monkeypatched dictionary settings
+        result = app.create_connection_string()
+        assert result == expected
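The setitem example above is self-contained enough to run without pytest; here the record-and-restore step that ``monkeypatch.setitem`` does per key is written out by hand, as a sketch of the same test:

```python
DEFAULT_CONFIG = {"user": "user1", "database": "db1"}


def create_connection_string(config=None):
    """Creates a connection string from input or defaults (docs example)."""
    config = config or DEFAULT_CONFIG
    return f"User Id={config['user']}; Location={config['database']};"


# monkeypatch.setitem records each old value and restores it at teardown;
# here that bookkeeping is a full copy of the dictionary.
saved = dict(DEFAULT_CONFIG)
DEFAULT_CONFIG["user"] = "test_user"
DEFAULT_CONFIG["database"] = "test_db"
assert create_connection_string() == "User Id=test_user; Location=test_db;"

# the undo step
DEFAULT_CONFIG.clear()
DEFAULT_CONFIG.update(saved)
assert create_connection_string() == "User Id=user1; Location=db1;"
```

Note that `monkeypatch.setitem` restores only the keys it touched, which is cheaper and safer than copying the whole mapping when other tests mutate it concurrently within a session.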
+You can use the :py:meth:`monkeypatch.delitem` to remove values.
+
+.. code-block:: python
+
+    # contents of test_app.py
+    import pytest
+
+    # app.py with the connection string function
+    import app
+
+
+    def test_missing_user(monkeypatch):
+
+        # patch the DEFAULT_CONFIG to be missing the 'user' key
+        monkeypatch.delitem(app.DEFAULT_CONFIG, "user", raising=False)
+
+        # Key error expected because a config is not passed, and the
+        # default is now missing the 'user' entry.
+        with pytest.raises(KeyError):
+            _ = app.create_connection_string()
+
+
+The modularity of fixtures gives you the flexibility to define
+separate fixtures for each potential mock and reference them in the needed tests.
+
+.. code-block:: python
+
+    # contents of test_app.py
+    import pytest
+
+    # app.py with the connection string function
+    import app
+
+    # all of the mocks are moved into separated fixtures
+    @pytest.fixture
+    def mock_test_user(monkeypatch):
+        """Set the DEFAULT_CONFIG user to test_user."""
+        monkeypatch.setitem(app.DEFAULT_CONFIG, "user", "test_user")
+
+
+    @pytest.fixture
+    def mock_test_database(monkeypatch):
+        """Set the DEFAULT_CONFIG database to test_db."""
+        monkeypatch.setitem(app.DEFAULT_CONFIG, "database", "test_db")
+
+
+    @pytest.fixture
+    def mock_missing_default_user(monkeypatch):
+        """Remove the user key from DEFAULT_CONFIG"""
+        monkeypatch.delitem(app.DEFAULT_CONFIG, "user", raising=False)
+
+
+    # tests reference only the fixture mocks that are needed
+    def test_connection(mock_test_user, mock_test_database):
+
+        expected = "User Id=test_user; Location=test_db;"
+
+        result = app.create_connection_string()
+        assert result == expected
+
+
+    def test_missing_user(mock_missing_default_user):
+
+        with pytest.raises(KeyError):
+            _ = app.create_connection_string()
+
+
 .. currentmodule:: _pytest.monkeypatch
 

@@ -46,7 +46,7 @@ Unsupported idioms / known issues
   <https://github.com/pytest-dev/pytest/issues/377/>`_.
 
 - nose imports test modules with the same import path (e.g.
-  ``tests.test_mod``) but different file system paths
+  ``tests.test_mode``) but different file system paths
   (e.g. ``tests/test_mode.py`` and ``other/tests/test_mode.py``)
   by extending sys.path/import semantics. pytest does not do that
   but there is discussion in `#268 <https://github.com/pytest-dev/pytest/issues/268>`_ for adding some support. Note that

@@ -665,15 +665,14 @@ Session related reporting hooks:
 .. autofunction:: pytest_fixture_post_finalizer
 .. autofunction:: pytest_warning_captured
 
-And here is the central hook for reporting about
-test execution:
+Central hook for reporting about test execution:
 
 .. autofunction:: pytest_runtest_logreport
 
-You can also use this hook to customize assertion representation for some
-types:
+Assertion related hooks:
 
 .. autofunction:: pytest_assertrepr_compare
+.. autofunction:: pytest_assertion_pass
 
 
 Debugging/Interaction hooks
@@ -727,6 +726,14 @@ ExceptionInfo
 .. autoclass:: _pytest._code.ExceptionInfo
     :members:
 
 
+pytest.ExitCode
+~~~~~~~~~~~~~~~
+
+.. autoclass:: _pytest.main.ExitCode
+    :members:
+
+
 FixtureDef
 ~~~~~~~~~~
 
@@ -951,6 +958,14 @@ PYTEST_CURRENT_TEST
 This is not meant to be set by users, but is set by pytest internally with the name of the current test so other
 processes can inspect it, see :ref:`pytest current test env` for more information.
 
+Exceptions
+----------
+
+UsageError
+~~~~~~~~~~
+
+.. autoclass:: _pytest.config.UsageError()
+
+
 .. _`ini options ref`:
 
@@ -1068,6 +1083,23 @@ passed multiple times. The expected format is ``name=value``. For example::
     for more details.
 
 
+.. confval:: faulthandler_timeout
+
+   Dumps the tracebacks of all threads if a test takes longer than ``X`` seconds to run (including
+   fixture setup and teardown). Implemented using the `faulthandler.dump_traceback_later`_ function,
+   so all caveats there apply.
+
+   .. code-block:: ini
+
+        # content of pytest.ini
+        [pytest]
+        faulthandler_timeout=5
+
+   For more information please refer to :ref:`faulthandler`.
+
+   .. _`faulthandler.dump_traceback_later`: https://docs.python.org/3/library/faulthandler.html#faulthandler.dump_traceback_later
+
|
||||
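The mechanism behind this option can be tried directly with the standard library. A small sketch (the 0.2 s timeout, 0.5 s sleep, and file name are arbitrary choices for the demo):

```python
import faulthandler
import time

# Arm a watchdog: if we are still busy after 0.2s, dump every thread's
# traceback to the given file -- this is what pytest does per-test when
# faulthandler_timeout is set.
with open("traceback_dump.txt", "w") as f:
    faulthandler.dump_traceback_later(0.2, file=f)
    time.sleep(0.5)  # simulated slow test that exceeds the timeout
    faulthandler.cancel_dump_traceback_later()

with open("traceback_dump.txt") as f:
    dump = f.read()
print("Timeout" in dump)  # True -- the dump starts with "Timeout (...)"
```
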
.. confval:: filterwarnings

@@ -22,7 +22,7 @@ created in the `base temporary directory`_.

    # content of test_tmp_path.py
    import os

    CONTENT = u"content"
    CONTENT = "content"


    def test_create_file(tmp_path):
@@ -33,6 +33,8 @@ Running ``pytest`` can result in six different exit codes:

:Exit code 4: pytest command line usage error
:Exit code 5: No tests were collected

They are represented by the :class:`_pytest.main.ExitCode` enum.

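A sketch of what such an enum looks like; the member names below mirror the documented exit-code list, but the canonical definition is the one in ``_pytest.main.ExitCode``:

```python
import enum


class ExitCode(enum.IntEnum):
    """Sketch mirroring the documented pytest exit codes."""
    OK = 0
    TESTS_FAILED = 1
    INTERRUPTED = 2
    INTERNAL_ERROR = 3
    USAGE_ERROR = 4
    NO_TESTS_COLLECTED = 5


# An IntEnum compares equal to the raw integer returned by pytest.main():
print(ExitCode(5) is ExitCode.NO_TESTS_COLLECTED, ExitCode.USAGE_ERROR == 4)
# True True
```
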
Getting help on version, option names, environment variables
--------------------------------------------------------------

@@ -408,7 +410,6 @@ Pytest supports the use of ``breakpoint()`` with the following behaviours:

Profiling test execution duration
-------------------------------------

.. versionadded: 2.2

To get a list of the slowest 10 test durations:

@@ -418,6 +419,38 @@ To get a list of the slowest 10 test durations:

By default, pytest will not show test durations that are too small (<0.01s) unless ``-vv`` is passed on the command-line.

.. _faulthandler:

Fault Handler
-------------

.. versionadded:: 5.0

The `faulthandler <https://docs.python.org/3/library/faulthandler.html>`__ standard module
can be used to dump Python tracebacks on a segfault or after a timeout.

The module is automatically enabled for pytest runs, unless ``-p no:faulthandler`` is given
on the command-line.

Also the :confval:`faulthandler_timeout=X<faulthandler_timeout>` configuration option can be used
to dump the traceback of all threads if a test takes longer than ``X``
seconds to finish (not available on Windows).

.. note::

    This functionality has been integrated from the external
    `pytest-faulthandler <https://github.com/pytest-dev/pytest-faulthandler>`__ plugin, with two
    small differences:

    * To disable it, use ``-p no:faulthandler`` instead of ``--no-faulthandler``: the former
      can be used with any plugin, so it saves one option.

    * The ``--faulthandler-timeout`` command-line option has become the
      :confval:`faulthandler_timeout` configuration option. It can still be configured from
      the command-line using ``-o faulthandler_timeout=X``.

Creating JUnitXML format files
----------------------------------------------------

@@ -74,7 +74,7 @@ def report(issues):

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser("process bitbucket issues", allow_abbrev=False)
    parser = argparse.ArgumentParser("process bitbucket issues")
    parser.add_argument(
        "--refresh", action="store_true", help="invalidate cache, refresh issues"
    )
@@ -105,7 +105,7 @@ def changelog(version, write_out=False):


def main():
    init(autoreset=True)
    parser = argparse.ArgumentParser(allow_abbrev=False)
    parser = argparse.ArgumentParser()
    parser.add_argument("version", help="Release version")
    options = parser.parse_args()
    pre_release(options.version)
@@ -0,0 +1,16 @@
#!/usr/bin/env bash

set -e
set -x

if [ -z "$TOXENV" ]; then
    python -m pip install coverage
else
    # Add last TOXENV to $PATH.
    PATH="$PWD/.tox/${TOXENV##*,}/bin:$PATH"
fi

python -m coverage combine
python -m coverage xml
python -m coverage report -m
bash <(curl -s https://codecov.io/bash) -Z -X gcov -X coveragepy -X search -X xcode -X gcovout -X fix -f coverage.xml
@@ -1,7 +0,0 @@
if "%PYTEST_COVERAGE%" == "1" (
    set "_PYTEST_TOX_COVERAGE_RUN=coverage run -m"
    set "_PYTEST_TOX_EXTRA_DEP=coverage-enable-subprocess"
    echo Coverage vars configured, PYTEST_COVERAGE=%PYTEST_COVERAGE%
) else (
    echo Skipping coverage vars setup, PYTEST_COVERAGE=%PYTEST_COVERAGE%
)
@@ -1,16 +0,0 @@
REM script called by Azure to combine and upload coverage information to codecov
if "%PYTEST_COVERAGE%" == "1" (
    echo Prepare to upload coverage information
    if defined CODECOV_TOKEN (
        echo CODECOV_TOKEN defined
    ) else (
        echo CODECOV_TOKEN NOT defined
    )
    python -m pip install codecov
    python -m coverage combine
    python -m coverage xml
    python -m coverage report -m
    scripts\retry python -m codecov --required -X gcov pycov search -f coverage.xml --name %PYTEST_CODECOV_NAME%
) else (
    echo Skipping coverage upload, PYTEST_COVERAGE=%PYTEST_COVERAGE%
)
@@ -534,13 +534,6 @@ class ExceptionInfo:

        )
        return fmt.repr_excinfo(self)

    def __str__(self):
        if self._excinfo is None:
            return repr(self)
        entry = self.traceback[-1]
        loc = ReprFileLocation(entry.path, entry.lineno + 1, self.exconly())
        return str(loc)

    def match(self, regexp):
        """
        Check whether the regular expression 'regexp' is found in the string
@@ -917,7 +910,6 @@ class ReprEntry(TerminalRepr):

            for line in self.lines:
                red = line.startswith("E ")
                tw.line(line, bold=True, red=red)
            # tw.line("")
            return
        if self.reprfuncargs:
            self.reprfuncargs.toterminal(tw)
@@ -23,6 +23,13 @@ def pytest_addoption(parser):

        test modules on import to provide assert
        expression information.""",
    )
    parser.addini(
        "enable_assertion_pass_hook",
        type="bool",
        default=False,
        help="Enables the pytest_assertion_pass hook. "
        "Make sure to delete any previously generated pyc cache files.",
    )

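The ini option registered above would be enabled like so (a sketch following the usual pytest ini conventions; the file name is one of several pytest accepts):

```ini
# content of pytest.ini
[pytest]
enable_assertion_pass_hook = true
```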

def register_assert_rewrite(*names):
@@ -92,7 +99,7 @@ def pytest_collection(session):


def pytest_runtest_setup(item):
    """Setup the pytest_assertrepr_compare hook
    """Setup the pytest_assertrepr_compare and pytest_assertion_pass hooks

    The newinterpret and rewrite modules will use util._reprcompare if
    it exists to use custom reporting via the
@@ -129,9 +136,19 @@ def pytest_runtest_setup(item):

    util._reprcompare = callbinrepr

    if item.ihook.pytest_assertion_pass.get_hookimpls():

        def call_assertion_pass_hook(lineno, expl, orig):
            item.ihook.pytest_assertion_pass(
                item=item, lineno=lineno, orig=orig, expl=expl
            )

        util._assertion_pass = call_assertion_pass_hook


def pytest_runtest_teardown(item):
    util._reprcompare = None
    util._assertion_pass = None


def pytest_sessionfinish(session):
@@ -1,21 +1,22 @@
"""Rewrite assertion AST to produce nice error messages"""
import ast
import errno
import imp
import functools
import importlib.machinery
import importlib.util
import io
import itertools
import marshal
import os
import re
import string
import struct
import sys
import tokenize
import types
from importlib.util import spec_from_file_location

import atomicwrites
import py

from _pytest._io.saferepr import saferepr
from _pytest._version import version
from _pytest.assertion import util
from _pytest.assertion.util import (  # noqa: F401
    format_explanation as _format_explanation,
@@ -24,23 +25,13 @@ from _pytest.pathlib import fnmatch_ex

from _pytest.pathlib import PurePath

# pytest caches rewritten pycs in __pycache__.
if hasattr(imp, "get_tag"):
    PYTEST_TAG = imp.get_tag() + "-PYTEST"
else:
    if hasattr(sys, "pypy_version_info"):
        impl = "pypy"
    else:
        impl = "cpython"
    ver = sys.version_info
    PYTEST_TAG = "{}-{}{}-PYTEST".format(impl, ver[0], ver[1])
    del ver, impl

PYTEST_TAG = "{}-pytest-{}".format(sys.implementation.cache_tag, version)
PYC_EXT = ".py" + (__debug__ and "c" or "o")
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT


class AssertionRewritingHook:
    """PEP302 Import hook which rewrites asserts."""
    """PEP302/PEP451 import hook which rewrites asserts."""

    def __init__(self, config):
        self.config = config
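The replacement tag above builds on ``sys.implementation.cache_tag``, which already encodes the interpreter and version. A quick sketch with a hypothetical pytest version string:

```python
import sys

# cache_tag is e.g. "cpython-37"; appending a pytest-specific marker and the
# pytest version keeps rewritten pycs from ever colliding with CPython's own.
version = "5.0.1"  # hypothetical pytest version for the demo
PYTEST_TAG = "{}-pytest-{}".format(sys.implementation.cache_tag, version)
print(PYTEST_TAG)
```
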
@@ -49,7 +40,6 @@ class AssertionRewritingHook:

        except ValueError:
            self.fnpats = ["test_*.py", "*_test.py"]
        self.session = None
        self.modules = {}
        self._rewritten_names = set()
        self._must_rewrite = set()
        # flag to guard against trying to rewrite a pyc file while we are already writing another pyc file,
@@ -63,55 +53,53 @@ class AssertionRewritingHook:

        self.session = session
        self._session_paths_checked = False

    def _imp_find_module(self, name, path=None):
        """Indirection so we can mock calls to find_module originated from the hook during testing"""
        return imp.find_module(name, path)
    # Indirection so we can mock calls to find_spec originated from the hook during testing
    _find_spec = importlib.machinery.PathFinder.find_spec

    def find_module(self, name, path=None):
    def find_spec(self, name, path=None, target=None):
        if self._writing_pyc:
            return None
        state = self.config._assertstate
        if self._early_rewrite_bailout(name, state):
            return None
        state.trace("find_module called for: %s" % name)
        names = name.rsplit(".", 1)
        lastname = names[-1]
        pth = None
        if path is not None:
            # Starting with Python 3.3, path is a _NamespacePath(), which
            # causes problems if not converted to list.
            path = list(path)
            if len(path) == 1:
                pth = path[0]
        if pth is None:
            try:
                fd, fn, desc = self._imp_find_module(lastname, path)
            except ImportError:
                return None
            if fd is not None:
                fd.close()
            tp = desc[2]
            if tp == imp.PY_COMPILED:
                if hasattr(imp, "source_from_cache"):
                    try:
                        fn = imp.source_from_cache(fn)
                    except ValueError:
                        # Python 3 doesn't like orphaned but still-importable
                        # .pyc files.
                        fn = fn[:-1]
                else:
                    fn = fn[:-1]
            elif tp != imp.PY_SOURCE:
                # Don't know what this is.
                return None
        else:
            fn = os.path.join(pth, name.rpartition(".")[2] + ".py")

        fn_pypath = py.path.local(fn)
        if not self._should_rewrite(name, fn_pypath, state):
        spec = self._find_spec(name, path)
        if (
            # the import machinery could not find a file to import
            spec is None
            # this is a namespace package (without `__init__.py`)
            # there's nothing to rewrite there
            # python3.5 - python3.6: `namespace`
            # python3.7+: `None`
            or spec.origin in {None, "namespace"}
            # we can only rewrite source files
            or not isinstance(spec.loader, importlib.machinery.SourceFileLoader)
            # if the file doesn't exist, we can't rewrite it
            or not os.path.exists(spec.origin)
        ):
            return None
        else:
            fn = spec.origin

        if not self._should_rewrite(name, fn, state):
            return None

        self._rewritten_names.add(name)
        return importlib.util.spec_from_file_location(
            name,
            fn,
            loader=self,
            submodule_search_locations=spec.submodule_search_locations,
        )

    def create_module(self, spec):
        return None  # default behaviour is fine

    def exec_module(self, module):
        fn = module.__spec__.origin
        state = self.config._assertstate

        self._rewritten_names.add(module.__name__)

        # The requested module looks like a test file, so rewrite it. This is
        # the most magical part of the process: load the source, rewrite the
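The ``find_spec``/``create_module``/``exec_module`` trio introduced in the hunk above follows the PEP 451 loader protocol. A stripped-down sketch of the same pattern (no assertion rewriting, just tracing; the class name is invented) might look like:

```python
import importlib.machinery
import importlib.util
import sys


class TracingHook:
    """Minimal PEP 451 finder/loader: delegate the search to PathFinder,
    then claim the load so our own exec_module runs (here it only traces)."""

    # Bound classmethod of PathFinder, the same indirection trick as above.
    _find_spec = importlib.machinery.PathFinder.find_spec

    def find_spec(self, name, path=None, target=None):
        spec = self._find_spec(name, path)
        if spec is None or not isinstance(
            spec.loader, importlib.machinery.SourceFileLoader
        ):
            return None  # nothing found, or not a plain source file
        return importlib.util.spec_from_file_location(
            name,
            spec.origin,
            loader=self,
            submodule_search_locations=spec.submodule_search_locations,
        )

    def create_module(self, spec):
        return None  # default module creation is fine

    def exec_module(self, module):
        print("loading", module.__name__)
        with open(module.__spec__.origin, "rb") as f:
            code = compile(f.read(), module.__spec__.origin, "exec")
        exec(code, module.__dict__)


sys.meta_path.insert(0, TracingHook())
```

Once installed on ``sys.meta_path``, every plain source import passes through the hook, which is exactly the entry point the rewriter uses to substitute its rewritten code object.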
@@ -122,7 +110,7 @@ class AssertionRewritingHook:

        # cached pyc is always a complete, valid pyc. Operations on it must be
        # atomic. POSIX's atomic rename comes in handy.
        write = not sys.dont_write_bytecode
        cache_dir = os.path.join(fn_pypath.dirname, "__pycache__")
        cache_dir = os.path.join(os.path.dirname(fn), "__pycache__")
        if write:
            try:
                os.mkdir(cache_dir)
@@ -133,26 +121,23 @@ class AssertionRewritingHook:

                    # common case) or it's blocked by a non-dir node. In the
                    # latter case, we'll ignore it in _write_pyc.
                    pass
                elif e in [errno.ENOENT, errno.ENOTDIR]:
                elif e in {errno.ENOENT, errno.ENOTDIR}:
                    # One of the path components was not a directory, likely
                    # because we're in a zip file.
                    write = False
                elif e in [errno.EACCES, errno.EROFS, errno.EPERM]:
                    state.trace("read only directory: %r" % fn_pypath.dirname)
                elif e in {errno.EACCES, errno.EROFS, errno.EPERM}:
                    state.trace("read only directory: %r" % os.path.dirname(fn))
                    write = False
                else:
                    raise
        cache_name = fn_pypath.basename[:-3] + PYC_TAIL
        cache_name = os.path.basename(fn)[:-3] + PYC_TAIL
        pyc = os.path.join(cache_dir, cache_name)
        # Notice that even if we're in a read-only directory, I'm going
        # to check for a cached pyc. This may not be optimal...
        co = _read_pyc(fn_pypath, pyc, state.trace)
        co = _read_pyc(fn, pyc, state.trace)
        if co is None:
            state.trace("rewriting {!r}".format(fn))
            source_stat, co = _rewrite_test(self.config, fn_pypath)
            if co is None:
                # Probably a SyntaxError in the test.
                return None
            source_stat, co = _rewrite_test(fn, self.config)
            if write:
                self._writing_pyc = True
                try:
@@ -161,13 +146,11 @@ class AssertionRewritingHook:

                    self._writing_pyc = False
        else:
            state.trace("found cached rewritten pyc for {!r}".format(fn))
        self.modules[name] = co, pyc
        return self
        exec(co, module.__dict__)

    def _early_rewrite_bailout(self, name, state):
        """
        This is a fast way to get out of rewriting modules. Profiling has
        shown that the call to imp.find_module (inside of the find_module
        """This is a fast way to get out of rewriting modules. Profiling has
        shown that the call to PathFinder.find_spec (inside of the find_spec
        from this class) is a major slowdown, so, this method tries to
        filter what we're sure won't be rewritten before getting to it.
        """
@@ -202,10 +185,9 @@ class AssertionRewritingHook:

            state.trace("early skip of rewriting module: {}".format(name))
            return True

    def _should_rewrite(self, name, fn_pypath, state):
    def _should_rewrite(self, name, fn, state):
        # always rewrite conftest files
        fn = str(fn_pypath)
        if fn_pypath.basename == "conftest.py":
        if os.path.basename(fn) == "conftest.py":
            state.trace("rewriting conftest file: {!r}".format(fn))
            return True

@@ -218,8 +200,9 @@ class AssertionRewritingHook:

        # modules not passed explicitly on the command line are only
        # rewritten if they match the naming convention for test files
        fn_path = PurePath(fn)
        for pat in self.fnpats:
            if fn_pypath.fnmatch(pat):
            if fnmatch_ex(pat, fn_path):
                state.trace("matched test file {!r}".format(fn))
                return True

@@ -250,9 +233,10 @@ class AssertionRewritingHook:

            set(names).intersection(sys.modules).difference(self._rewritten_names)
        )
        for name in already_imported:
            mod = sys.modules[name]
            if not AssertionRewriter.is_rewrite_disabled(
                sys.modules[name].__doc__ or ""
            ):
                mod.__doc__ or ""
            ) and not isinstance(mod.__loader__, type(self)):
                self._warn_already_imported(name)
        self._must_rewrite.update(names)
        self._marked_for_rewrite_cache.clear()
@@ -269,45 +253,8 @@ class AssertionRewritingHook:

                stacklevel=5,
            )

    def load_module(self, name):
        co, pyc = self.modules.pop(name)
        if name in sys.modules:
            # If there is an existing module object named 'fullname' in
            # sys.modules, the loader must use that existing module. (Otherwise,
            # the reload() builtin will not work correctly.)
            mod = sys.modules[name]
        else:
            # I wish I could just call imp.load_compiled here, but __file__ has to
            # be set properly. In Python 3.2+, this all would be handled correctly
            # by load_compiled.
            mod = sys.modules[name] = imp.new_module(name)
        try:
            mod.__file__ = co.co_filename
            # Normally, this attribute is 3.2+.
            mod.__cached__ = pyc
            mod.__loader__ = self
            # Normally, this attribute is 3.4+
            mod.__spec__ = spec_from_file_location(name, co.co_filename, loader=self)
            exec(co, mod.__dict__)
        except:  # noqa
            if name in sys.modules:
                del sys.modules[name]
            raise
        return sys.modules[name]

    def is_package(self, name):
        try:
            fd, fn, desc = self._imp_find_module(name)
        except ImportError:
            return False
        if fd is not None:
            fd.close()
        tp = desc[2]
        return tp == imp.PKG_DIRECTORY

    def get_data(self, pathname):
        """Optional PEP302 get_data API.
        """
        """Optional PEP302 get_data API."""
        with open(pathname, "rb") as f:
            return f.read()

@@ -315,15 +262,13 @@ class AssertionRewritingHook:

def _write_pyc(state, co, source_stat, pyc):
    # Technically, we don't have to have the same pyc format as
    # (C)Python, since these "pycs" should never be seen by builtin
    # import. However, there's little reason deviate, and I hope
    # sometime to be able to use imp.load_compiled to load them. (See
    # the comment in load_module above.)
    # import. However, there's little reason deviate.
    try:
        with atomicwrites.atomic_write(pyc, mode="wb", overwrite=True) as fp:
            fp.write(imp.get_magic())
            fp.write(importlib.util.MAGIC_NUMBER)
            # as of now, bytecode header expects 32-bit numbers for size and mtime (#4903)
            mtime = int(source_stat.mtime) & 0xFFFFFFFF
            size = source_stat.size & 0xFFFFFFFF
            mtime = int(source_stat.st_mtime) & 0xFFFFFFFF
            size = source_stat.st_size & 0xFFFFFFFF
            # "<LL" stands for 2 unsigned longs, little-ending
            fp.write(struct.pack("<LL", mtime, size))
            fp.write(marshal.dumps(co))
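The private pyc header written above can be reproduced with the standard library alone. A sketch of the write-then-validate round trip (the mtime and size values are arbitrary, the size deliberately overflows 32 bits to show the masking):

```python
import importlib.util
import struct

# CPython's magic number followed by mtime and size as 32-bit
# little-endian unsigned longs ("<LL"), as _write_pyc does.
mtime = 1600000000
size = (5 << 32) + 123          # larger than 32 bits on purpose
header = importlib.util.MAGIC_NUMBER + struct.pack(
    "<LL", mtime & 0xFFFFFFFF, size & 0xFFFFFFFF
)

# _read_pyc-style validation: magic must match, then unpack the two fields.
assert header[:4] == importlib.util.MAGIC_NUMBER
print(struct.unpack("<LL", header[4:]))  # (1600000000, 123): size truncated
```
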
@@ -336,35 +281,14 @@ def _write_pyc(state, co, source_stat, pyc):

    return True


RN = "\r\n".encode()
N = "\n".encode()

cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")
BOM_UTF8 = "\xef\xbb\xbf"


def _rewrite_test(config, fn):
    """Try to read and rewrite *fn* and return the code object."""
    state = config._assertstate
    try:
        stat = fn.stat()
        source = fn.read("rb")
    except EnvironmentError:
        return None, None
    try:
        tree = ast.parse(source, filename=fn.strpath)
    except SyntaxError:
        # Let this pop up again in the real import.
        state.trace("failed to parse: {!r}".format(fn))
        return None, None
    rewrite_asserts(tree, fn, config)
    try:
        co = compile(tree, fn.strpath, "exec", dont_inherit=True)
    except SyntaxError:
        # It's possible that this error is from some bug in the
        # assertion rewriting, but I don't know of a fast way to tell.
        state.trace("failed to compile: {!r}".format(fn))
        return None, None
def _rewrite_test(fn, config):
    """read and rewrite *fn* and return the code object."""
    stat = os.stat(fn)
    with open(fn, "rb") as f:
        source = f.read()
    tree = ast.parse(source, filename=fn)
    rewrite_asserts(tree, source, fn, config)
    co = compile(tree, fn, "exec", dont_inherit=True)
    return stat, co

@@ -379,8 +303,9 @@ def _read_pyc(source, pyc, trace=lambda x: None):

        return None
    with fp:
        try:
            mtime = int(source.mtime())
            size = source.size()
            stat_result = os.stat(source)
            mtime = int(stat_result.st_mtime)
            size = stat_result.st_size
            data = fp.read(12)
        except EnvironmentError as e:
            trace("_read_pyc({}): EnvironmentError {}".format(source, e))
@@ -388,7 +313,7 @@ def _read_pyc(source, pyc, trace=lambda x: None):

        # Check for invalid or out of date pyc file.
        if (
            len(data) != 12
            or data[:4] != imp.get_magic()
            or data[:4] != importlib.util.MAGIC_NUMBER
            or struct.unpack("<LL", data[4:]) != (mtime & 0xFFFFFFFF, size & 0xFFFFFFFF)
        ):
            trace("_read_pyc(%s): invalid or out of date pyc" % source)
@@ -404,9 +329,9 @@ def _read_pyc(source, pyc, trace=lambda x: None):

    return co


def rewrite_asserts(mod, module_path=None, config=None):
def rewrite_asserts(mod, source, module_path=None, config=None):
    """Rewrite the assert statements in mod."""
    AssertionRewriter(module_path, config).run(mod)
    AssertionRewriter(module_path, config, source).run(mod)


def _saferepr(obj):
@@ -420,15 +345,7 @@ def _saferepr(obj):

    JSON reprs.

    """
    r = saferepr(obj)
    # only occurs in python2.x, repr must return text in python3+
    if isinstance(r, bytes):
        # Represent unprintable bytes as `\x##`
        r = "".join(
            "\\x{:x}".format(ord(c)) if c not in string.printable else c.decode()
            for c in r
        )
    return r.replace("\n", "\\n")
    return saferepr(obj).replace("\n", "\\n")


def _format_assertmsg(obj):
@@ -448,9 +365,6 @@ def _format_assertmsg(obj):

        obj = saferepr(obj)
        replaces.append(("\\n", "\n~"))

    if isinstance(obj, bytes):
        replaces = [(r1.encode(), r2.encode()) for r1, r2 in replaces]

    for r1, r2 in replaces:
        obj = obj.replace(r1, r2)

@@ -490,9 +404,20 @@ def _call_reprcompare(ops, results, expls, each_obj):

    return expl


unary_map = {ast.Not: "not %s", ast.Invert: "~%s", ast.USub: "-%s", ast.UAdd: "+%s"}
def _call_assertion_pass(lineno, orig, expl):
    if util._assertion_pass is not None:
        util._assertion_pass(lineno=lineno, orig=orig, expl=expl)

binop_map = {

def _check_if_assertion_pass_impl():
    """Checks if any plugins implement the pytest_assertion_pass hook
    in order not to generate explanation unecessarily (might be expensive)"""
    return True if util._assertion_pass else False


UNARY_MAP = {ast.Not: "not %s", ast.Invert: "~%s", ast.USub: "-%s", ast.UAdd: "+%s"}

BINOP_MAP = {
    ast.BitOr: "|",
    ast.BitXor: "^",
    ast.BitAnd: "&",
@@ -515,20 +440,8 @@ binop_map = {

    ast.IsNot: "is not",
    ast.In: "in",
    ast.NotIn: "not in",
    ast.MatMult: "@",
}
# Python 3.5+ compatibility
try:
    binop_map[ast.MatMult] = "@"
except AttributeError:
    pass

# Python 3.4+ compatibility
if hasattr(ast, "NameConstant"):
    _NameConstant = ast.NameConstant
else:

    def _NameConstant(c):
        return ast.Name(str(c), ast.Load())


def set_location(node, lineno, col_offset):
@@ -546,6 +459,59 @@ def set_location(node, lineno, col_offset):

    return node


def _get_assertion_exprs(src: bytes):  # -> Dict[int, str]
    """Returns a mapping from {lineno: "assertion test expression"}"""
    ret = {}

    depth = 0
    lines = []
    assert_lineno = None
    seen_lines = set()

    def _write_and_reset() -> None:
        nonlocal depth, lines, assert_lineno, seen_lines
        ret[assert_lineno] = "".join(lines).rstrip().rstrip("\\")
        depth = 0
        lines = []
        assert_lineno = None
        seen_lines = set()

    tokens = tokenize.tokenize(io.BytesIO(src).readline)
    for tp, src, (lineno, offset), _, line in tokens:
        if tp == tokenize.NAME and src == "assert":
            assert_lineno = lineno
        elif assert_lineno is not None:
            # keep track of depth for the assert-message `,` lookup
            if tp == tokenize.OP and src in "([{":
                depth += 1
            elif tp == tokenize.OP and src in ")]}":
                depth -= 1

            if not lines:
                lines.append(line[offset:])
                seen_lines.add(lineno)
            # a non-nested comma separates the expression from the message
            elif depth == 0 and tp == tokenize.OP and src == ",":
                # one line assert with message
                if lineno in seen_lines and len(lines) == 1:
                    offset_in_trimmed = offset + len(lines[-1]) - len(line)
                    lines[-1] = lines[-1][:offset_in_trimmed]
                # multi-line assert with message
                elif lineno in seen_lines:
                    lines[-1] = lines[-1][:offset]
                # multi line assert with escaped newline before message
                else:
                    lines.append(line[:offset])
                _write_and_reset()
            elif tp in {tokenize.NEWLINE, tokenize.ENDMARKER}:
                _write_and_reset()
            elif lines and lineno not in seen_lines:
                lines.append(line)
                seen_lines.add(lineno)

    return ret

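The function above leans entirely on the ``tokenize`` module. A mini version of the same pass, collecting only the line numbers of ``assert`` statements (the full function additionally slices out the test expression and strips any assertion message):

```python
import io
import tokenize

# Source blob with a one-line assert (with message) and a multi-line assert.
src = b"x = 1\nassert x == 1, 'boom'\nassert (\n    x\n)\n"
assert_lines = [
    lineno
    for tp, tok, (lineno, _), _, _ in tokenize.tokenize(io.BytesIO(src).readline)
    if tp == tokenize.NAME and tok == "assert"
]
print(assert_lines)  # [2, 3]
```
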
class AssertionRewriter(ast.NodeVisitor):
    """Assertion rewriting implementation.

@@ -562,7 +528,8 @@ class AssertionRewriter(ast.NodeVisitor):

    original assert statement: it rewrites the test of an assertion
    to provide intermediate values and replace it with an if statement
    which raises an assertion error with a detailed explanation in
    case the expression is false.
    case the expression is false and calls pytest_assertion_pass hook
    if expression is true.

    For this .visit_Assert() uses the visitor pattern to visit all the
    AST nodes of the ast.Assert.test field, each visit call returning
@@ -580,9 +547,10 @@ class AssertionRewriter(ast.NodeVisitor):

    by statements. Variables are created using .variable() and
    have the form of "@py_assert0".

    :on_failure: The AST statements which will be executed if the
       assertion test fails. This is the code which will construct
       the failure message and raises the AssertionError.
    :expl_stmts: The AST statements which will be executed to get
       data from the assertion. This is the code which will construct
       the detailed assertion message that is used in the AssertionError
       or for the pytest_assertion_pass hook.

    :explanation_specifiers: A dict filled by .explanation_param()
       with %-formatting placeholders and their corresponding
@@ -598,10 +566,21 @@ class AssertionRewriter(ast.NodeVisitor):

    """

    def __init__(self, module_path, config):
    def __init__(self, module_path, config, source):
        super().__init__()
        self.module_path = module_path
        self.config = config
        if config is not None:
            self.enable_assertion_pass_hook = config.getini(
                "enable_assertion_pass_hook"
            )
        else:
            self.enable_assertion_pass_hook = False
        self.source = source

    @functools.lru_cache(maxsize=1)
    def _assert_expr_to_lineno(self):
        return _get_assertion_exprs(self.source)

    def run(self, mod):
        """Find all assert statements in *mod* and rewrite them."""
|
|||
|
||||
The expl_expr should be an ast.Str instance constructed from
|
||||
the %-placeholders created by .explanation_param(). This will
|
||||
add the required code to format said string to .on_failure and
|
||||
add the required code to format said string to .expl_stmts and
|
||||
return the ast.Name instance of the formatted string.
|
||||
|
||||
"""
|
||||
|
@@ -743,7 +722,9 @@ class AssertionRewriter(ast.NodeVisitor):

        format_dict = ast.Dict(keys, list(current.values()))
        form = ast.BinOp(expl_expr, ast.Mod(), format_dict)
        name = "@py_format" + str(next(self.variable_counter))
        self.on_failure.append(ast.Assign([ast.Name(name, ast.Store())], form))
        if self.enable_assertion_pass_hook:
            self.format_variables.append(name)
        self.expl_stmts.append(ast.Assign([ast.Name(name, ast.Store())], form))
        return ast.Name(name, ast.Load())

    def generic_visit(self, node):
@@ -770,15 +751,19 @@ class AssertionRewriter(ast.NodeVisitor):

                    "assertion is always true, perhaps remove parentheses?"
                ),
                category=None,
                filename=str(self.module_path),
                filename=self.module_path,
                lineno=assert_.lineno,
            )

        self.statements = []
        self.variables = []
        self.variable_counter = itertools.count()

        if self.enable_assertion_pass_hook:
            self.format_variables = []

        self.stack = []
        self.on_failure = []
        self.expl_stmts = []
        self.push_format_context()
        # Rewrite assert into a bunch of statements.
        top_condition, explanation = self.visit(assert_.test)
@@ -789,28 +774,81 @@ class AssertionRewriter(ast.NodeVisitor):

                    top_condition, module_path=self.module_path, lineno=assert_.lineno
                )
            )
        # Create failure message.
        body = self.on_failure
        negation = ast.UnaryOp(ast.Not(), top_condition)
        self.statements.append(ast.If(negation, body, []))
        if assert_.msg:
            assertmsg = self.helper("_format_assertmsg", assert_.msg)
            explanation = "\n>assert " + explanation
        else:
            assertmsg = ast.Str("")
            explanation = "assert " + explanation
        template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))
        msg = self.pop_format_context(template)
        fmt = self.helper("_format_explanation", msg)
        err_name = ast.Name("AssertionError", ast.Load())
        exc = ast.Call(err_name, [fmt], [])
        raise_ = ast.Raise(exc, None)

        body.append(raise_)
        if self.enable_assertion_pass_hook:  # Experimental pytest_assertion_pass hook
            negation = ast.UnaryOp(ast.Not(), top_condition)
            msg = self.pop_format_context(ast.Str(explanation))

            # Failed
            if assert_.msg:
                assertmsg = self.helper("_format_assertmsg", assert_.msg)
                gluestr = "\n>assert "
            else:
                assertmsg = ast.Str("")
                gluestr = "assert "
            err_explanation = ast.BinOp(ast.Str(gluestr), ast.Add(), msg)
            err_msg = ast.BinOp(assertmsg, ast.Add(), err_explanation)
            err_name = ast.Name("AssertionError", ast.Load())
            fmt = self.helper("_format_explanation", err_msg)
            exc = ast.Call(err_name, [fmt], [])
            raise_ = ast.Raise(exc, None)
            statements_fail = []
            statements_fail.extend(self.expl_stmts)
            statements_fail.append(raise_)

            # Passed
            fmt_pass = self.helper("_format_explanation", msg)
            orig = self._assert_expr_to_lineno()[assert_.lineno]
            hook_call_pass = ast.Expr(
                self.helper(
                    "_call_assertion_pass",
                    ast.Num(assert_.lineno),
                    ast.Str(orig),
                    fmt_pass,
                )
            )
            # If any hooks implement assert_pass hook
            hook_impl_test = ast.If(
                self.helper("_check_if_assertion_pass_impl"),
                self.expl_stmts + [hook_call_pass],
                [],
            )
            statements_pass = [hook_impl_test]

            # Test for assertion condition
            main_test = ast.If(negation, statements_fail, statements_pass)
            self.statements.append(main_test)
            if self.format_variables:
                variables = [
                    ast.Name(name, ast.Store()) for name in self.format_variables
                ]
                clear_format = ast.Assign(variables, ast.NameConstant(None))
                self.statements.append(clear_format)

        else:  # Original assertion rewriting
            # Create failure message.
            body = self.expl_stmts
            negation = ast.UnaryOp(ast.Not(), top_condition)
            self.statements.append(ast.If(negation, body, []))
            if assert_.msg:
                assertmsg = self.helper("_format_assertmsg", assert_.msg)
                explanation = "\n>assert " + explanation
            else:
                assertmsg = ast.Str("")
                explanation = "assert " + explanation
            template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))
            msg = self.pop_format_context(template)
            fmt = self.helper("_format_explanation", msg)
            err_name = ast.Name("AssertionError", ast.Load())
            exc = ast.Call(err_name, [fmt], [])
            raise_ = ast.Raise(exc, None)
|
||||
|
||||
body.append(raise_)
|
||||
|
||||
# Clear temporary variables by setting them to None.
|
||||
if self.variables:
|
||||
variables = [ast.Name(name, ast.Store()) for name in self.variables]
|
||||
clear = ast.Assign(variables, _NameConstant(None))
|
||||
clear = ast.Assign(variables, ast.NameConstant(None))
|
||||
self.statements.append(clear)
|
||||
# Fix line numbers.
|
||||
for stmt in self.statements:
|
||||
|
@ -829,7 +867,7 @@ class AssertionRewriter(ast.NodeVisitor):
|
|||
AST_NONE = ast.parse("None").body[0].value
|
||||
val_is_none = ast.Compare(node, [ast.Is()], [AST_NONE])
|
||||
send_warning = ast.parse(
|
||||
"""
|
||||
"""\
|
||||
from _pytest.warning_types import PytestAssertRewriteWarning
|
||||
from warnings import warn_explicit
|
||||
warn_explicit(
|
||||
|
@ -839,7 +877,7 @@ warn_explicit(
|
|||
lineno={lineno},
|
||||
)
|
||||
""".format(
|
||||
filename=module_path.strpath, lineno=lineno
|
||||
filename=module_path, lineno=lineno
|
||||
)
|
||||
).body
|
||||
return ast.If(val_is_none, send_warning, [])
|
||||
|
@ -860,22 +898,22 @@ warn_explicit(
|
|||
app = ast.Attribute(expl_list, "append", ast.Load())
|
||||
is_or = int(isinstance(boolop.op, ast.Or))
|
||||
body = save = self.statements
|
||||
fail_save = self.on_failure
|
||||
fail_save = self.expl_stmts
|
||||
levels = len(boolop.values) - 1
|
||||
self.push_format_context()
|
||||
# Process each operand, short-circuting if needed.
|
||||
# Process each operand, short-circuiting if needed.
|
||||
for i, v in enumerate(boolop.values):
|
||||
if i:
|
||||
fail_inner = []
|
||||
# cond is set in a prior loop iteration below
|
||||
self.on_failure.append(ast.If(cond, fail_inner, [])) # noqa
|
||||
self.on_failure = fail_inner
|
||||
self.expl_stmts.append(ast.If(cond, fail_inner, [])) # noqa
|
||||
self.expl_stmts = fail_inner
|
||||
self.push_format_context()
|
||||
res, expl = self.visit(v)
|
||||
body.append(ast.Assign([ast.Name(res_var, ast.Store())], res))
|
||||
expl_format = self.pop_format_context(ast.Str(expl))
|
||||
call = ast.Call(app, [expl_format], [])
|
||||
self.on_failure.append(ast.Expr(call))
|
||||
self.expl_stmts.append(ast.Expr(call))
|
||||
if i < levels:
|
||||
cond = res
|
||||
if is_or:
|
||||
|
@ -884,41 +922,29 @@ warn_explicit(
|
|||
self.statements.append(ast.If(cond, inner, []))
|
||||
self.statements = body = inner
|
||||
self.statements = save
|
||||
self.on_failure = fail_save
|
||||
self.expl_stmts = fail_save
|
||||
expl_template = self.helper("_format_boolop", expl_list, ast.Num(is_or))
|
||||
expl = self.pop_format_context(expl_template)
|
||||
return ast.Name(res_var, ast.Load()), self.explanation_param(expl)
|
||||
|
||||
def visit_UnaryOp(self, unary):
|
||||
pattern = unary_map[unary.op.__class__]
|
||||
pattern = UNARY_MAP[unary.op.__class__]
|
||||
operand_res, operand_expl = self.visit(unary.operand)
|
||||
res = self.assign(ast.UnaryOp(unary.op, operand_res))
|
||||
return res, pattern % (operand_expl,)
|
||||
|
||||
def visit_BinOp(self, binop):
|
||||
symbol = binop_map[binop.op.__class__]
|
||||
symbol = BINOP_MAP[binop.op.__class__]
|
||||
left_expr, left_expl = self.visit(binop.left)
|
||||
right_expr, right_expl = self.visit(binop.right)
|
||||
explanation = "({} {} {})".format(left_expl, symbol, right_expl)
|
||||
res = self.assign(ast.BinOp(left_expr, binop.op, right_expr))
|
||||
return res, explanation
|
||||
|
||||
@staticmethod
|
||||
def _is_any_call_with_generator_or_list_comprehension(call):
|
||||
"""Return True if the Call node is an 'any' call with a generator or list comprehension"""
|
||||
return (
|
||||
isinstance(call.func, ast.Name)
|
||||
and call.func.id == "all"
|
||||
and len(call.args) == 1
|
||||
and isinstance(call.args[0], (ast.GeneratorExp, ast.ListComp))
|
||||
)
|
||||
|
||||
def visit_Call(self, call):
|
||||
"""
|
||||
visit `ast.Call` nodes
|
||||
"""
|
||||
if self._is_any_call_with_generator_or_list_comprehension(call):
|
||||
return self._visit_all(call)
|
||||
new_func, func_expl = self.visit(call.func)
|
||||
arg_expls = []
|
||||
new_args = []
|
||||
|
@ -942,25 +968,6 @@ warn_explicit(
|
|||
outer_expl = "{}\n{{{} = {}\n}}".format(res_expl, res_expl, expl)
|
||||
return res, outer_expl
|
||||
|
||||
def _visit_all(self, call):
|
||||
"""Special rewrite for the builtin all function, see #5062"""
|
||||
gen_exp = call.args[0]
|
||||
assertion_module = ast.Module(
|
||||
body=[ast.Assert(test=gen_exp.elt, lineno=1, msg="", col_offset=1)]
|
||||
)
|
||||
AssertionRewriter(module_path=None, config=None).run(assertion_module)
|
||||
for_loop = ast.For(
|
||||
iter=gen_exp.generators[0].iter,
|
||||
target=gen_exp.generators[0].target,
|
||||
body=assertion_module.body,
|
||||
orelse=[],
|
||||
)
|
||||
self.statements.append(for_loop)
|
||||
return (
|
||||
ast.Num(n=1),
|
||||
"",
|
||||
) # Return an empty expression, all the asserts are in the for_loop
|
||||
|
||||
def visit_Starred(self, starred):
|
||||
# From Python 3.5, a Starred node can appear in a function call
|
||||
res, expl = self.visit(starred.value)
|
||||
|
@ -994,7 +1001,7 @@ warn_explicit(
|
|||
if isinstance(next_operand, (ast.Compare, ast.BoolOp)):
|
||||
next_expl = "({})".format(next_expl)
|
||||
results.append(next_res)
|
||||
sym = binop_map[op.__class__]
|
||||
sym = BINOP_MAP[op.__class__]
|
||||
syms.append(ast.Str(sym))
|
||||
expl = "{} {} {}".format(left_expl, sym, next_expl)
|
||||
expls.append(ast.Str(expl))
|
||||
|
|
|
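The hunks above rewrite an `assert` into an `ast.If` whose test is the negated condition and whose body raises `AssertionError`. A miniature, hypothetical sketch of that core trick (using `ast.Constant`, where the diff's era used `ast.Str`/`ast.Num`; the message string is invented for illustration):

```python
import ast

# Parse an assert, then replace it with: if not <test>: raise AssertionError(...)
tree = ast.parse("assert x > 0")
assert_node = tree.body[0]

negation = ast.UnaryOp(op=ast.Not(), operand=assert_node.test)
raise_ = ast.Raise(
    exc=ast.Call(
        func=ast.Name(id="AssertionError", ctx=ast.Load()),
        args=[ast.Constant(value="rewritten: x > 0 failed")],
        keywords=[],
    ),
    cause=None,
)
tree.body[0] = ast.If(test=negation, body=[raise_], orelse=[])
ast.fix_missing_locations(tree)  # same job as the "Fix line numbers." loop above

code = compile(tree, "<rewritten>", "exec")
exec(code, {"x": 1})  # condition holds, nothing raised

try:
    exec(code, {"x": -1})
except AssertionError as e:
    message = str(e)
```

The real rewriter additionally threads an explanation string through helper calls so the raised message shows intermediate values.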
@@ -12,14 +12,9 @@ from _pytest._io.saferepr import saferepr
 # DebugInterpreter.
 _reprcompare = None

-
-# the re-encoding is needed for python2 repr
-# with non-ascii characters (see issue 877 and 1379)
-def ecu(s):
-    if isinstance(s, bytes):
-        return s.decode("UTF-8", "replace")
-    else:
-        return s
+# Works similarly as _reprcompare attribute. Is populated with the hook call
+# when pytest_runtest_setup is called.
+_assertion_pass = None


 def format_explanation(explanation):

@@ -32,7 +27,7 @@ def format_explanation(explanation):
     for when one explanation needs to span multiple lines, e.g. when
     displaying diffs.
     """
-    explanation = ecu(explanation)
+    explanation = explanation
     lines = _split_explanation(explanation)
     result = _format_lines(lines)
     return "\n".join(result)

@@ -90,19 +85,12 @@ def _format_lines(lines):
     return result


-# Provide basestring in python3
-try:
-    basestring = basestring
-except NameError:
-    basestring = str
-
-
 def issequence(x):
-    return isinstance(x, Sequence) and not isinstance(x, basestring)
+    return isinstance(x, Sequence) and not isinstance(x, str)


 def istext(x):
-    return isinstance(x, basestring)
+    return isinstance(x, str)


 def isdict(x):

@@ -135,7 +123,7 @@ def assertrepr_compare(config, op, left, right):
     left_repr = saferepr(left, maxsize=int(width // 2))
     right_repr = saferepr(right, maxsize=width - len(left_repr))

-    summary = "{} {} {}".format(ecu(left_repr), op, ecu(right_repr))
+    summary = "{} {} {}".format(left_repr, op, right_repr)

     verbose = config.getoption("verbose")
     explanation = None

@@ -260,17 +248,9 @@ def _compare_eq_iterable(left, right, verbose=0):
     # dynamic import to speedup pytest
     import difflib

-    try:
-        left_formatting = pprint.pformat(left).splitlines()
-        right_formatting = pprint.pformat(right).splitlines()
-        explanation = ["Full diff:"]
-    except Exception:
-        # hack: PrettyPrinter.pformat() in python 2 fails when formatting items that can't be sorted(), ie, calling
-        # sorted() on a list would raise. See issue #718.
-        # As a workaround, the full diff is generated by using the repr() string of each item of each container.
-        left_formatting = sorted(repr(x) for x in left)
-        right_formatting = sorted(repr(x) for x in right)
-        explanation = ["Full diff (fallback to calling repr on each item):"]
+    left_formatting = pprint.pformat(left).splitlines()
+    right_formatting = pprint.pformat(right).splitlines()
+    explanation = ["Full diff:"]
     explanation.extend(
         line.strip() for line in difflib.ndiff(left_formatting, right_formatting)
     )

@@ -278,17 +258,38 @@ def _compare_eq_iterable(left, right, verbose=0):


 def _compare_eq_sequence(left, right, verbose=0):
+    comparing_bytes = isinstance(left, bytes) and isinstance(right, bytes)
     explanation = []
     len_left = len(left)
     len_right = len(right)
     for i in range(min(len_left, len_right)):
         if left[i] != right[i]:
+            if comparing_bytes:
+                # when comparing bytes, we want to see their ascii representation
+                # instead of their numeric values (#5260)
+                # using a slice gives us the ascii representation:
+                # >>> s = b'foo'
+                # >>> s[0]
+                # 102
+                # >>> s[0:1]
+                # b'f'
+                left_value = left[i : i + 1]
+                right_value = right[i : i + 1]
+            else:
+                left_value = left[i]
+                right_value = right[i]
+
             explanation += [
-                "At index {} diff: {!r} != {!r}".format(i, left[i], right[i])
+                "At index {} diff: {!r} != {!r}".format(i, left_value, right_value)
             ]
             break
-    len_diff = len_left - len_right
+
+    if comparing_bytes:
+        # when comparing bytes, it doesn't help to show the "sides contain one or more items"
+        # longer explanation, so skip it
+        return explanation
+
+    len_diff = len_left - len_right
     if len_diff:
         if len_diff > 0:
             dir_with_more = "Left"
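The bytes-slicing change in `_compare_eq_sequence` (#5260) exists because indexing a `bytes` object yields an `int` while slicing yields a one-byte `bytes`. A small standalone illustration of the difference (the sample values are invented):

```python
# Find the first differing position between two byte strings.
left, right = b"spam", b"spXm"
i = next(i for i in range(min(len(left), len(right))) if left[i] != right[i])

numeric = (left[i], right[i])                    # ints, e.g. ordinals
readable = (left[i : i + 1], right[i : i + 1])   # bytes, e.g. b'a' vs b'X'
msg = "At index {} diff: {!r} != {!r}".format(i, *readable)
```

With the old indexing form the message would have shown the ordinals instead of the characters, which is much harder to read.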
@@ -10,6 +10,7 @@ from contextlib import contextmanager
 from inspect import Parameter
 from inspect import signature

+import attr
 import py

 import _pytest

@@ -29,10 +30,6 @@ def _format_args(func):
     return str(signature(func))


-isfunction = inspect.isfunction
-isclass = inspect.isclass
-# used to work around a python2 exception info leak
-exc_clear = getattr(sys, "exc_clear", lambda: None)
 # The type of re.compile objects is not exposed in Python.
 REGEX_TYPE = type(re.compile(""))

@@ -129,11 +126,15 @@ def getfuncargnames(function, is_method=False, cls=None):
     return arg_names


-@contextmanager
-def dummy_context_manager():
-    """Context manager that does nothing, useful in situations where you might need an actual context manager or not
-    depending on some condition. Using this allow to keep the same code"""
-    yield
+if sys.version_info < (3, 7):
+
+    @contextmanager
+    def nullcontext():
+        yield
+
+
+else:
+    from contextlib import nullcontext  # noqa


 def get_default_arg_names(function):

@@ -170,7 +171,7 @@ def ascii_escaped(val):
     """If val is pure ascii, returns it as a str().  Otherwise, escapes
     bytes objects into a sequence of escaped bytes:

-    b'\xc3\xb4\xc5\xd6' -> u'\\xc3\\xb4\\xc5\\xd6'
+    b'\xc3\xb4\xc5\xd6' -> '\\xc3\\xb4\\xc5\\xd6'

     and escapes unicode objects into a sequence of escaped unicode
     ids, e.g.:

@@ -191,6 +192,7 @@ def ascii_escaped(val):
     return _translate_non_printable(ret)


+@attr.s
 class _PytestWrapper:
     """Dummy wrapper around a function object for internal use only.

@@ -199,8 +201,7 @@ class _PytestWrapper:
     to issue warnings when the fixture function is called directly.
     """

-    def __init__(self, obj):
-        self.obj = obj
+    obj = attr.ib()


 def get_real_func(obj):

@@ -280,7 +281,7 @@ def safe_getattr(object, name, default):
 def safe_isclass(obj):
     """Ignore any exception via isinstance on Python 3."""
     try:
-        return isclass(obj)
+        return inspect.isclass(obj)
     except Exception:
         return False

@@ -304,8 +305,8 @@ def _setup_collect_fakemodule():

     pytest.collect = ModuleType("pytest.collect")
     pytest.collect.__all__ = []  # used for setns
-    for attr in COLLECT_FAKEMODULE_ATTRIBUTES:
-        setattr(pytest.collect, attr, getattr(pytest, attr))
+    for attr_name in COLLECT_FAKEMODULE_ATTRIBUTES:
+        setattr(pytest.collect, attr_name, getattr(pytest, attr_name))


 class CaptureIO(io.TextIOWrapper):

@@ -324,4 +325,8 @@ class FuncargnamesCompatAttr:
     @property
     def funcargnames(self):
         """ alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
+        import warnings
+        from _pytest.deprecated import FUNCARGNAMES
+
+        warnings.warn(FUNCARGNAMES, stacklevel=2)
         return self.fixturenames
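The compat hunk above replaces the home-grown `dummy_context_manager` with `contextlib.nullcontext`, backporting it for interpreters older than 3.7. A self-contained sketch of the same shim and its typical call-site pattern (note the stdlib version also accepts an `enter_result` argument, which the diff's minimal backport omits; `open_or_passthrough` is a hypothetical example, not pytest code):

```python
import sys
from contextlib import contextmanager

if sys.version_info >= (3, 7):
    from contextlib import nullcontext
else:
    # minimal stand-in for older interpreters, mirroring the diff's backport
    @contextmanager
    def nullcontext(enter_result=None):
        yield enter_result

def open_or_passthrough(maybe_path):
    # pick a real context manager or the no-op one, so the caller can
    # always use a single `with` statement
    return open(maybe_path) if maybe_path else nullcontext()

with nullcontext() as res:
    pass  # nothing acquired, nothing released
```

This is exactly why `live_logs_context` in the logging plugin below can be a plain lambda: either branch returns something usable in a `with` block.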
@@ -23,7 +23,6 @@ from .exceptions import PrintHelp
 from .exceptions import UsageError
 from .findpaths import determine_setup
 from .findpaths import exists
-from _pytest import deprecated
 from _pytest._code import ExceptionInfo
 from _pytest._code import filter_traceback
 from _pytest.outcomes import fail

@@ -49,7 +48,7 @@ def main(args=None, plugins=None):
     :arg plugins: list of plugin objects to be auto-registered during
                   initialization.
     """
-    from _pytest.main import EXIT_USAGEERROR
+    from _pytest.main import ExitCode

     try:
         try:

@@ -79,7 +78,7 @@ def main(args=None, plugins=None):
         tw = py.io.TerminalWriter(sys.stderr)
         for msg in e.args:
             tw.line("ERROR: {}\n".format(msg), red=True)
-        return EXIT_USAGEERROR
+        return ExitCode.USAGE_ERROR


 class cmdline:  # compatibility namespace

@@ -141,6 +140,7 @@ default_plugins = essential_plugins + (
     "warnings",
     "logging",
     "reports",
+    "faulthandler",
 )

 builtin_plugins = set(default_plugins)

@@ -242,16 +242,6 @@ class PytestPluginManager(PluginManager):
         # Used to know when we are importing conftests after the pytest_configure stage
         self._configured = False

-    def addhooks(self, module_or_class):
-        """
-        .. deprecated:: 2.8
-
-        Use :py:meth:`pluggy.PluginManager.add_hookspecs <PluginManager.add_hookspecs>`
-        instead.
-        """
-        warnings.warn(deprecated.PLUGIN_MANAGER_ADDHOOKS, stacklevel=2)
-        return self.add_hookspecs(module_or_class)
-
     def parse_hookimpl_opts(self, plugin, name):
         # pytest hooks are always prefixed with pytest_
         # so we avoid accessing possibly non-readable attributes

@@ -299,7 +289,7 @@ class PytestPluginManager(PluginManager):
         return opts

     def register(self, plugin, name=None):
-        if name in ["pytest_catchlog", "pytest_capturelog"]:
+        if name in _pytest.deprecated.DEPRECATED_EXTERNAL_PLUGINS:
             warnings.warn(
                 PytestConfigWarning(
                     "{} plugin has been merged into the core, "

@@ -784,7 +774,7 @@ class Config:
             str(file)
             for dist in importlib_metadata.distributions()
             if any(ep.group == "pytest11" for ep in dist.entry_points)
-            for file in dist.files
+            for file in dist.files or []
         )

         for name in _iter_rewritable_modules(package_files):
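The `for file in dist.files or []` change in the last hunk guards against distributions whose metadata is incomplete: `importlib.metadata` returns `None` for `files` when neither `RECORD` nor `SOURCES.txt` is present, and iterating `None` inside the comprehension would raise `TypeError`. A toy reproduction of the pattern (the dicts stand in for distribution objects and are invented):

```python
# Each dict mimics a distribution; "broken-metadata" has files=None,
# as importlib.metadata reports when the installed-files list is missing.
dists = [
    {"name": "plugin-a", "files": ["a.py", "a_helper.py"]},
    {"name": "broken-metadata", "files": None},
]

# `x or []` turns the None case into an empty iteration instead of a crash.
package_files = [f for dist in dists for f in dist["files"] or []]
```

Without the guard, the second "distribution" would abort plugin rewriting for every environment containing one badly-packaged install.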
@@ -14,6 +14,14 @@ from _pytest.warning_types import UnformattedWarning

 YIELD_TESTS = "yield tests were removed in pytest 4.0 - {name} will be ignored"

+# set of plugins which have been integrated into the core; we use this list to ignore
+# them during registration to avoid conflicts
+DEPRECATED_EXTERNAL_PLUGINS = {
+    "pytest_catchlog",
+    "pytest_capturelog",
+    "pytest_faulthandler",
+}
+

 FIXTURE_FUNCTION_CALL = (
     'Fixture "{name}" called directly. Fixtures are not meant to be called directly,\n'

@@ -32,15 +40,20 @@ GETFUNCARGVALUE = RemovedInPytest4Warning(
     "getfuncargvalue is deprecated, use getfixturevalue"
 )

+FUNCARGNAMES = PytestDeprecationWarning(
+    "The `funcargnames` attribute was an alias for `fixturenames`, "
+    "since pytest 2.3 - use the newer attribute instead."
+)
+
 RAISES_MESSAGE_PARAMETER = PytestDeprecationWarning(
     "The 'message' parameter is deprecated.\n"
     "(did you mean to use `match='some regex'` to check the exception message?)\n"
-    "Please comment on https://github.com/pytest-dev/pytest/issues/3974 "
-    "if you have concerns about removal of this parameter."
+    "Please see:\n"
+    "  https://docs.pytest.org/en/4.6-maintenance/deprecations.html#message-parameter-of-pytest-raises"
 )

 RESULT_LOG = PytestDeprecationWarning(
-    "--result-log is deprecated and scheduled for removal in pytest 5.0.\n"
+    "--result-log is deprecated and scheduled for removal in pytest 6.0.\n"
     "See https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log for more information."
 )
@@ -3,6 +3,7 @@ import inspect
 import platform
 import sys
 import traceback
+import warnings
 from contextlib import contextmanager

 import pytest

@@ -12,6 +13,7 @@ from _pytest._code.code import TerminalRepr
 from _pytest.compat import safe_getattr
 from _pytest.fixtures import FixtureRequest
 from _pytest.outcomes import Skipped
+from _pytest.warning_types import PytestWarning

 DOCTEST_REPORT_CHOICE_NONE = "none"
 DOCTEST_REPORT_CHOICE_CDIFF = "cdiff"

@@ -362,22 +364,27 @@ def _patch_unwrap_mock_aware():
     contextmanager which replaces ``inspect.unwrap`` with a version
     that's aware of mock objects and doesn't recurse on them
     """
-    real_unwrap = getattr(inspect, "unwrap", None)
-    if real_unwrap is None:
-        yield
-    else:
+    real_unwrap = inspect.unwrap

-        def _mock_aware_unwrap(obj, stop=None):
-            if stop is None:
-                return real_unwrap(obj, stop=_is_mocked)
-            else:
-                return real_unwrap(obj, stop=lambda obj: _is_mocked(obj) or stop(obj))
-
-        inspect.unwrap = _mock_aware_unwrap
-        try:
-            yield
-        finally:
-            inspect.unwrap = real_unwrap
+    def _mock_aware_unwrap(obj, stop=None):
+        try:
+            if stop is None or stop is _is_mocked:
+                return real_unwrap(obj, stop=_is_mocked)
+            return real_unwrap(obj, stop=lambda obj: _is_mocked(obj) or stop(obj))
+        except Exception as e:
+            warnings.warn(
+                "Got %r when unwrapping %r. This is usually caused "
+                "by a violation of Python's object protocol; see e.g. "
+                "https://github.com/pytest-dev/pytest/issues/5080" % (e, obj),
+                PytestWarning,
+            )
+            raise
+
+    inspect.unwrap = _mock_aware_unwrap
+    try:
+        yield
+    finally:
+        inspect.unwrap = real_unwrap


 class DoctestModule(pytest.Module):
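The patched `_mock_aware_unwrap` above leans on the `stop` callback of `inspect.unwrap`: unwrapping follows the `__wrapped__` chain until the predicate returns true, at which point the current object is returned instead of the innermost one. A minimal demonstration with a two-layer `functools.wraps` chain (the function names are invented):

```python
import functools
import inspect

def base():
    pass

@functools.wraps(base)
def layer1():
    pass

@functools.wraps(layer1)
def layer2():
    pass

# No predicate: walk __wrapped__ links all the way down to `base`.
fully = inspect.unwrap(layer2)

# With a predicate: stop as soon as we reach layer1, mirroring how the
# doctest patch stops at mock objects instead of recursing into them.
halted = inspect.unwrap(layer2, stop=lambda f: f is layer1)
```

In the diff, the predicate is `_is_mocked` (optionally combined with a caller-supplied `stop`), which prevents infinite recursion on mock objects that fabricate `__wrapped__` attributes on demand.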
@@ -0,0 +1,86 @@
+import io
+import os
+import sys
+
+import pytest
+
+
+def pytest_addoption(parser):
+    help = (
+        "Dump the traceback of all threads if a test takes "
+        "more than TIMEOUT seconds to finish.\n"
+        "Not available on Windows."
+    )
+    parser.addini("faulthandler_timeout", help, default=0.0)
+
+
+def pytest_configure(config):
+    import faulthandler
+
+    # avoid trying to dup sys.stderr if faulthandler is already enabled
+    if faulthandler.is_enabled():
+        return
+
+    stderr_fd_copy = os.dup(_get_stderr_fileno())
+    config.fault_handler_stderr = os.fdopen(stderr_fd_copy, "w")
+    faulthandler.enable(file=config.fault_handler_stderr)
+
+
+def _get_stderr_fileno():
+    try:
+        return sys.stderr.fileno()
+    except (AttributeError, io.UnsupportedOperation):
+        # python-xdist monkeypatches sys.stderr with an object that is not an actual file.
+        # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
+        # This is potentially dangerous, but the best we can do.
+        return sys.__stderr__.fileno()
+
+
+def pytest_unconfigure(config):
+    import faulthandler
+
+    faulthandler.disable()
+    # close our dup file installed during pytest_configure
+    f = getattr(config, "fault_handler_stderr", None)
+    if f is not None:
+        # re-enable the faulthandler, attaching it to the default sys.stderr
+        # so we can see crashes after pytest has finished, usually during
+        # garbage collection during interpreter shutdown
+        config.fault_handler_stderr.close()
+        del config.fault_handler_stderr
+        faulthandler.enable(file=_get_stderr_fileno())
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item):
+    timeout = float(item.config.getini("faulthandler_timeout") or 0.0)
+    if timeout > 0:
+        import faulthandler
+
+        stderr = item.config.fault_handler_stderr
+        faulthandler.dump_traceback_later(timeout, file=stderr)
+        try:
+            yield
+        finally:
+            faulthandler.cancel_dump_traceback_later()
+    else:
+        yield
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_enter_pdb():
+    """Cancel any traceback dumping due to timeout before entering pdb.
+    """
+    import faulthandler
+
+    faulthandler.cancel_dump_traceback_later()
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_exception_interact():
+    """Cancel any traceback dumping due to an interactive exception being
+    raised.
+    """
+    import faulthandler
+
+    faulthandler.cancel_dump_traceback_later()
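The new plugin above duplicates the stderr file descriptor and hands `faulthandler` its own file object, so capture plugins that later replace `sys.stderr` cannot invalidate the handler's target. A rough standalone sketch of the mechanism, using a temporary file in place of the dup'd stderr and `dump_traceback` to stand in for what a timeout would emit (this setup is an assumption for demonstration, not how the plugin is exercised in pytest):

```python
import faulthandler
import tempfile

# Give faulthandler a dedicated file object, as pytest_configure does with
# the dup'd stderr fd; faulthandler writes straight to the file descriptor.
with tempfile.TemporaryFile(mode="w+") as target:
    faulthandler.enable(file=target)
    faulthandler.dump_traceback(file=target)  # roughly what a timeout emits
    faulthandler.disable()
    target.seek(0)
    dump = target.read()
```

Because `faulthandler` operates on raw file descriptors (it must be async-signal-safe), the object passed to `enable`/`dump_traceback` has to expose a real `fileno()`, which is why the plugin cannot simply pass a monkeypatched `sys.stderr` from xdist.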
@@ -16,7 +16,6 @@ from _pytest._code.code import FormattedExcinfo
 from _pytest._code.code import TerminalRepr
 from _pytest.compat import _format_args
 from _pytest.compat import _PytestWrapper
-from _pytest.compat import exc_clear
 from _pytest.compat import FuncargnamesCompatAttr
 from _pytest.compat import get_real_func
 from _pytest.compat import get_real_method

@@ -25,7 +24,6 @@ from _pytest.compat import getfuncargnames
 from _pytest.compat import getimfunc
 from _pytest.compat import getlocation
 from _pytest.compat import is_generator
-from _pytest.compat import isclass
 from _pytest.compat import NOTSET
 from _pytest.compat import safe_getattr
 from _pytest.deprecated import FIXTURE_FUNCTION_CALL

@@ -572,10 +570,6 @@ class FixtureRequest(FuncargnamesCompatAttr):

         # check if a higher-level scoped fixture accesses a lower level one
         subrequest._check_scope(argname, self.scope, scope)

-        # clear sys.exc_info before invoking the fixture (python bug?)
-        # if it's not explicitly cleared it will leak into the call
-        exc_clear()
         try:
             # call the fixture function
             fixturedef.execute(request=subrequest)

@@ -660,7 +654,7 @@ class SubRequest(FixtureRequest):
         # if the executing fixturedef was not explicitly requested in the argument list (via
         # getfixturevalue inside the fixture call) then ensure this fixture def will be finished
         # first
-        if fixturedef.argname not in self.funcargnames:
+        if fixturedef.argname not in self.fixturenames:
             fixturedef.addfinalizer(
                 functools.partial(self._fixturedef.finish, request=self)
             )

@@ -970,7 +964,7 @@ class FixtureFunctionMarker:
     name = attr.ib(default=None)

     def __call__(self, function):
-        if isclass(function):
+        if inspect.isclass(function):
             raise ValueError("class fixtures not supported (maybe in the future)")

         if getattr(function, "_pytestfixturefunction", False):
@@ -485,6 +485,42 @@ def pytest_assertrepr_compare(config, op, left, right):
     """


+def pytest_assertion_pass(item, lineno, orig, expl):
+    """
+    **(Experimental)**
+
+    Hook called whenever an assertion *passes*.
+
+    Use this hook to do some processing after a passing assertion.
+    The original assertion information is available in the `orig` string
+    and the pytest introspected assertion information is available in the
+    `expl` string.
+
+    This hook must be explicitly enabled by the ``enable_assertion_pass_hook``
+    ini-file option:
+
+    .. code-block:: ini
+
+        [pytest]
+        enable_assertion_pass_hook=true
+
+    You need to **clean the .pyc** files in your project directory and interpreter libraries
+    when enabling this option, as assertions will require to be re-written.
+
+    :param _pytest.nodes.Item item: pytest item object of current test
+    :param int lineno: line number of the assert statement
+    :param string orig: string with original assertion
+    :param string expl: string with assert explanation
+
+    .. note::
+
+        This hook is **experimental**, so its parameters or even the hook itself might
+        be changed/removed without warning in any future pytest release.
+
+        If you find this hook useful, please share your feedback opening an issue.
+    """
+
+
 # -------------------------------------------------------------------------
 # hooks for influencing reporting (invoked from _pytest_terminal)
 # -------------------------------------------------------------------------
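A minimal `conftest.py`-style sketch of implementing the experimental hook declared above; recording into a module-level list is just one hypothetical use, and the hook only fires when `enable_assertion_pass_hook=true` is set and stale `.pyc` files have been removed:

```python
# conftest.py (sketch)
passed_assertions = []

def pytest_assertion_pass(item, lineno, orig, expl):
    # `orig` is the assertion source text, `expl` pytest's introspected
    # explanation; store them for later inspection or metrics.
    passed_assertions.append((lineno, orig, expl))
```

Because the hook is experimental, treat its signature as unstable and guard any tooling built on it accordingly.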
@@ -6,7 +6,7 @@ from contextlib import contextmanager
 import py

 import pytest
-from _pytest.compat import dummy_context_manager
+from _pytest.compat import nullcontext
 from _pytest.config import create_terminal_writer
 from _pytest.pathlib import Path

@@ -409,10 +409,6 @@ class LoggingPlugin:
         """
         self._config = config

-        # enable verbose output automatically if live logging is enabled
-        if self._log_cli_enabled() and config.getoption("verbose") < 1:
-            config.option.verbose = 1
-
         self.print_logs = get_option_ini(config, "log_print")
         self.formatter = self._create_formatter(
             get_option_ini(config, "log_format"),

@@ -440,7 +436,7 @@ class LoggingPlugin:

         self.log_cli_handler = None

-        self.live_logs_context = lambda: dummy_context_manager()
+        self.live_logs_context = lambda: nullcontext()
         # Note that the lambda for the live_logs_context is needed because
         # live_logs_context can otherwise not be entered multiple times due
         # to limitations of contextlib.contextmanager.

@@ -628,6 +624,15 @@ class LoggingPlugin:
     @pytest.hookimpl(hookwrapper=True)
     def pytest_runtestloop(self, session):
         """Runs all collected test items."""
+
+        if session.config.option.collectonly:
+            yield
+            return
+
+        if self._log_cli_enabled() and self._config.getoption("verbose") < 1:
+            # setting verbose flag is needed to avoid messy test progress output
+            self._config.option.verbose = 1
+
         with self.live_logs_context():
             if self.log_file_handler is not None:
                 with catching_logs(self.log_file_handler, level=self.log_file_level):

@@ -671,7 +676,7 @@ class _LiveLoggingStreamHandler(logging.StreamHandler):
         ctx_manager = (
             self.capture_manager.global_and_fixture_disabled()
             if self.capture_manager
-            else dummy_context_manager()
+            else nullcontext()
         )
         with ctx_manager:
             if not self._first_record_emitted:
@ -1,8 +1,9 @@
|
|||
""" core implementation of testing process: init, session, runtest loop. """
|
||||
import enum
|
||||
import fnmatch
|
||||
import functools
|
||||
import importlib
|
||||
import os
|
||||
import pkgutil
|
||||
import sys
|
||||
import warnings
|
||||
|
||||
|
@ -18,13 +19,26 @@ from _pytest.deprecated import PYTEST_CONFIG_GLOBAL
|
|||
from _pytest.outcomes import exit
|
||||
from _pytest.runner import collect_one_node
|
||||
|
||||
# exitcodes for the command line
|
||||
EXIT_OK = 0
|
||||
EXIT_TESTSFAILED = 1
|
||||
EXIT_INTERRUPTED = 2
|
||||
EXIT_INTERNALERROR = 3
|
||||
EXIT_USAGEERROR = 4
|
||||
EXIT_NOTESTSCOLLECTED = 5
|
||||
|
||||
class ExitCode(enum.IntEnum):
|
||||
"""
|
||||
Encodes the valid exit codes by pytest.
|
||||
|
||||
Currently users and plugins may supply other exit codes as well.
|
||||
"""
|
||||
|
||||
#: tests passed
|
||||
OK = 0
|
||||
#: tests failed
|
||||
TESTS_FAILED = 1
|
||||
#: pytest was interrupted
|
||||
INTERRUPTED = 2
|
||||
#: an internal error got in the way
|
||||
INTERNAL_ERROR = 3
|
||||
#: pytest was misused
|
||||
USAGE_ERROR = 4
|
||||
#: pytest couldn't find tests
|
||||
NO_TESTS_COLLECTED = 5
|
||||
|
||||
|
||||
def pytest_addoption(parser):
|
||||
|
@@ -188,7 +202,7 @@ def pytest_configure(config):
 def wrap_session(config, doit):
     """Skeleton command line program"""
     session = Session(config)
-    session.exitstatus = EXIT_OK
+    session.exitstatus = ExitCode.OK
     initstate = 0
     try:
         try:

@@ -198,13 +212,13 @@ def wrap_session(config, doit):
             initstate = 2
             session.exitstatus = doit(config, session) or 0
         except UsageError:
-            session.exitstatus = EXIT_USAGEERROR
+            session.exitstatus = ExitCode.USAGE_ERROR
             raise
         except Failed:
-            session.exitstatus = EXIT_TESTSFAILED
+            session.exitstatus = ExitCode.TESTS_FAILED
         except (KeyboardInterrupt, exit.Exception):
             excinfo = _pytest._code.ExceptionInfo.from_current()
-            exitstatus = EXIT_INTERRUPTED
+            exitstatus = ExitCode.INTERRUPTED
             if isinstance(excinfo.value, exit.Exception):
                 if excinfo.value.returncode is not None:
                     exitstatus = excinfo.value.returncode

@@ -217,7 +231,7 @@ def wrap_session(config, doit):
         except:  # noqa
             excinfo = _pytest._code.ExceptionInfo.from_current()
             config.notify_exception(excinfo, config.option)
-            session.exitstatus = EXIT_INTERNALERROR
+            session.exitstatus = ExitCode.INTERNAL_ERROR
             if excinfo.errisinstance(SystemExit):
                 sys.stderr.write("mainloop: caught unexpected SystemExit!\n")

@@ -243,9 +257,9 @@ def _main(config, session):
     config.hook.pytest_runtestloop(session=session)

     if session.testsfailed:
-        return EXIT_TESTSFAILED
+        return ExitCode.TESTS_FAILED
     elif session.testscollected == 0:
-        return EXIT_NOTESTSCOLLECTED
+        return ExitCode.NO_TESTS_COLLECTED


 def pytest_collection(session):
@@ -616,21 +630,18 @@ class Session(nodes.FSCollector):
     def _tryconvertpyarg(self, x):
         """Convert a dotted module name to path."""
         try:
-            loader = pkgutil.find_loader(x)
-        except ImportError:
+            spec = importlib.util.find_spec(x)
+        # AttributeError: looks like package module, but actually filename
+        # ImportError: module does not exist
+        # ValueError: not a module name
+        except (AttributeError, ImportError, ValueError):
             return x
-        if loader is None:
+        if spec is None or spec.origin in {None, "namespace"}:
             return x
-        # This method is sometimes invoked when AssertionRewritingHook, which
-        # does not define a get_filename method, is already in place:
-        try:
-            path = loader.get_filename(x)
-        except AttributeError:
-            # Retrieve path from AssertionRewritingHook:
-            path = loader.modules[x][0].co_filename
-        if loader.is_package(x):
-            path = os.path.dirname(path)
-        return path
+        elif spec.submodule_search_locations:
+            return os.path.dirname(spec.origin)
+        else:
+            return spec.origin

     def _parsearg(self, arg):
         """ return (fspath, names) tuple after checking the file exists. """
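The rewritten method resolves `--pyargs` names through `importlib.util.find_spec` instead of the deprecated `pkgutil.find_loader` path. A standalone sketch of the new resolution logic (the function name here is ours, not pytest's):

```python
import importlib.util
import os

def try_convert_pyarg(name):
    """Resolve a dotted module name to a filesystem path, mirroring the
    new _tryconvertpyarg logic; unknown names are returned unchanged."""
    try:
        spec = importlib.util.find_spec(name)
    # AttributeError: looks like a package module, but is actually a filename
    # ImportError: the module does not exist
    # ValueError: not a valid module name
    except (AttributeError, ImportError, ValueError):
        return name
    if spec is None or spec.origin in {None, "namespace"}:
        return name
    if spec.submodule_search_locations:
        # Packages resolve to their directory.
        return os.path.dirname(spec.origin)
    return spec.origin

print(try_convert_pyarg("json"))             # the json package directory
print(try_convert_pyarg("not_a_real_name"))  # returned unchanged
```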
@@ -102,10 +102,7 @@ class ParameterSet(namedtuple("ParameterSet", "values, marks, id")):
         return cls(parameterset, marks=[], id=None)

     @staticmethod
-    def _parse_parametrize_args(argnames, argvalues, **_):
-        """It receives an ignored _ (kwargs) argument so this function can
-        take also calls from parametrize ignoring scope, indirect, and other
-        arguments..."""
+    def _parse_parametrize_args(argnames, argvalues, *args, **kwargs):
         if not isinstance(argnames, (tuple, list)):
             argnames = [x.strip() for x in argnames.split(",") if x.strip()]
         force_tuple = len(argnames) == 1
@@ -323,7 +323,7 @@ class Collector(Node):

         # Respect explicit tbstyle option, but default to "short"
         # (None._repr_failure_py defaults to "long" without "fulltrace" option).
-        tbstyle = self.config.getoption("tbstyle")
+        tbstyle = self.config.getoption("tbstyle", "auto")
         if tbstyle == "auto":
             tbstyle = "short"
@@ -73,7 +73,7 @@ def exit(msg, returncode=None):
 exit.Exception = Exit


-def skip(msg="", **kwargs):
+def skip(msg="", *, allow_module_level=False):
     """
     Skip an executing test with the given message.

@@ -93,9 +93,6 @@ def skip(msg="", **kwargs):
     to skip a doctest statically.
     """
     __tracebackhide__ = True
-    allow_module_level = kwargs.pop("allow_module_level", False)
-    if kwargs:
-        raise TypeError("unexpected keyword arguments: {}".format(sorted(kwargs)))
     raise Skipped(msg=msg, allow_module_level=allow_module_level)
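This diff repeatedly replaces `**kwargs` plus a manual `kwargs.pop()` / `TypeError` dance with Python 3 keyword-only parameters (the bare `*` in the signature), which gives the same validation for free. A small sketch of the pattern (the function is a stand-in, not pytest's actual `skip`):

```python
def skip_sketch(msg="", *, allow_module_level=False):
    # Everything after the bare * must be passed by keyword; misuse now
    # fails at call time instead of via a hand-written kwargs check.
    return msg, allow_module_level

print(skip_sketch("win32 only", allow_module_level=True))

try:
    skip_sketch("win32 only", True)  # positional use is rejected
except TypeError as err:
    print("rejected:", err)
```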
@@ -117,7 +114,7 @@ def fail(msg="", pytrace=True):
 fail.Exception = Failed


-class XFailed(fail.Exception):
+class XFailed(Failed):
     """ raised from an explicit call to pytest.xfail() """


@@ -152,7 +149,6 @@ def importorskip(modname, minversion=None, reason=None):

     __tracebackhide__ = True
     compile(modname, "", "eval")  # to catch syntaxerrors
-    import_exc = None

     with warnings.catch_warnings():
         # make sure to ignore ImportWarnings that might happen because

@@ -162,12 +158,9 @@ def importorskip(modname, minversion=None, reason=None):
         try:
             __import__(modname)
         except ImportError as exc:
-            # Do not raise chained exception here(#1485)
-            import_exc = exc
-    if import_exc:
-        if reason is None:
-            reason = "could not import {!r}: {}".format(modname, import_exc)
-        raise Skipped(reason, allow_module_level=True)
+            if reason is None:
+                reason = "could not import {!r}: {}".format(modname, exc)
+            raise Skipped(reason, allow_module_level=True) from None
     mod = sys.modules[modname]
     if minversion is None:
         return mod
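`raise ... from None` (PEP 409) sets `__suppress_context__`, so the original ImportError no longer leaks into the traceback as "During handling of the above exception, another exception occurred" — which is why the old `import_exc` bookkeeping (issue #1485) could be dropped. A sketch of the same pattern, using `RuntimeError` in place of pytest's `Skipped`:

```python
import sys

def import_or_fail(modname):
    # Mirrors the new importorskip error path: re-raise without
    # exception chaining to keep the traceback short.
    try:
        __import__(modname)
    except ImportError as exc:
        raise RuntimeError(
            "could not import {!r}: {}".format(modname, exc)
        ) from None
    return sys.modules[modname]

try:
    import_or_fail("surely_not_installed_xyz")
except RuntimeError as err:
    # "from None" suppresses the implicit context chain.
    print(err.__cause__, err.__suppress_context__)  # None True
```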
@@ -134,9 +134,7 @@ def create_cleanup_lock(p):
         raise
     else:
         pid = os.getpid()
-        spid = str(pid)
-        if not isinstance(spid, bytes):
-            spid = spid.encode("ascii")
+        spid = str(pid).encode()
         os.write(fd, spid)
         os.close(fd)
         if not lock_path.is_file():

@@ -296,6 +294,8 @@ def fnmatch_ex(pattern, path):
         name = path.name
     else:
         name = str(path)
+        if path.is_absolute() and not os.path.isabs(pattern):
+            pattern = "*{}{}".format(os.sep, pattern)
     return fnmatch.fnmatch(name, pattern)
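The two added lines let a relative pattern match an absolute path by prefixing it with `*<sep>`. A simplified, self-contained sketch of the matcher (details of the real `fnmatch_ex` may differ):

```python
import fnmatch
import os
from pathlib import PurePath

def fnmatch_ex(pattern, path):
    # If the pattern has no separator, match against the basename only;
    # otherwise match the full path, anchoring relative patterns with
    # "*<sep>" so they can match absolute paths by suffix.
    path = PurePath(path)
    if os.sep not in pattern:
        name = path.name
    else:
        name = str(path)
        if path.is_absolute() and not os.path.isabs(pattern):
            pattern = "*{}{}".format(os.sep, pattern)
    return fnmatch.fnmatch(name, pattern)

print(fnmatch_ex("test_*.py", "sub/test_a.py"))
print(fnmatch_ex(os.path.join("sub", "test_*.py"),
                 os.path.join(os.getcwd(), "sub", "test_a.py")))
```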
@@ -1,5 +1,6 @@
 """(disabled by default) support for testing pytest and pytest plugins."""
 import gc
+import importlib
 import os
 import platform
 import re

@@ -16,11 +17,9 @@ import py
 import pytest
 from _pytest._code import Source
 from _pytest._io.saferepr import saferepr
-from _pytest.assertion.rewrite import AssertionRewritingHook
 from _pytest.capture import MultiCapture
 from _pytest.capture import SysCapture
-from _pytest.main import EXIT_INTERRUPTED
-from _pytest.main import EXIT_OK
+from _pytest.main import ExitCode
 from _pytest.main import Session
 from _pytest.monkeypatch import MonkeyPatch
 from _pytest.pathlib import Path

@@ -68,14 +67,6 @@ def pytest_configure(config):
     )


-def raise_on_kwargs(kwargs):
-    __tracebackhide__ = True
-    if kwargs:  # pragma: no branch
-        raise TypeError(
-            "Unexpected keyword arguments: {}".format(", ".join(sorted(kwargs)))
-        )
-
-
 class LsofFdLeakChecker:
     def get_open_files(self):
         out = self._exec_lsof()

@@ -699,7 +690,7 @@ class Testdir:
         p = py.path.local(arg)
         config.hook.pytest_sessionstart(session=session)
         res = session.perform_collect([str(p)], genitems=False)[0]
-        config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK)
+        config.hook.pytest_sessionfinish(session=session, exitstatus=ExitCode.OK)
         return res

     def getpathnode(self, path):

@@ -716,7 +707,7 @@ class Testdir:
         x = session.fspath.bestrelpath(path)
         config.hook.pytest_sessionstart(session=session)
         res = session.perform_collect([x], genitems=False)[0]
-        config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK)
+        config.hook.pytest_sessionfinish(session=session, exitstatus=ExitCode.OK)
         return res

     def genitems(self, colitems):
@@ -778,7 +769,7 @@ class Testdir:
         items = [x.item for x in rec.getcalls("pytest_itemcollected")]
         return items, rec

-    def inline_run(self, *args, **kwargs):
+    def inline_run(self, *args, plugins=(), no_reraise_ctrlc=False):
         """Run ``pytest.main()`` in-process, returning a HookRecorder.

         Runs the :py:func:`pytest.main` function to run all of pytest inside

@@ -789,15 +780,19 @@ class Testdir:

         :param args: command line arguments to pass to :py:func:`pytest.main`

-        :param plugins: (keyword-only) extra plugin instances the
-            ``pytest.main()`` instance should use
+        :kwarg plugins: extra plugin instances the ``pytest.main()`` instance should use.
+
+        :kwarg no_reraise_ctrlc: typically we reraise keyboard interrupts from the child run. If
+            True, the KeyboardInterrupt exception is captured.

         :return: a :py:class:`HookRecorder` instance
         """
-        plugins = kwargs.pop("plugins", [])
-        no_reraise_ctrlc = kwargs.pop("no_reraise_ctrlc", None)
-        raise_on_kwargs(kwargs)
+        # (maybe a cpython bug?) the importlib cache sometimes isn't updated
+        # properly between file creation and inline_run (especially if imports
+        # are interspersed with file creation)
+        importlib.invalidate_caches()
+
         plugins = list(plugins)
         finalizers = []
         try:
             # Do not load user config (during runs only).

@@ -806,18 +801,6 @@ class Testdir:
             mp_run.setenv(k, v)
         finalizers.append(mp_run.undo)

-        # When running pytest inline any plugins active in the main test
-        # process are already imported. So this disables the warning which
-        # will trigger to say they can no longer be rewritten, which is
-        # fine as they have already been rewritten.
-        orig_warn = AssertionRewritingHook._warn_already_imported
-
-        def revert_warn_already_imported():
-            AssertionRewritingHook._warn_already_imported = orig_warn
-
-        finalizers.append(revert_warn_already_imported)
-        AssertionRewritingHook._warn_already_imported = lambda *a: None

         # Any sys.module or sys.path changes done while running pytest
         # inline should be reverted after the test run completes to avoid
         # clashing with later inline tests run within the same pytest test,

@@ -850,7 +833,7 @@ class Testdir:

         # typically we reraise keyboard interrupts from the child run
         # because it's our user requesting interruption of the testing
-        if ret == EXIT_INTERRUPTED and not no_reraise_ctrlc:
+        if ret == ExitCode.INTERRUPTED and not no_reraise_ctrlc:
             calls = reprec.getcalls("pytest_keyboard_interrupt")
             if calls and calls[-1].excinfo.type == KeyboardInterrupt:
                 raise KeyboardInterrupt()
@@ -1059,15 +1042,15 @@ class Testdir:

         return popen

-    def run(self, *cmdargs, **kwargs):
+    def run(self, *cmdargs, timeout=None, stdin=CLOSE_STDIN):
         """Run a command with arguments.

         Run a process using subprocess.Popen saving the stdout and stderr.

         :param args: the sequence of arguments to pass to `subprocess.Popen()`
-        :param timeout: the period in seconds after which to timeout and raise
+        :kwarg timeout: the period in seconds after which to timeout and raise
             :py:class:`Testdir.TimeoutExpired`
-        :param stdin: optional standard input. Bytes are being send, closing
+        :kwarg stdin: optional standard input. Bytes are being send, closing
             the pipe, otherwise it is passed through to ``popen``.
             Defaults to ``CLOSE_STDIN``, which translates to using a pipe
             (``subprocess.PIPE``) that gets closed.

@@ -1077,10 +1060,6 @@ class Testdir:
         """
         __tracebackhide__ = True

-        timeout = kwargs.pop("timeout", None)
-        stdin = kwargs.pop("stdin", Testdir.CLOSE_STDIN)
-        raise_on_kwargs(kwargs)
-
         cmdargs = [
             str(arg) if isinstance(arg, py.path.local) else arg for arg in cmdargs
         ]

@@ -1158,7 +1137,7 @@ class Testdir:
         """Run python -c "command", return a :py:class:`RunResult`."""
         return self.run(sys.executable, "-c", command)

-    def runpytest_subprocess(self, *args, **kwargs):
+    def runpytest_subprocess(self, *args, timeout=None):
         """Run pytest as a subprocess with given arguments.

         Any plugins added to the :py:attr:`plugins` list will be added using the

@@ -1174,9 +1153,6 @@ class Testdir:
         Returns a :py:class:`RunResult`.
         """
         __tracebackhide__ = True
-        timeout = kwargs.pop("timeout", None)
-        raise_on_kwargs(kwargs)
-
         p = py.path.local.make_numbered_dir(
             prefix="runpytest-", keep=None, rootdir=self.tmpdir
         )
@@ -23,8 +23,6 @@ from _pytest.compat import getfslineno
 from _pytest.compat import getimfunc
 from _pytest.compat import getlocation
 from _pytest.compat import is_generator
-from _pytest.compat import isclass
-from _pytest.compat import isfunction
 from _pytest.compat import NOTSET
 from _pytest.compat import REGEX_TYPE
 from _pytest.compat import safe_getattr

@@ -207,7 +205,7 @@ def pytest_pycollect_makeitem(collector, name, obj):
         # We need to try and unwrap the function if it's a functools.partial
         # or a funtools.wrapped.
         # We musn't if it's been wrapped with mock.patch (python 2 only)
-        if not (isfunction(obj) or isfunction(get_real_func(obj))):
+        if not (inspect.isfunction(obj) or inspect.isfunction(get_real_func(obj))):
             filename, lineno = getfslineno(obj)
             warnings.warn_explicit(
                 message=PytestCollectionWarning(

@@ -1172,8 +1170,6 @@ def _idval(val, argname, idx, idfn, item, config):
             # See issue https://github.com/pytest-dev/pytest/issues/2169
             msg = "{}: error raised while trying to determine id of parameter '{}' at position {}\n"
             msg = msg.format(item.nodeid, argname, idx)
-            # we only append the exception type and message because on Python 2 reraise does nothing
-            msg += "  {}: {}\n".format(type(e).__name__, e)
             raise ValueError(msg) from e
     elif config:
         hook_id = config.hook.pytest_make_parametrize_id(

@@ -1190,7 +1186,7 @@ def _idval(val, argname, idx, idfn, item, config):
         return ascii_escaped(val.pattern)
     elif enum is not None and isinstance(val, enum.Enum):
         return str(val)
-    elif (isclass(val) or isfunction(val)) and hasattr(val, "__name__"):
+    elif (inspect.isclass(val) or inspect.isfunction(val)) and hasattr(val, "__name__"):
         return val.__name__
     return str(argname) + str(idx)
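`_idval` turns parametrize values into test ids; for enum members it relies on `str()`, which for a plain `enum.Enum` includes the class name and therefore yields readable ids. For illustration (the `Color` enum is ours):

```python
import enum

class Color(enum.Enum):
    RED = 0
    GREEN = 1

# str() on a plain Enum member renders "ClassName.MEMBER", which is what
# ends up in the generated test id, e.g. test_paint[Color.RED].
print(str(Color.RED))    # Color.RED
print(Color.GREEN.name)  # GREEN
```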
@@ -1,3 +1,4 @@
+import inspect
 import math
 import pprint
 import sys

@@ -13,27 +14,12 @@ from more_itertools.more import always_iterable

 import _pytest._code
 from _pytest import deprecated
-from _pytest.compat import isclass
 from _pytest.compat import STRING_TYPES
 from _pytest.outcomes import fail

 BASE_TYPE = (type, STRING_TYPES)


-def _cmp_raises_type_error(self, other):
-    """__cmp__ implementation which raises TypeError. Used
-    by Approx base classes to implement only == and != and raise a
-    TypeError for other comparisons.
-
-    Needed in Python 2 only, Python 3 all it takes is not implementing the
-    other operators at all.
-    """
-    __tracebackhide__ = True
-    raise TypeError(
-        "Comparison operators other than == and != not supported by approx objects"
-    )
-
-
 def _non_numeric_type_error(value, at):
     at_str = " at {}".format(at) if at else ""
     return TypeError(

@@ -658,7 +644,9 @@ def raises(expected_exception, *args, **kwargs):

     """
     __tracebackhide__ = True
-    for exc in filterfalse(isclass, always_iterable(expected_exception, BASE_TYPE)):
+    for exc in filterfalse(
+        inspect.isclass, always_iterable(expected_exception, BASE_TYPE)
+    ):
         msg = (
             "exceptions must be old-style classes or"
             " derived from BaseException, not %s"
@@ -28,6 +28,7 @@ class StepwisePlugin:
         self.config = config
         self.active = config.getvalue("stepwise")
         self.session = None
+        self.report_status = ""

         if self.active:
             self.lastfailed = config.cache.get("cache/stepwise", None)

@@ -69,12 +70,6 @@ class StepwisePlugin:

         config.hook.pytest_deselected(items=already_passed)

-    def pytest_collectreport(self, report):
-        if self.active and report.failed:
-            self.session.shouldstop = (
-                "Error when collecting test, stopping test execution."
-            )
-
     def pytest_runtest_logreport(self, report):
         # Skip this hook if plugin is not active or the test is xfailed.
         if not self.active or "xfail" in report.keywords:

@@ -103,7 +98,7 @@ class StepwisePlugin:
             self.lastfailed = None

     def pytest_report_collectionfinish(self):
-        if self.active and self.config.getoption("verbose") >= 0:
+        if self.active and self.config.getoption("verbose") >= 0 and self.report_status:
             return "stepwise: %s" % self.report_status

     def pytest_sessionfinish(self, session):
@@ -16,11 +16,7 @@ from more_itertools import collapse

 import pytest
 from _pytest import nodes
-from _pytest.main import EXIT_INTERRUPTED
-from _pytest.main import EXIT_NOTESTSCOLLECTED
-from _pytest.main import EXIT_OK
-from _pytest.main import EXIT_TESTSFAILED
-from _pytest.main import EXIT_USAGEERROR
+from _pytest.main import ExitCode

 REPORT_COLLECTING_RESOLUTION = 0.5


@@ -80,8 +76,7 @@ def pytest_addoption(parser):
         help="show extra test summary info as specified by chars: (f)ailed, "
         "(E)rror, (s)kipped, (x)failed, (X)passed, "
         "(p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. "
-        "Warnings are displayed at all times except when "
-        "--disable-warnings is set.",
+        "(w)arnings are enabled by default (see --disable-warnings).",
     )
     group._addoption(
         "--disable-warnings",

@@ -654,17 +649,17 @@ class TerminalReporter:
             outcome.get_result()
             self._tw.line("")
         summary_exit_codes = (
-            EXIT_OK,
-            EXIT_TESTSFAILED,
-            EXIT_INTERRUPTED,
-            EXIT_USAGEERROR,
-            EXIT_NOTESTSCOLLECTED,
+            ExitCode.OK,
+            ExitCode.TESTS_FAILED,
+            ExitCode.INTERRUPTED,
+            ExitCode.USAGE_ERROR,
+            ExitCode.NO_TESTS_COLLECTED,
         )
         if exitstatus in summary_exit_codes:
             self.config.hook.pytest_terminal_summary(
                 terminalreporter=self, exitstatus=exitstatus, config=self.config
             )
-        if exitstatus == EXIT_INTERRUPTED:
+        if exitstatus == ExitCode.INTERRUPTED:
             self._report_keyboardinterrupt()
             del self._keyboardinterrupt_memo
         self.summary_stats()
@@ -108,11 +108,13 @@ class TestCaseFunction(Function):

     def setup(self):
         self._testcase = self.parent.obj(self.name)
+        self._obj = getattr(self._testcase, self.name)
         if hasattr(self, "_request"):
             self._request._fillfixtures()

     def teardown(self):
         self._testcase = None
+        self._obj = None

     def startTest(self, testcase):
         pass
@@ -8,6 +8,8 @@ class PytestWarning(UserWarning):
     Base class for all warnings emitted by pytest.
     """

+    __module__ = "pytest"
+

 class PytestAssertRewriteWarning(PytestWarning):
     """

@@ -16,6 +18,8 @@ class PytestAssertRewriteWarning(PytestWarning):
     Warning emitted by the pytest assert rewrite module.
     """

+    __module__ = "pytest"
+

 class PytestCacheWarning(PytestWarning):
     """

@@ -24,6 +28,8 @@ class PytestCacheWarning(PytestWarning):
     Warning emitted by the cache plugin in various situations.
     """

+    __module__ = "pytest"
+

 class PytestConfigWarning(PytestWarning):
     """

@@ -32,6 +38,8 @@ class PytestConfigWarning(PytestWarning):
     Warning emitted for configuration issues.
     """

+    __module__ = "pytest"
+

 class PytestCollectionWarning(PytestWarning):
     """

@@ -40,6 +48,8 @@ class PytestCollectionWarning(PytestWarning):
     Warning emitted when pytest is not able to collect a file or symbol in a module.
     """

+    __module__ = "pytest"
+

 class PytestDeprecationWarning(PytestWarning, DeprecationWarning):
     """

@@ -48,6 +58,8 @@ class PytestDeprecationWarning(PytestWarning, DeprecationWarning):
     Warning class for features that will be removed in a future version.
     """

+    __module__ = "pytest"
+

 class PytestExperimentalApiWarning(PytestWarning, FutureWarning):
     """

@@ -57,6 +69,8 @@ class PytestExperimentalApiWarning(PytestWarning, FutureWarning):
     removed completely in future version
     """

+    __module__ = "pytest"
+
     @classmethod
     def simple(cls, apiname):
         return cls(

@@ -75,6 +89,8 @@ class PytestUnhandledCoroutineWarning(PytestWarning):
     are not natively supported.
     """

+    __module__ = "pytest"
+

 class PytestUnknownMarkWarning(PytestWarning):
     """

@@ -84,6 +100,8 @@ class PytestUnknownMarkWarning(PytestWarning):
     See https://docs.pytest.org/en/latest/mark.html for details.
     """

+    __module__ = "pytest"
+

 class RemovedInPytest4Warning(PytestDeprecationWarning):
     """

@@ -92,6 +110,8 @@ class RemovedInPytest4Warning(PytestDeprecationWarning):
     Warning class for features scheduled to be removed in pytest 4.0.
     """

+    __module__ = "pytest"
+

 @attr.s
 class UnformattedWarning:
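Setting `__module__ = "pytest"` on each warning class makes it display under its public import location (`pytest.PytestWarning`) rather than the private module it is defined in. A toy illustration with a stand-in class:

```python
class DemoWarning(UserWarning):
    """A stand-in for the pytest warning classes above."""

    # Without this override, the class would report the module it was
    # actually defined in (e.g. "__main__" or "_pytest.warning_types").
    __module__ = "pytest"

print("{}.{}".format(DemoWarning.__module__, DemoWarning.__qualname__))  # pytest.DemoWarning
```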
@@ -75,6 +75,7 @@ def catch_warnings_for_item(config, ihook, when, item):
         warnings.filterwarnings("always", category=PendingDeprecationWarning)

         warnings.filterwarnings("error", category=pytest.RemovedInPytest4Warning)
+        warnings.filterwarnings("error", category=pytest.PytestDeprecationWarning)

         # filters should have this precedence: mark, cmdline options, ini
         # filters should be applied in the inverse order of precedence
@@ -2,7 +2,6 @@
 """
 pytest: unit and functional testing with Python.
 """
-# else we are imported
 from _pytest import __version__
 from _pytest.assertion import register_assert_rewrite
 from _pytest.config import cmdline

@@ -15,6 +14,7 @@ from _pytest.fixtures import fillfixtures as _fillfuncargs
 from _pytest.fixtures import fixture
 from _pytest.fixtures import yield_fixture
 from _pytest.freeze_support import freeze_includes
+from _pytest.main import ExitCode
 from _pytest.main import Session
 from _pytest.mark import MARK_GEN as mark
 from _pytest.mark import param

@@ -57,6 +57,7 @@ __all__ = [
     "Collector",
     "deprecated_call",
     "exit",
+    "ExitCode",
     "fail",
     "File",
     "fixture",
@@ -8,8 +8,7 @@ import importlib_metadata
 import py

 import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
-from _pytest.main import EXIT_USAGEERROR
+from _pytest.main import ExitCode
 from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG


@@ -24,7 +23,7 @@ class TestGeneralUsage:
     def test_config_error(self, testdir):
         testdir.copy_example("conftest_usageerror/conftest.py")
         result = testdir.runpytest(testdir.tmpdir)
-        assert result.ret == EXIT_USAGEERROR
+        assert result.ret == ExitCode.USAGE_ERROR
         result.stderr.fnmatch_lines(["*ERROR: hello"])
         result.stdout.fnmatch_lines(["*pytest_unconfigure_called"])

@@ -83,7 +82,7 @@ class TestGeneralUsage:
             """
         )
         result = testdir.runpytest("-s", "asd")
-        assert result.ret == 4  # EXIT_USAGEERROR
+        assert result.ret == ExitCode.USAGE_ERROR
         result.stderr.fnmatch_lines(["ERROR: file not found*asd"])
         result.stdout.fnmatch_lines(["*---configure", "*---unconfigure"])

@@ -229,7 +228,7 @@ class TestGeneralUsage:
             """
         )
         result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
         result.stdout.fnmatch_lines(["*1 skip*"])

     def test_issue88_initial_file_multinodes(self, testdir):

@@ -247,7 +246,7 @@ class TestGeneralUsage:
             """
         )
         result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
         assert "should not be seen" not in result.stdout.str()
         assert "stderr42" not in result.stderr.str()

@@ -290,13 +289,13 @@ class TestGeneralUsage:
         sub2 = testdir.mkdir("sub2")
         sub1.join("conftest.py").write("assert 0")
         result = testdir.runpytest(sub2)
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
         sub2.ensure("__init__.py")
         p = sub2.ensure("test_hello.py")
         result = testdir.runpytest(p)
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
         result = testdir.runpytest(sub1)
-        assert result.ret == EXIT_USAGEERROR
+        assert result.ret == ExitCode.USAGE_ERROR

     def test_directory_skipped(self, testdir):
         testdir.makeconftest(

@@ -308,7 +307,7 @@ class TestGeneralUsage:
         )
         testdir.makepyfile("def test_hello(): pass")
         result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
         result.stdout.fnmatch_lines(["*1 skipped*"])

     def test_multiple_items_per_collector_byid(self, testdir):
@@ -410,10 +409,10 @@ class TestGeneralUsage:
     def test_report_all_failed_collections_initargs(self, testdir):
         testdir.makeconftest(
             """
-            from _pytest.main import EXIT_USAGEERROR
+            from _pytest.main import ExitCode

             def pytest_sessionfinish(exitstatus):
-                assert exitstatus == EXIT_USAGEERROR
+                assert exitstatus == ExitCode.USAGE_ERROR
                 print("pytest_sessionfinish_called")
             """
         )

@@ -421,7 +420,7 @@ class TestGeneralUsage:
         result = testdir.runpytest("test_a.py::a", "test_b.py::b")
         result.stderr.fnmatch_lines(["*ERROR*test_a.py::a*", "*ERROR*test_b.py::b*"])
         result.stdout.fnmatch_lines(["pytest_sessionfinish_called"])
-        assert result.ret == EXIT_USAGEERROR
+        assert result.ret == ExitCode.USAGE_ERROR

     @pytest.mark.usefixtures("recwarn")
     def test_namespace_import_doesnt_confuse_import_hook(self, testdir):

@@ -510,7 +509,7 @@ class TestGeneralUsage:
             """\
             import pytest

-            @pytest.mark.parametrize("data", [b"\\x00", "\\x00", u'ação'])
+            @pytest.mark.parametrize("data", [b"\\x00", "\\x00", 'ação'])
             def test_foo(data):
                 assert data
             """

@@ -612,7 +611,7 @@ class TestInvocationVariants:

     def test_invoke_with_path(self, tmpdir, capsys):
         retcode = pytest.main(tmpdir)
-        assert retcode == EXIT_NOTESTSCOLLECTED
+        assert retcode == ExitCode.NO_TESTS_COLLECTED
         out, err = capsys.readouterr()

     def test_invoke_plugin_api(self, testdir, capsys):

@@ -634,6 +633,25 @@ class TestInvocationVariants:

         result.stdout.fnmatch_lines(["collected*0*items*/*1*errors"])

+    def test_pyargs_only_imported_once(self, testdir):
+        pkg = testdir.mkpydir("foo")
+        pkg.join("test_foo.py").write("print('hello from test_foo')\ndef test(): pass")
+        pkg.join("conftest.py").write(
+            "def pytest_configure(config): print('configuring')"
+        )
+
+        result = testdir.runpytest("--pyargs", "foo.test_foo", "-s", syspathinsert=True)
+        # should only import once
+        assert result.outlines.count("hello from test_foo") == 1
+        # should only configure once
+        assert result.outlines.count("configuring") == 1
+
+    def test_pyargs_filename_looks_like_module(self, testdir):
+        testdir.tmpdir.join("conftest.py").ensure()
+        testdir.tmpdir.join("t.py").write("def test(): pass")
+        result = testdir.runpytest("--pyargs", "t.py")
+        assert result.ret == ExitCode.OK
+
     def test_cmdline_python_package(self, testdir, monkeypatch):
         import warnings

@@ -998,16 +1016,8 @@ def test_zipimport_hook(testdir, tmpdir):

 def test_import_plugin_unicode_name(testdir):
     testdir.makepyfile(myplugin="")
-    testdir.makepyfile(
-        """
-        def test(): pass
-        """
-    )
-    testdir.makeconftest(
-        """
-        pytest_plugins = [u'myplugin']
-        """
-    )
+    testdir.makepyfile("def test(): pass")
+    testdir.makeconftest("pytest_plugins = ['myplugin']")
     r = testdir.runpytest()
     assert r.ret == 0

@@ -1097,7 +1107,10 @@ def test_fixture_values_leak(testdir):
         assert fix_of_test1_ref() is None
         """
     )
-    result = testdir.runpytest()
+    # Running on subprocess does not activate the HookRecorder
+    # which holds itself a reference to objects in case of the
+    # pytest_assert_reprcompare hook
+    result = testdir.runpytest_subprocess()
     result.stdout.fnmatch_lines(["* 2 passed *"])


@@ -1168,7 +1181,7 @@ def test_fixture_mock_integration(testdir):

 def test_usage_error_code(testdir):
     result = testdir.runpytest("-unknown-option-")
-    assert result.ret == EXIT_USAGEERROR
+    assert result.ret == ExitCode.USAGE_ERROR


 @pytest.mark.filterwarnings("default")
@@ -333,18 +333,10 @@ def test_excinfo_exconly():
     assert msg.endswith("world")


-def test_excinfo_repr():
+def test_excinfo_repr_str():
     excinfo = pytest.raises(ValueError, h)
-    s = repr(excinfo)
-    assert s == "<ExceptionInfo ValueError tblen=4>"
-
-
-def test_excinfo_str():
-    excinfo = pytest.raises(ValueError, h)
-    s = str(excinfo)
-    assert s.startswith(__file__[:-9])  # pyc file and $py.class
-    assert s.endswith("ValueError")
-    assert len(s.split(":")) >= 3  # on windows it's 4
+    assert repr(excinfo) == "<ExceptionInfo ValueError tblen=4>"
+    assert str(excinfo) == "<ExceptionInfo ValueError tblen=4>"


 def test_excinfo_for_later():
@@ -28,7 +28,7 @@ def test_source_str_function():
 def test_unicode():
     x = Source("4")
     assert str(x) == "4"
-    co = _pytest._code.compile('u"å"', mode="eval")
+    co = _pytest._code.compile('"å"', mode="eval")
     val = eval(co)
     assert isinstance(val, str)
@@ -1,5 +1,20 @@
+import sys
+
 import pytest

+if sys.gettrace():
+
+    @pytest.fixture(autouse=True)
+    def restore_tracing():
+        """Restore tracing function (when run with Coverage.py).
+
+        https://bugs.python.org/issue37011
+        """
+        orig_trace = sys.gettrace()
+        yield
+        if sys.gettrace() != orig_trace:
+            sys.settrace(orig_trace)
+

 @pytest.hookimpl(hookwrapper=True, tryfirst=True)
 def pytest_collection_modifyitems(config, items):

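The `restore_tracing` fixture added above snapshots the interpreter's trace function around each test and reinstates it if a test clobbered it (the Coverage.py interaction tracked in bpo-37011). The same save-and-restore logic in plain function form, as a sketch:

```python
import sys


def run_with_trace_restored(fn):
    """Sketch of the restore_tracing fixture's behavior: snapshot the trace
    function, run the callable, and put the original back if it changed."""
    orig_trace = sys.gettrace()
    try:
        return fn()
    finally:
        if sys.gettrace() != orig_trace:
            sys.settrace(orig_trace)


def misbehaving_test():
    # Simulates a test that disables tracing, which would otherwise
    # silently stop coverage measurement for the rest of the run.
    sys.settrace(None)


before = sys.gettrace()
run_with_trace_restored(misbehaving_test)
assert sys.gettrace() == before  # tracing state was restored
```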
@@ -1,6 +1,7 @@
 import os

 import pytest
 from _pytest import deprecated
+from _pytest.warning_types import PytestDeprecationWarning
 from _pytest.warnings import SHOW_PYTEST_WARNINGS_ARG

@@ -49,7 +50,7 @@ def test_resultlog_is_deprecated(testdir):
     result = testdir.runpytest("--result-log=%s" % testdir.tmpdir.join("result.log"))
     result.stdout.fnmatch_lines(
         [
-            "*--result-log is deprecated and scheduled for removal in pytest 5.0*",
+            "*--result-log is deprecated and scheduled for removal in pytest 6.0*",
             "*See https://docs.pytest.org/en/latest/deprecations.html#result-log-result-log for more information*",
         ]
     )

@@ -69,22 +70,14 @@ def test_terminal_reporter_writer_attr(pytestconfig):
     assert terminal_reporter.writer is terminal_reporter._tw


-@pytest.mark.parametrize("plugin", ["catchlog", "capturelog"])
+@pytest.mark.parametrize("plugin", deprecated.DEPRECATED_EXTERNAL_PLUGINS)
 @pytest.mark.filterwarnings("default")
-def test_pytest_catchlog_deprecated(testdir, plugin):
-    testdir.makepyfile(
-        """
-        def test_func(pytestconfig):
-            pytestconfig.pluginmanager.register(None, 'pytest_{}')
-        """.format(
-            plugin
-        )
-    )
-    res = testdir.runpytest()
-    assert res.ret == 0
-    res.stdout.fnmatch_lines(
-        ["*pytest-*log plugin has been merged into the core*", "*1 passed, 1 warnings*"]
-    )
+def test_external_plugins_integrated(testdir, plugin):
+    testdir.syspathinsert()
+    testdir.makepyfile(**{plugin: ""})
+
+    with pytest.warns(pytest.PytestConfigWarning):
+        testdir.parseconfig("-p", plugin)


 def test_raises_message_argument_deprecated():

@@ -4,7 +4,7 @@ import pathlib
 HERE = pathlib.Path(__file__).parent
 TEST_CONTENT = (HERE / "template_test.py").read_bytes()

-parser = argparse.ArgumentParser(allow_abbrev=False)
+parser = argparse.ArgumentParser()
 parser.add_argument("numbers", nargs="*", type=int)

@@ -15,7 +15,7 @@ def test_maxsize():

 def test_maxsize_error_on_instance():
     class A:
-        def __repr__():
+        def __repr__(self):
             raise ValueError("...")

     s = saferepr(("*" * 50, A()), maxsize=25)

@@ -916,15 +916,46 @@ def test_collection_live_logging(testdir):

     result = testdir.runpytest("--log-cli-level=INFO")
     result.stdout.fnmatch_lines(
-        [
-            "collecting*",
-            "*--- live log collection ---*",
-            "*Normal message*",
-            "collected 0 items",
-        ]
+        ["*--- live log collection ---*", "*Normal message*", "collected 0 items"]
     )


+@pytest.mark.parametrize("verbose", ["", "-q", "-qq"])
+def test_collection_collect_only_live_logging(testdir, verbose):
+    testdir.makepyfile(
+        """
+        def test_simple():
+            pass
+        """
+    )
+
+    result = testdir.runpytest("--collect-only", "--log-cli-level=INFO", verbose)
+
+    expected_lines = []
+
+    if not verbose:
+        expected_lines.extend(
+            [
+                "*collected 1 item*",
+                "*<Module test_collection_collect_only_live_logging.py>*",
+                "*no tests ran*",
+            ]
+        )
+    elif verbose == "-q":
+        assert "collected 1 item*" not in result.stdout.str()
+        expected_lines.extend(
+            [
+                "*test_collection_collect_only_live_logging.py::test_simple*",
+                "no tests ran in * seconds",
+            ]
+        )
+    elif verbose == "-qq":
+        assert "collected 1 item*" not in result.stdout.str()
+        expected_lines.extend(["*test_collection_collect_only_live_logging.py: 1*"])
+
+    result.stdout.fnmatch_lines(expected_lines)
+
+
 def test_collection_logging_to_file(testdir):
     log_file = testdir.tmpdir.join("pytest.log").strpath

@@ -4,7 +4,7 @@ import textwrap

 import _pytest._code
 import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode
 from _pytest.nodes import Collector

@@ -116,7 +116,7 @@ class TestModule:
         """Check test modules collected which raise ImportError with unicode messages
         are handled properly (#2336).
         """
-        testdir.makepyfile("raise ImportError(u'Something bad happened ☺')")
+        testdir.makepyfile("raise ImportError('Something bad happened ☺')")
         result = testdir.runpytest()
         result.stdout.fnmatch_lines(
             [

@@ -246,7 +246,7 @@ class TestClass:
             """
         )
         result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED


 class TestFunction:

@@ -1140,7 +1140,7 @@ def test_unorderable_types(testdir):
     )
     result = testdir.runpytest()
    assert "TypeError" not in result.stdout.str()
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


 def test_collect_functools_partial(testdir):

@@ -26,10 +26,10 @@ def test_getfuncargnames():

     assert fixtures.getfuncargnames(h) == ("arg1",)

-    def h(arg1, arg2, arg3="hello"):
+    def j(arg1, arg2, arg3="hello"):
         pass

-    assert fixtures.getfuncargnames(h) == ("arg1", "arg2")
+    assert fixtures.getfuncargnames(j) == ("arg1", "arg2")

     class A:
         def f(self, arg1, arg2="hello"):

@@ -793,12 +793,15 @@ class TestRequestBasic:
             """
             import pytest
             def pytest_generate_tests(metafunc):
-                assert metafunc.funcargnames == metafunc.fixturenames
+                with pytest.warns(pytest.PytestDeprecationWarning):
+                    assert metafunc.funcargnames == metafunc.fixturenames
             @pytest.fixture
             def fn(request):
-                assert request._pyfuncitem.funcargnames == \
-                       request._pyfuncitem.fixturenames
-                return request.funcargnames, request.fixturenames
+                with pytest.warns(pytest.PytestDeprecationWarning):
+                    assert request._pyfuncitem.funcargnames == \
+                           request._pyfuncitem.fixturenames
+                with pytest.warns(pytest.PytestDeprecationWarning):
+                    return request.funcargnames, request.fixturenames

             def test_hello(fn):
                 assert fn[0] == fn[1]

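The hunk above wraps each access to the deprecated `funcargnames` alias in `pytest.warns(...)`, asserting that a deprecation warning is actually emitted. What that context manager checks can be sketched with the stdlib alone (the shim function below is a hypothetical stand-in, not pytest code):

```python
import warnings


def funcargnames_shim():
    """Hypothetical stand-in for a deprecated alias: warn, then return the
    value of its modern replacement."""
    warnings.warn(
        "funcargnames is deprecated, use fixturenames", DeprecationWarning
    )
    return ["request"]


# Roughly what pytest.warns(DeprecationWarning) does, in stdlib terms:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # make sure nothing is filtered out
    value = funcargnames_shim()

assert value == ["request"]
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```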
@@ -1138,7 +1141,6 @@ class TestFixtureUsages:
         values = reprec.getfailedcollections()
         assert len(values) == 1

-    @pytest.mark.filterwarnings("ignore::pytest.PytestDeprecationWarning")
     def test_request_can_be_overridden(self, testdir):
         testdir.makepyfile(
             """

@@ -1151,7 +1153,7 @@ class TestFixtureUsages:
                 assert request.a == 1
             """
         )
-        reprec = testdir.inline_run()
+        reprec = testdir.inline_run("-Wignore::pytest.PytestDeprecationWarning")
         reprec.assertoutcome(passed=1)

     def test_usefixtures_marker(self, testdir):

@@ -506,8 +506,8 @@ class TestMetafunc:
         result = testdir.runpytest()
         result.stdout.fnmatch_lines(
             [
-                "*test_foo: error raised while trying to determine id of parameter 'arg' at position 0",
                 "*Exception: bad ids",
+                "*test_foo: error raised while trying to determine id of parameter 'arg' at position 0",
             ]
         )

@@ -1205,7 +1205,7 @@ class TestMetafuncFunctional:
             import pytest
             values = []
             def pytest_generate_tests(metafunc):
-                if "arg" in metafunc.funcargnames:
+                if "arg" in metafunc.fixturenames:
                     metafunc.parametrize("arg", [1,2], indirect=True,
                                          scope=%r)
             @pytest.fixture

@@ -1761,3 +1761,16 @@ class TestMarkersWithParametrization:
         result.stdout.fnmatch_lines(
             ["*test_func_a*0*PASS*", "*test_func_a*2*PASS*", "*test_func_b*10*PASS*"]
         )
+
+    def test_parametrize_positional_args(self, testdir):
+        testdir.makepyfile(
+            """
+            import pytest
+
+            @pytest.mark.parametrize("a", [1], False)
+            def test_foo(a):
+                pass
+            """
+        )
+        result = testdir.runpytest()
+        result.assert_outcomes(passed=1)

@@ -202,6 +202,9 @@ class TestRaises:
         assert sys.exc_info() == (None, None, None)

+        del t
+        # Make sure this does get updated in locals dict
+        # otherwise it could keep a reference
+        locals()

         # ensure the t instance is not stuck in a cyclic reference
         for o in gc.get_objects():

@@ -235,7 +238,7 @@ class TestRaises:
             int("asdf")

     def test_raises_exception_looks_iterable(self):
-        class Meta(type(object)):
+        class Meta(type):
             def __getitem__(self, item):
                 return 1 / 0

@@ -137,8 +137,8 @@ class TestImportHookInstallation:
             "hamster.py": "",
             "test_foo.py": """\
                 def test_foo(pytestconfig):
-                    assert pytestconfig.pluginmanager.rewrite_hook.find_module('ham') is not None
-                    assert pytestconfig.pluginmanager.rewrite_hook.find_module('hamster') is None
+                    assert pytestconfig.pluginmanager.rewrite_hook.find_spec('ham') is not None
+                    assert pytestconfig.pluginmanager.rewrite_hook.find_spec('hamster') is None
                 """,
         }
         testdir.makepyfile(**contents)

@@ -331,6 +331,27 @@ class TestAssert_reprcompare:
         assert "- spam" in diff
         assert "+ eggs" in diff

+    def test_bytes_diff_normal(self):
+        """Check special handling for bytes diff (#5260)"""
+        diff = callequal(b"spam", b"eggs")
+
+        assert diff == [
+            "b'spam' == b'eggs'",
+            "At index 0 diff: b's' != b'e'",
+            "Use -v to get the full diff",
+        ]
+
+    def test_bytes_diff_verbose(self):
+        """Check special handling for bytes diff (#5260)"""
+        diff = callequal(b"spam", b"eggs", verbose=True)
+        assert diff == [
+            "b'spam' == b'eggs'",
+            "At index 0 diff: b's' != b'e'",
+            "Full diff:",
+            "- b'spam'",
+            "+ b'eggs'",
+        ]
+
     def test_list(self):
         expl = callequal([0, 1], [0, 2])
         assert len(expl) > 1

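The bytes-diff tests added above exist because indexing `bytes` yields integers, so a naive diff would report `115 != 101` instead of `b's' != b'e'` (#5260). A sketch of the one-byte-slice trick those expected messages rely on:

```python
def first_diff_index(left: bytes, right: bytes):
    """Sketch of the #5260 special-casing: find the first differing index
    and return one-element *slices* (which stay bytes) rather than the
    raw integers that plain indexing would give."""
    for i in range(min(len(left), len(right))):
        if left[i] != right[i]:
            return i, left[i : i + 1], right[i : i + 1]
    return None  # equal over the common prefix


idx, l, r = first_diff_index(b"spam", b"eggs")
assert (idx, l, r) == (0, b"s", b"e")
assert b"spam"[0] == 115  # plain indexing gives an int, hence the slicing
```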
@@ -1219,7 +1240,7 @@ def test_assert_with_unicode(monkeypatch, testdir):
    testdir.makepyfile(
        """\
        def test_unicode():
-            assert u'유니코드' == u'Unicode'
+            assert '유니코드' == 'Unicode'
        """
    )
    result = testdir.runpytest()

@@ -1,5 +1,6 @@
 import ast
 import glob
+import importlib
 import os
 import py_compile
 import stat

@@ -12,10 +13,11 @@ import py
 import _pytest._code
 import pytest
 from _pytest.assertion import util
+from _pytest.assertion.rewrite import _get_assertion_exprs
 from _pytest.assertion.rewrite import AssertionRewritingHook
 from _pytest.assertion.rewrite import PYTEST_TAG
 from _pytest.assertion.rewrite import rewrite_asserts
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode


 def setup_module(mod):

@@ -30,7 +32,7 @@ def teardown_module(mod):

 def rewrite(src):
     tree = ast.parse(src)
-    rewrite_asserts(tree)
+    rewrite_asserts(tree, src.encode())
     return tree

@@ -117,6 +119,37 @@ class TestAssertionRewrite:
         result = testdir.runpytest_subprocess()
         assert "warnings" not in "".join(result.outlines)

+    def test_rewrites_plugin_as_a_package(self, testdir):
+        pkgdir = testdir.mkpydir("plugin")
+        pkgdir.join("__init__.py").write(
+            "import pytest\n"
+            "@pytest.fixture\n"
+            "def special_asserter():\n"
+            "    def special_assert(x, y):\n"
+            "        assert x == y\n"
+            "    return special_assert\n"
+        )
+        testdir.makeconftest('pytest_plugins = ["plugin"]')
+        testdir.makepyfile("def test(special_asserter): special_asserter(1, 2)\n")
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(["*assert 1 == 2*"])
+
+    def test_honors_pep_235(self, testdir, monkeypatch):
+        # note: couldn't make it fail on macos with a single `sys.path` entry
+        # note: these modules are named `test_*` to trigger rewriting
+        testdir.tmpdir.join("test_y.py").write("x = 1")
+        xdir = testdir.tmpdir.join("x").ensure_dir()
+        xdir.join("test_Y").ensure_dir().join("__init__.py").write("x = 2")
+        testdir.makepyfile(
+            "import test_y\n"
+            "import test_Y\n"
+            "def test():\n"
+            "    assert test_y.x == 1\n"
+            "    assert test_Y.x == 2\n"
+        )
+        monkeypatch.syspath_prepend(xdir)
+        testdir.runpytest().assert_outcomes(passed=1)
+
     def test_name(self, request):
         def f():
             assert False

@@ -635,12 +668,6 @@ class TestAssertionRewrite:
         else:
             assert lines == ["assert 0 == 1\n +  where 1 = \\n{ \\n~ \\n}.a"]

-    def test_unroll_expression(self):
-        def f():
-            assert all(x == 1 for x in range(10))
-
-        assert "0 == 1" in getmsg(f)
-
     def test_custom_repr_non_ascii(self):
         def f():
             class A:

@@ -656,78 +683,6 @@ class TestAssertionRewrite:
         assert "UnicodeDecodeError" not in msg
         assert "UnicodeEncodeError" not in msg

-    def test_unroll_all_generator(self, testdir):
-        testdir.makepyfile(
-            """
-            def check_even(num):
-                if num % 2 == 0:
-                    return True
-                return False
-
-            def test_generator():
-                odd_list = list(range(1,9,2))
-                assert all(check_even(num) for num in odd_list)"""
-        )
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(["*assert False*", "*where False = check_even(1)*"])
-
-    def test_unroll_all_list_comprehension(self, testdir):
-        testdir.makepyfile(
-            """
-            def check_even(num):
-                if num % 2 == 0:
-                    return True
-                return False
-
-            def test_list_comprehension():
-                odd_list = list(range(1,9,2))
-                assert all([check_even(num) for num in odd_list])"""
-        )
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(["*assert False*", "*where False = check_even(1)*"])
-
-    def test_unroll_all_object(self, testdir):
-        """all() for non generators/non list-comprehensions (#5358)"""
-        testdir.makepyfile(
-            """
-            def test():
-                assert all((1, 0))
-            """
-        )
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(["*assert False*", "*where False = all((1, 0))*"])
-
-    def test_unroll_all_starred(self, testdir):
-        """all() for non generators/non list-comprehensions (#5358)"""
-        testdir.makepyfile(
-            """
-            def test():
-                x = ((1, 0),)
-                assert all(*x)
-            """
-        )
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(
-            ["*assert False*", "*where False = all(*((1, 0),))*"]
-        )
-
-    def test_for_loop(self, testdir):
-        testdir.makepyfile(
-            """
-            def check_even(num):
-                if num % 2 == 0:
-                    return True
-                return False
-
-            def test_for_loop():
-                odd_list = list(range(1,9,2))
-                for num in odd_list:
-                    assert check_even(num)
-            """
-        )
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(["*assert False*", "*where False = check_even(1)*"])
-

 class TestRewriteOnImport:
     def test_pycache_is_a_file(self, testdir):

@@ -770,7 +725,7 @@ class TestRewriteOnImport:
             import test_gum.test_lizard"""
             % (z_fn,)
         )
-        assert testdir.runpytest().ret == EXIT_NOTESTSCOLLECTED
+        assert testdir.runpytest().ret == ExitCode.NO_TESTS_COLLECTED

     def test_readonly(self, testdir):
         sub = testdir.mkdir("testing")

@@ -826,6 +781,24 @@ def test_rewritten():

         assert testdir.runpytest().ret == 0

+    def test_cached_pyc_includes_pytest_version(self, testdir, monkeypatch):
+        """Avoid stale caches (#1671)"""
+        monkeypatch.delenv("PYTHONDONTWRITEBYTECODE", raising=False)
+        testdir.makepyfile(
+            test_foo="""
+            def test_foo():
+                assert True
+            """
+        )
+        result = testdir.runpytest_subprocess()
+        assert result.ret == 0
+        found_names = glob.glob(
+            "__pycache__/*-pytest-{}.pyc".format(pytest.__version__)
+        )
+        assert found_names, "pyc with expected tag not found in names: {}".format(
+            glob.glob("__pycache__/*.pyc")
+        )
+
     @pytest.mark.skipif('"__pypy__" in sys.modules')
     def test_pyc_vs_pyo(self, testdir, monkeypatch):
         testdir.makepyfile(

@@ -870,7 +843,7 @@ def test_rewritten():
         pkg = testdir.mkdir("a_package_without_init_py")
         pkg.join("module.py").ensure()
         testdir.makepyfile("import a_package_without_init_py.module")
-        assert testdir.runpytest().ret == EXIT_NOTESTSCOLLECTED
+        assert testdir.runpytest().ret == ExitCode.NO_TESTS_COLLECTED

     def test_rewrite_warning(self, testdir):
         testdir.makeconftest(

@@ -909,8 +882,9 @@ def test_rewritten():
         monkeypatch.setattr(
             hook, "_warn_already_imported", lambda code, msg: warnings.append(msg)
         )
-        hook.find_module("test_remember_rewritten_modules")
-        hook.load_module("test_remember_rewritten_modules")
+        spec = hook.find_spec("test_remember_rewritten_modules")
+        module = importlib.util.module_from_spec(spec)
+        hook.exec_module(module)
         hook.mark_rewrite("test_remember_rewritten_modules")
         hook.mark_rewrite("test_remember_rewritten_modules")
         assert warnings == []

@@ -950,33 +924,6 @@ def test_rewritten():


 class TestAssertionRewriteHookDetails:
-    def test_loader_is_package_false_for_module(self, testdir):
-        testdir.makepyfile(
-            test_fun="""
-            def test_loader():
-                assert not __loader__.is_package(__name__)
-            """
-        )
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(["* 1 passed*"])
-
-    def test_loader_is_package_true_for_package(self, testdir):
-        testdir.makepyfile(
-            test_fun="""
-            def test_loader():
-                assert not __loader__.is_package(__name__)
-
-            def test_fun():
-                assert __loader__.is_package('fun')
-
-            def test_missing():
-                assert not __loader__.is_package('pytest_not_there')
-            """
-        )
-        testdir.mkpydir("fun")
-        result = testdir.runpytest()
-        result.stdout.fnmatch_lines(["* 3 passed*"])
-
     def test_sys_meta_path_munged(self, testdir):
         testdir.makepyfile(
             """

@@ -995,7 +942,7 @@ class TestAssertionRewriteHookDetails:
         state = AssertionState(config, "rewrite")
         source_path = tmpdir.ensure("source.py")
         pycpath = tmpdir.join("pyc").strpath
-        assert _write_pyc(state, [1], source_path.stat(), pycpath)
+        assert _write_pyc(state, [1], os.stat(source_path.strpath), pycpath)

     @contextmanager
     def atomic_write_failed(fn, mode="r", overwrite=False):

@@ -1057,7 +1004,7 @@ class TestAssertionRewriteHookDetails:
         assert len(contents) > strip_bytes
         pyc.write(contents[:strip_bytes], mode="wb")

-        assert _read_pyc(source, str(pyc)) is None  # no error
+        assert _read_pyc(str(source), str(pyc)) is None  # no error

     def test_reload_is_same(self, testdir):
         # A file that will be picked up during collecting.

@@ -1264,14 +1211,17 @@ def test_rewrite_infinite_recursion(testdir, pytestconfig, monkeypatch):
        # make a note that we have called _write_pyc
        write_pyc_called.append(True)
        # try to import a module at this point: we should not try to rewrite this module
-        assert hook.find_module("test_bar") is None
+        assert hook.find_spec("test_bar") is None
        return original_write_pyc(*args, **kwargs)

    monkeypatch.setattr(rewrite, "_write_pyc", spy_write_pyc)
    monkeypatch.setattr(sys, "dont_write_bytecode", False)

    hook = AssertionRewritingHook(pytestconfig)
-    assert hook.find_module("test_foo") is not None
+    spec = hook.find_spec("test_foo")
+    assert spec is not None
+    module = importlib.util.module_from_spec(spec)
+    hook.exec_module(module)
    assert len(write_pyc_called) == 1

@@ -1279,11 +1229,11 @@ class TestEarlyRewriteBailout:
     @pytest.fixture
     def hook(self, pytestconfig, monkeypatch, testdir):
         """Returns a patched AssertionRewritingHook instance so we can configure its initial paths and track
-        if imp.find_module has been called.
+        if PathFinder.find_spec has been called.
         """
-        import imp
+        import importlib.machinery

-        self.find_module_calls = []
+        self.find_spec_calls = []
         self.initial_paths = set()

         class StubSession:

@@ -1292,22 +1242,22 @@ class TestEarlyRewriteBailout:
             def isinitpath(self, p):
                 return p in self._initialpaths

-        def spy_imp_find_module(name, path):
-            self.find_module_calls.append(name)
-            return imp.find_module(name, path)
+        def spy_find_spec(name, path):
+            self.find_spec_calls.append(name)
+            return importlib.machinery.PathFinder.find_spec(name, path)

         hook = AssertionRewritingHook(pytestconfig)
         # use default patterns, otherwise we inherit pytest's testing config
         hook.fnpats[:] = ["test_*.py", "*_test.py"]
-        monkeypatch.setattr(hook, "_imp_find_module", spy_imp_find_module)
+        monkeypatch.setattr(hook, "_find_spec", spy_find_spec)
         hook.set_session(StubSession())
         testdir.syspathinsert()
         return hook

     def test_basic(self, testdir, hook):
         """
-        Ensure we avoid calling imp.find_module when we know for sure a certain module will not be rewritten
-        to optimize assertion rewriting (#3918).
+        Ensure we avoid calling PathFinder.find_spec when we know for sure a certain
+        module will not be rewritten to optimize assertion rewriting (#3918).
         """
         testdir.makeconftest(
             """

@@ -1322,37 +1272,37 @@ class TestEarlyRewriteBailout:
         self.initial_paths.add(foobar_path)

         # conftest files should always be rewritten
-        assert hook.find_module("conftest") is not None
-        assert self.find_module_calls == ["conftest"]
+        assert hook.find_spec("conftest") is not None
+        assert self.find_spec_calls == ["conftest"]

         # files matching "python_files" mask should always be rewritten
-        assert hook.find_module("test_foo") is not None
-        assert self.find_module_calls == ["conftest", "test_foo"]
+        assert hook.find_spec("test_foo") is not None
+        assert self.find_spec_calls == ["conftest", "test_foo"]

         # file does not match "python_files": early bailout
-        assert hook.find_module("bar") is None
-        assert self.find_module_calls == ["conftest", "test_foo"]
+        assert hook.find_spec("bar") is None
+        assert self.find_spec_calls == ["conftest", "test_foo"]

         # file is an initial path (passed on the command-line): should be rewritten
-        assert hook.find_module("foobar") is not None
-        assert self.find_module_calls == ["conftest", "test_foo", "foobar"]
+        assert hook.find_spec("foobar") is not None
+        assert self.find_spec_calls == ["conftest", "test_foo", "foobar"]

     def test_pattern_contains_subdirectories(self, testdir, hook):
         """If one of the python_files patterns contain subdirectories ("tests/**.py") we can't bailout early
-        because we need to match with the full path, which can only be found by calling imp.find_module.
+        because we need to match with the full path, which can only be found by calling PathFinder.find_spec
         """
         p = testdir.makepyfile(
             **{
-                "tests/file.py": """
-                def test_simple_failure():
-                    assert 1 + 1 == 3
-                """
+                "tests/file.py": """\
+                def test_simple_failure():
+                    assert 1 + 1 == 3
+                """
             }
         )
         testdir.syspathinsert(p.dirpath())
         hook.fnpats[:] = ["tests/**.py"]
-        assert hook.find_module("file") is not None
-        assert self.find_module_calls == ["file"]
+        assert hook.find_spec("file") is not None
+        assert self.find_spec_calls == ["file"]

     @pytest.mark.skipif(
         sys.platform.startswith("win32"), reason="cannot remove cwd on Windows"

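The hunks in this file all follow the same migration: the deprecated `imp.find_module` / `load_module` protocol is replaced by the importlib finder/loader protocol (`find_spec`, `module_from_spec`, `exec_module`). A sketch of that flow using the stdlib `json` module as a stand-in target (not the rewrite hook's actual code):

```python
import importlib.util
from importlib.machinery import PathFinder

# find_spec(name, None) searches sys.path, the modern analogue of
# imp.find_module(name, path).
spec = PathFinder.find_spec("json", None)
assert spec is not None

# The imp-era load_module step becomes: create the module from its spec,
# then let the loader execute it (the assertion-rewriting hook plugs in
# here, compiling rewritten bytecode before execution).
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
assert module.dumps({"a": 1}) == '{"a": 1}'
```

Returning `None` from `find_spec` is the "early bailout" the tests above exercise: it tells the import machinery this finder declines the module, so the default finders handle it without rewriting.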
@@ -1366,20 +1316,215 @@ class TestEarlyRewriteBailout:

     testdir.makepyfile(
         **{
-            "test_setup_nonexisting_cwd.py": """
-                import os
-                import shutil
-                import tempfile
-
-                d = tempfile.mkdtemp()
-                os.chdir(d)
-                shutil.rmtree(d)
-            """,
-            "test_test.py": """
-                def test():
-                    pass
-            """,
+            "test_setup_nonexisting_cwd.py": """\
+                import os
+                import shutil
+                import tempfile
+
+                d = tempfile.mkdtemp()
+                os.chdir(d)
+                shutil.rmtree(d)
+            """,
+            "test_test.py": """\
+                def test():
+                    pass
+            """,
         }
     )
     result = testdir.runpytest()
     result.stdout.fnmatch_lines(["* 1 passed in *"])


+class TestAssertionPass:
+    def test_option_default(self, testdir):
+        config = testdir.parseconfig()
+        assert config.getini("enable_assertion_pass_hook") is False
+
+    @pytest.fixture
+    def flag_on(self, testdir):
+        testdir.makeini("[pytest]\nenable_assertion_pass_hook = True\n")
+
+    @pytest.fixture
+    def hook_on(self, testdir):
+        testdir.makeconftest(
+            """\
+            def pytest_assertion_pass(item, lineno, orig, expl):
+                raise Exception("Assertion Passed: {} {} at line {}".format(orig, expl, lineno))
+            """
+        )
+
+    def test_hook_call(self, testdir, flag_on, hook_on):
+        testdir.makepyfile(
+            """\
+            def test_simple():
+                a=1
+                b=2
+                c=3
+                d=0
+
+                assert a+b == c+d
+
+            # cover failing assertions with a message
+            def test_fails():
+                assert False, "assert with message"
+            """
+        )
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(
+            "*Assertion Passed: a+b == c+d (1 + 2) == (3 + 0) at line 7*"
+        )
+
+    def test_hook_call_with_parens(self, testdir, flag_on, hook_on):
+        testdir.makepyfile(
+            """\
+            def f(): return 1
+            def test():
+                assert f()
+            """
+        )
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines("*Assertion Passed: f() 1")
+
+    def test_hook_not_called_without_hookimpl(self, testdir, monkeypatch, flag_on):
+        """Assertion pass should not be called (and hence formatting should
+        not occur) if there is no hook declared for pytest_assertion_pass"""
+
+        def raise_on_assertionpass(*_, **__):
+            raise Exception("Assertion passed called when it shouldn't!")
+
+        monkeypatch.setattr(
+            _pytest.assertion.rewrite, "_call_assertion_pass", raise_on_assertionpass
+        )
+
+        testdir.makepyfile(
+            """\
+            def test_simple():
+                a=1
+                b=2
+                c=3
+                d=0
+
+                assert a+b == c+d
+            """
+        )
+        result = testdir.runpytest()
+        result.assert_outcomes(passed=1)
+
+    def test_hook_not_called_without_cmd_option(self, testdir, monkeypatch):
+        """Assertion pass should not be called (and hence formatting should
+        not occur) if there is no hook declared for pytest_assertion_pass"""
+
+        def raise_on_assertionpass(*_, **__):
+            raise Exception("Assertion passed called when it shouldn't!")
+
+        monkeypatch.setattr(
+            _pytest.assertion.rewrite, "_call_assertion_pass", raise_on_assertionpass
+        )
+
+        testdir.makeconftest(
+            """\
+            def pytest_assertion_pass(item, lineno, orig, expl):
+                raise Exception("Assertion Passed: {} {} at line {}".format(orig, expl, lineno))
+            """
+        )
+
+        testdir.makepyfile(
+            """\
+            def test_simple():
+                a=1
+                b=2
+                c=3
+                d=0
+
+                assert a+b == c+d
+            """
+        )
+        result = testdir.runpytest()
+        result.assert_outcomes(passed=1)
+
+
+@pytest.mark.parametrize(
+    ("src", "expected"),
+    (
+        # fmt: off
+        pytest.param(b"", {}, id="trivial"),
+        pytest.param(
+            b"def x(): assert 1\n",
+            {1: "1"},
+            id="assert statement not on own line",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert 1\n"
+            b"    assert 1+2\n",
+            {2: "1", 3: "1+2"},
+            id="multiple assertions",
+        ),
+        pytest.param(
+            # changes in encoding cause the byte offsets to be different
+            "# -*- coding: latin1\n"
+            "def ÀÀÀÀÀ(): assert 1\n".encode("latin1"),
+            {2: "1"},
+            id="latin1 encoded on first line\n",
+        ),
+        pytest.param(
+            # using the default utf-8 encoding
+            "def ÀÀÀÀÀ(): assert 1\n".encode(),
+            {1: "1"},
+            id="utf-8 encoded on first line",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert (\n"
+            b"        1 + 2  # comment\n"
+            b"    )\n",
+            {2: "(\n        1 + 2  # comment\n    )"},
+            id="multi-line assertion",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert y == [\n"
+            b"        1, 2, 3\n"
+            b"    ]\n",
+            {2: "y == [\n        1, 2, 3\n    ]"},
+            id="multi line assert with list continuation",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert 1 + \\\n"
+            b"        2\n",
+            {2: "1 + \\\n        2"},
+            id="backslash continuation",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert x, y\n",
+            {2: "x"},
+            id="assertion with message",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert (\n"
+            b"        f(1, 2, 3)\n"
+            b"    ), 'f did not work!'\n",
+            {2: "(\n        f(1, 2, 3)\n    )"},
+            id="assertion with message, test spanning multiple lines",
+        ),
+        pytest.param(
+            b"def x():\n"
+            b"    assert \\\n"
+            b"        x\\\n"
+            b"        , 'failure message'\n",
+            {2: "x"},
+            id="escaped newlines plus message",
+        ),
+        pytest.param(
+            b"def x(): assert 5",
+            {1: "5"},
+            id="no newline at end of file",
+        ),
+        # fmt: on
+    ),
+)
+def test_get_assertion_exprs(src, expected):
+    assert _get_assertion_exprs(src) == expected

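The `test_get_assertion_exprs` table above maps each `assert` statement's line number to the source text of its test expression. pytest's `_get_assertion_exprs` works over the raw token stream; an approximation of the same mapping using `ast.get_source_segment` (Python 3.8+) — note it returns only the test expression's own segment, so it will not reproduce pytest's exact output for the multi-line parenthesized cases:

```python
import ast


def assertion_exprs(src: bytes):
    """Approximation of _get_assertion_exprs: {lineno: source of the assert's
    test expression}. A sketch, not pytest's tokenize-based implementation."""
    text = src.decode()
    tree = ast.parse(text)
    return {
        node.lineno: ast.get_source_segment(text, node.test)
        for node in ast.walk(tree)
        if isinstance(node, ast.Assert)
    }


assert assertion_exprs(b"def x(): assert 1\n") == {1: "1"}
assert assertion_exprs(b"def x():\n    assert 1\n    assert 1+2\n") == {2: "1", 3: "1+2"}
# the optional message is excluded, matching the {2: "x"} cases above
assert assertion_exprs(b"def x():\n    assert x, y\n") == {2: "x"}
```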
@@ -6,7 +6,7 @@ import textwrap
 import py

 import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode

 pytest_plugins = ("pytester",)

@@ -757,7 +757,7 @@ class TestLastFailed:
                 "* 2 deselected in *",
             ]
         )
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED

     def test_lastfailed_no_failures_behavior_empty_cache(self, testdir):
         testdir.makepyfile(

@@ -12,7 +12,7 @@ import py
 import pytest
 from _pytest import capture
 from _pytest.capture import CaptureManager
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode

 # note: py.io capture tests where copied from
 # pylib 1.4.20.dev2 (rev 13d9af95547e)

@@ -111,10 +111,10 @@ def test_capturing_unicode(testdir, method):
 @pytest.mark.parametrize("method", ["fd", "sys"])
 def test_capturing_bytes_in_utf8_encoding(testdir, method):
     testdir.makepyfile(
-        """
+        """\
         def test_unicode():
             print('b\\u00f6y')
         """
     )
     result = testdir.runpytest("--capture=%s" % method)
     result.stdout.fnmatch_lines(["*1 passed*"])

@ -361,7 +361,7 @@ class TestLoggingInteraction:
|
|||
)
|
||||
# make sure that logging is still captured in tests
|
||||
result = testdir.runpytest_subprocess("-s", "-p", "no:capturelog")
|
||||
assert result.ret == EXIT_NOTESTSCOLLECTED
|
||||
assert result.ret == ExitCode.NO_TESTS_COLLECTED
|
||||
result.stderr.fnmatch_lines(["WARNING*hello435*"])
|
||||
assert "operation on closed file" not in result.stderr.str()
|
||||
|
||||
|
@ -511,7 +511,7 @@ class TestCaptureFixture:
|
|||
"""\
|
||||
def test_hello(capfd):
|
||||
import os
|
||||
os.write(1, "42".encode('ascii'))
|
||||
os.write(1, b"42")
|
||||
out, err = capfd.readouterr()
|
||||
assert out.startswith("42")
|
||||
capfd.close()
|
||||
|
@ -564,7 +564,7 @@ class TestCaptureFixture:
|
|||
"""\
|
||||
def test_hello(capfd):
|
||||
import os
|
||||
os.write(1, str(42).encode('ascii'))
|
||||
os.write(1, b'42')
|
||||
raise KeyboardInterrupt()
|
||||
"""
|
||||
)
|
||||
|
@ -1136,12 +1136,12 @@ class TestStdCaptureFD(TestStdCapture):
|
|||
|
||||
def test_simple_only_fd(self, testdir):
|
||||
testdir.makepyfile(
|
||||
"""
|
||||
"""\
|
||||
import os
|
||||
def test_x():
|
||||
os.write(1, "hello\\n".encode("ascii"))
|
||||
os.write(1, b"hello\\n")
|
||||
assert 0
|
||||
"""
|
||||
"""
|
||||
)
|
||||
result = testdir.runpytest_subprocess()
|
||||
result.stdout.fnmatch_lines(
|
||||
|
|
|
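Several capture hunks above replace `"42".encode('ascii')` with the literal `b"42"`; the two spellings produce identical bytes, and `os.write` only accepts bytes on Python 3, so the literal is the simpler form. A quick self-contained check:

```python
import os

# On Python 3, an ASCII-encoded str and the equivalent bytes literal are the same object value.
assert "42".encode("ascii") == b"42"

# os.write() takes a file descriptor and bytes; demonstrate with a pipe.
r, w = os.pipe()
try:
    os.write(w, b"42")
finally:
    os.close(w)
data = os.read(r, 2)
os.close(r)
assert data == b"42"
```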
@@ -7,8 +7,7 @@ import py

import pytest
from _pytest.main import _in_venv
-from _pytest.main import EXIT_INTERRUPTED
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode
from _pytest.main import Session

@@ -347,7 +346,7 @@ class TestCustomConftests:
        assert result.ret == 0
        result.stdout.fnmatch_lines(["*1 passed*"])
        result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
        result.stdout.fnmatch_lines(["*collected 0 items*"])

    def test_collectignore_exclude_on_option(self, testdir):

@@ -364,7 +363,7 @@ class TestCustomConftests:
        testdir.mkdir("hello")
        testdir.makepyfile(test_world="def test_hello(): pass")
        result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
        assert "passed" not in result.stdout.str()
        result = testdir.runpytest("--XX")
        assert result.ret == 0

@@ -384,7 +383,7 @@ class TestCustomConftests:
        testdir.makepyfile(test_world="def test_hello(): pass")
        testdir.makepyfile(test_welt="def test_hallo(): pass")
        result = testdir.runpytest()
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
        result.stdout.fnmatch_lines(["*collected 0 items*"])
        result = testdir.runpytest("--XX")
        assert result.ret == 0

@@ -1172,7 +1171,7 @@ def test_collectignore_via_conftest(testdir, monkeypatch):
    ignore_me.ensure("conftest.py").write("assert 0, 'should_not_be_called'")

    result = testdir.runpytest()
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


def test_collect_pkg_init_and_file_in_args(testdir):

@@ -1234,7 +1233,7 @@ def test_collect_sub_with_symlinks(use_pkg, testdir):
def test_collector_respects_tbstyle(testdir):
    p1 = testdir.makepyfile("assert 0")
    result = testdir.runpytest(p1, "--tb=native")
-    assert result.ret == EXIT_INTERRUPTED
+    assert result.ret == ExitCode.INTERRUPTED
    result.stdout.fnmatch_lines(
        [
            "*_ ERROR collecting test_collector_respects_tbstyle.py _*",
@@ -10,10 +10,7 @@ from _pytest.config.exceptions import UsageError
from _pytest.config.findpaths import determine_setup
from _pytest.config.findpaths import get_common_ancestor
from _pytest.config.findpaths import getcfg
-from _pytest.main import EXIT_NOTESTSCOLLECTED
-from _pytest.main import EXIT_OK
-from _pytest.main import EXIT_TESTSFAILED
-from _pytest.main import EXIT_USAGEERROR
+from _pytest.main import ExitCode


class TestParseIni:

@@ -189,7 +186,7 @@ class TestConfigCmdlineParsing:

        temp_ini_file = normpath(str(temp_ini_file))
        ret = pytest.main(["-c", temp_ini_file])
-        assert ret == _pytest.main.EXIT_OK
+        assert ret == ExitCode.OK


class TestConfigAPI:

@@ -578,6 +575,29 @@ def test_setuptools_importerror_issue1479(testdir, monkeypatch):
        testdir.parseconfig()


+def test_importlib_metadata_broken_distribution(testdir, monkeypatch):
+    """Integration test for broken distributions with 'files' metadata being None (#5389)"""
+    monkeypatch.delenv("PYTEST_DISABLE_PLUGIN_AUTOLOAD", raising=False)
+
+    class DummyEntryPoint:
+        name = "mytestplugin"
+        group = "pytest11"
+
+        def load(self):
+            return object()
+
+    class Distribution:
+        version = "1.0"
+        files = None
+        entry_points = (DummyEntryPoint(),)
+
+    def distributions():
+        return (Distribution(),)
+
+    monkeypatch.setattr(importlib_metadata, "distributions", distributions)
+    testdir.parseconfig()
+
+
@pytest.mark.parametrize("block_it", [True, False])
def test_plugin_preparse_prevents_setuptools_loading(testdir, monkeypatch, block_it):
    monkeypatch.delenv("PYTEST_DISABLE_PLUGIN_AUTOLOAD", raising=False)

@@ -703,7 +723,7 @@ def test_consider_args_after_options_for_rootdir(testdir, args):
@pytest.mark.skipif("sys.platform == 'win32'")
def test_toolongargs_issue224(testdir):
    result = testdir.runpytest("-m", "hello" * 500)
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


def test_config_in_subdirectory_colon_command_line_issue2148(testdir):

@@ -721,10 +741,10 @@ def test_config_in_subdirectory_colon_command_line_issue2148(testdir):
        **{
            "conftest": conftest_source,
            "subdir/conftest": conftest_source,
-            "subdir/test_foo": """
+            "subdir/test_foo": """\
                def test_foo(pytestconfig):
                    assert pytestconfig.getini('foo') == 'subdir'
-        """,
+            """,
        }
    )

@@ -757,6 +777,12 @@ def test_notify_exception(testdir, capfd):
    assert "ValueError" in err


+def test_no_terminal_discovery_error(testdir):
+    testdir.makepyfile("raise TypeError('oops!')")
+    result = testdir.runpytest("-p", "no:terminal", "--collect-only")
+    assert result.ret == ExitCode.INTERRUPTED
+
+
def test_load_initial_conftest_last_ordering(testdir, _config_for_test):
    pm = _config_for_test.pluginmanager

@@ -1063,7 +1089,7 @@ class TestOverrideIniArgs:
                % (testdir.request.config._parser.optparser.prog,)
            ]
        )
-        assert result.ret == _pytest.main.EXIT_USAGEERROR
+        assert result.ret == _pytest.main.ExitCode.USAGE_ERROR

    def test_override_ini_does_not_contain_paths(self, _config_for_test, _sys_snapshot):
        """Check that -o no longer swallows all options after it (#3103)"""

@@ -1152,13 +1178,13 @@ def test_help_and_version_after_argument_error(testdir):
    )
    # Does not display full/default help.
    assert "to see available markers type: pytest --markers" not in result.stdout.lines
-    assert result.ret == EXIT_USAGEERROR
+    assert result.ret == ExitCode.USAGE_ERROR

    result = testdir.runpytest("--version")
    result.stderr.fnmatch_lines(
        ["*pytest*{}*imported from*".format(pytest.__version__)]
    )
-    assert result.ret == EXIT_USAGEERROR
+    assert result.ret == ExitCode.USAGE_ERROR


def test_config_does_not_load_blocked_plugin_from_args(testdir):

@@ -1166,11 +1192,11 @@ def test_config_does_not_load_blocked_plugin_from_args(testdir):
    p = testdir.makepyfile("def test(capfd): pass")
    result = testdir.runpytest(str(p), "-pno:capture")
    result.stdout.fnmatch_lines(["E fixture 'capfd' not found"])
-    assert result.ret == EXIT_TESTSFAILED
+    assert result.ret == ExitCode.TESTS_FAILED

    result = testdir.runpytest(str(p), "-pno:capture", "-s")
    result.stderr.fnmatch_lines(["*: error: unrecognized arguments: -s"])
-    assert result.ret == EXIT_USAGEERROR
+    assert result.ret == ExitCode.USAGE_ERROR


@pytest.mark.parametrize(

@@ -1196,7 +1222,7 @@ def test_config_blocked_default_plugins(testdir, plugin):
    result = testdir.runpytest(str(p), "-pno:%s" % plugin)

    if plugin == "python":
-        assert result.ret == EXIT_USAGEERROR
+        assert result.ret == ExitCode.USAGE_ERROR
        result.stderr.fnmatch_lines(
            [
                "ERROR: not found: */test_config_blocked_default_plugins.py",

@@ -1205,13 +1231,13 @@ def test_config_blocked_default_plugins(testdir, plugin):
            ]
        )
        return

-    assert result.ret == EXIT_OK
+    assert result.ret == ExitCode.OK
    if plugin != "terminal":
        result.stdout.fnmatch_lines(["* 1 passed in *"])

    p = testdir.makepyfile("def test(): assert 0")
    result = testdir.runpytest(str(p), "-pno:%s" % plugin)
-    assert result.ret == EXIT_TESTSFAILED
+    assert result.ret == ExitCode.TESTS_FAILED
    if plugin != "terminal":
        result.stdout.fnmatch_lines(["* 1 failed in *"])
    else:
@@ -4,9 +4,7 @@ import py

import pytest
from _pytest.config import PytestPluginManager
-from _pytest.main import EXIT_NOTESTSCOLLECTED
-from _pytest.main import EXIT_OK
-from _pytest.main import EXIT_USAGEERROR
+from _pytest.main import ExitCode


def ConftestWithSetinitial(path):

@@ -223,11 +221,11 @@ def test_conftest_symlink(testdir):
            "PASSED",
        ]
    )
-    assert result.ret == EXIT_OK
+    assert result.ret == ExitCode.OK

    # Should not cause "ValueError: Plugin already registered" (#4174).
    result = testdir.runpytest("-vs", "symlink")
-    assert result.ret == EXIT_OK
+    assert result.ret == ExitCode.OK

    realtests.ensure("__init__.py")
    result = testdir.runpytest("-vs", "symlinktests/test_foo.py::test1")

@@ -238,7 +236,7 @@ def test_conftest_symlink(testdir):
            "PASSED",
        ]
    )
-    assert result.ret == EXIT_OK
+    assert result.ret == ExitCode.OK


@pytest.mark.skipif(

@@ -274,16 +272,16 @@ def test_conftest_symlink_files(testdir):
    build.chdir()
    result = testdir.runpytest("-vs", "app/test_foo.py")
    result.stdout.fnmatch_lines(["*conftest_loaded*", "PASSED"])
-    assert result.ret == EXIT_OK
+    assert result.ret == ExitCode.OK


def test_no_conftest(testdir):
    testdir.makeconftest("assert 0")
    result = testdir.runpytest("--noconftest")
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED

    result = testdir.runpytest()
-    assert result.ret == EXIT_USAGEERROR
+    assert result.ret == ExitCode.USAGE_ERROR


def test_conftest_existing_resultlog(testdir):
@@ -1,7 +1,10 @@
+import inspect
import textwrap

import pytest
from _pytest.compat import MODULE_NOT_FOUND_ERROR
+from _pytest.doctest import _is_mocked
+from _pytest.doctest import _patch_unwrap_mock_aware
from _pytest.doctest import DoctestItem
from _pytest.doctest import DoctestModule
from _pytest.doctest import DoctestTextfile

@@ -153,7 +156,7 @@ class TestDoctests:
            )
        )
        doctest = """
-            >>> u"{}"
+            >>> "{}"
            {}
        """.format(
            test_string, repr(test_string)

@@ -671,7 +674,7 @@ class TestDoctests:
            test_print_unicode_value=r"""
            Here is a doctest::

-                >>> print(u'\xE5\xE9\xEE\xF8\xFC')
+                >>> print('\xE5\xE9\xEE\xF8\xFC')
                åéîøü
            """
        )

@@ -1224,3 +1227,24 @@ def test_doctest_mock_objects_dont_recurse_missbehaved(mock_module, testdir):
    )
    result = testdir.runpytest("--doctest-modules")
    result.stdout.fnmatch_lines(["* 1 passed *"])
+
+
+class Broken:
+    def __getattr__(self, _):
+        raise KeyError("This should be an AttributeError")
+
+
+@pytest.mark.parametrize(  # pragma: no branch (lambdas are not called)
+    "stop", [None, _is_mocked, lambda f: None, lambda f: False, lambda f: True]
+)
+def test_warning_on_unwrap_of_broken_object(stop):
+    bad_instance = Broken()
+    assert inspect.unwrap.__module__ == "inspect"
+    with _patch_unwrap_mock_aware():
+        assert inspect.unwrap.__module__ != "inspect"
+        with pytest.warns(
+            pytest.PytestWarning, match="^Got KeyError.* when unwrapping"
+        ):
+            with pytest.raises(KeyError):
+                inspect.unwrap(bad_instance, stop=stop)
+    assert inspect.unwrap.__module__ == "inspect"
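The new `test_warning_on_unwrap_of_broken_object` above targets objects whose `__getattr__` raises something other than AttributeError. Plain `inspect.unwrap` probes for a `__wrapped__` attribute via `hasattr`, which only swallows AttributeError, so any other exception escapes; a small demonstration of the underlying behavior the patched unwrap has to cope with:

```python
import inspect


class Broken:
    """__getattr__ raising a non-AttributeError, mirroring the test above."""

    def __getattr__(self, name):
        raise KeyError("should have been AttributeError")


escaped = False
try:
    inspect.unwrap(Broken())  # probes obj.__wrapped__ internally
except KeyError:
    escaped = True

assert escaped  # the KeyError leaks out of inspect.unwrap
```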
@@ -0,0 +1,103 @@
+import sys
+
+import pytest
+
+
+def test_enabled(testdir):
+    """Test single crashing test displays a traceback."""
+    testdir.makepyfile(
+        """
+        import faulthandler
+        def test_crash():
+            faulthandler._sigabrt()
+        """
+    )
+    result = testdir.runpytest_subprocess()
+    result.stderr.fnmatch_lines(["*Fatal Python error*"])
+    assert result.ret != 0
+
+
+def test_crash_near_exit(testdir):
+    """Test that fault handler displays crashes that happen even after
+    pytest is exiting (for example, when the interpreter is shutting down).
+    """
+    testdir.makepyfile(
+        """
+        import faulthandler
+        import atexit
+        def test_ok():
+            atexit.register(faulthandler._sigabrt)
+        """
+    )
+    result = testdir.runpytest_subprocess()
+    result.stderr.fnmatch_lines(["*Fatal Python error*"])
+    assert result.ret != 0
+
+
+def test_disabled(testdir):
+    """Test option to disable fault handler in the command line.
+    """
+    testdir.makepyfile(
+        """
+        import faulthandler
+        def test_disabled():
+            assert not faulthandler.is_enabled()
+        """
+    )
+    result = testdir.runpytest_subprocess("-p", "no:faulthandler")
+    result.stdout.fnmatch_lines(["*1 passed*"])
+    assert result.ret == 0
+
+
+@pytest.mark.parametrize("enabled", [True, False])
+def test_timeout(testdir, enabled):
+    """Test option to dump tracebacks after a certain timeout.
+    If faulthandler is disabled, no traceback will be dumped.
+    """
+    testdir.makepyfile(
+        """
+        import time
+        def test_timeout():
+            time.sleep(2.0)
+        """
+    )
+    testdir.makeini(
+        """
+        [pytest]
+        faulthandler_timeout = 1
+        """
+    )
+    args = ["-p", "no:faulthandler"] if not enabled else []
+
+    result = testdir.runpytest_subprocess(*args)
+    tb_output = "most recent call first"
+    if sys.version_info[:2] == (3, 3):
+        tb_output = "Thread"
+    if enabled:
+        result.stderr.fnmatch_lines(["*%s*" % tb_output])
+    else:
+        assert tb_output not in result.stderr.str()
+    result.stdout.fnmatch_lines(["*1 passed*"])
+    assert result.ret == 0
+
+
+@pytest.mark.parametrize("hook_name", ["pytest_enter_pdb", "pytest_exception_interact"])
+def test_cancel_timeout_on_hook(monkeypatch, pytestconfig, hook_name):
+    """Make sure that we are cancelling any scheduled traceback dumping due
+    to timeout before entering pdb (pytest-dev/pytest-faulthandler#12) or any other interactive
+    exception (pytest-dev/pytest-faulthandler#14).
+    """
+    import faulthandler
+    from _pytest import faulthandler as plugin_module
+
+    called = []
+
+    monkeypatch.setattr(
+        faulthandler, "cancel_dump_traceback_later", lambda: called.append(1)
+    )
+
+    # call our hook explicitly, we can trust that pytest will call the hook
+    # for us at the appropriate moment
+    hook_func = getattr(plugin_module, hook_name)
+    hook_func()
+    assert called == [1]
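The new test file above drives pytest's faulthandler plugin, which is a thin wrapper over the standard library's `faulthandler` module. The timeout behavior being tested boils down to `faulthandler.dump_traceback_later()` and `cancel_dump_traceback_later()`; a rough sketch of that mechanism in isolation (the helper name and timeout values here are made up for illustration):

```python
import faulthandler
import tempfile
import time


def run_with_watchdog(timeout, workload):
    """Dump all thread tracebacks to a file if `workload` runs longer than `timeout` seconds."""
    with tempfile.TemporaryFile(mode="w+") as fh:
        faulthandler.dump_traceback_later(timeout, file=fh)
        try:
            workload()
        finally:
            # what the plugin does on pytest_enter_pdb / pytest_exception_interact
            faulthandler.cancel_dump_traceback_later()
        fh.seek(0)
        return fh.read()


# A workload that overruns the 0.2s budget triggers the dump...
report = run_with_watchdog(0.2, lambda: time.sleep(0.6))
assert "Timeout" in report and "most recent call first" in report

# ...while a fast one is cancelled in time and leaves the file empty.
assert run_with_watchdog(5.0, lambda: None) == ""
```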
@@ -1,5 +1,5 @@
import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode


def test_version(testdir, pytestconfig):

@@ -49,7 +49,7 @@ def test_hookvalidation_optional(testdir):
        """
    )
    result = testdir.runpytest()
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


def test_traceconfig(testdir):

@@ -59,7 +59,7 @@ def test_traceconfig(testdir):

def test_debug(testdir, monkeypatch):
    result = testdir.runpytest_subprocess("--debug")
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED
    p = testdir.tmpdir.join("pytestdebug.log")
    assert "pytest_sessionstart" in p.read()

@@ -67,7 +67,7 @@ def test_debug(testdir, monkeypatch):
def test_PYTEST_DEBUG(testdir, monkeypatch):
    monkeypatch.setenv("PYTEST_DEBUG", "1")
    result = testdir.runpytest_subprocess()
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED
    result.stderr.fnmatch_lines(
        ["*pytest_plugin_registered*", "*manager*PluginManager*"]
    )
@@ -873,12 +873,12 @@ def test_logxml_check_isdir(testdir):

def test_escaped_parametrized_names_xml(testdir):
    testdir.makepyfile(
-        """
+        """\
        import pytest
-        @pytest.mark.parametrize('char', [u"\\x00"])
+        @pytest.mark.parametrize('char', ["\\x00"])
        def test_func(char):
            assert char
-    """
+        """
    )
    result, dom = runandparse(testdir)
    assert result.ret == 0
@@ -3,7 +3,7 @@ import sys
from unittest import mock

import pytest
-from _pytest.main import EXIT_INTERRUPTED
+from _pytest.main import ExitCode
from _pytest.mark import EMPTY_PARAMETERSET_OPTION
from _pytest.mark import MarkGenerator as Mark
from _pytest.nodes import Collector

@@ -903,7 +903,7 @@ def test_parameterset_for_fail_at_collect(testdir):
            "*= 1 error in *",
        ]
    )
-    assert result.ret == EXIT_INTERRUPTED
+    assert result.ret == ExitCode.INTERRUPTED


def test_parameterset_for_parametrize_bad_markname(testdir):
@@ -370,7 +370,7 @@ def test_skip_test_with_unicode(testdir):
        import unittest
        class TestClass():
            def test_io(self):
-                raise unittest.SkipTest(u'😊')
+                raise unittest.SkipTest('😊')
        """
    )
    result = testdir.runpytest()
@@ -1,3 +1,4 @@
+import os.path
import sys

import py

@@ -53,6 +54,10 @@ class TestPort:
    def test_matching(self, match, pattern, path):
        assert match(pattern, path)

+    def test_matching_abspath(self, match):
+        abspath = os.path.abspath(os.path.join("tests/foo.py"))
+        assert match("tests/foo.py", abspath)
+
    @pytest.mark.parametrize(
        "pattern, path",
        [
@@ -5,7 +5,7 @@ import types
import pytest
from _pytest.config import PytestPluginManager
from _pytest.config.exceptions import UsageError
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode
from _pytest.main import Session

@@ -227,7 +227,7 @@ class TestPytestPluginManager:
        p.copy(p.dirpath("skipping2.py"))
        monkeypatch.setenv("PYTEST_PLUGINS", "skipping2")
        result = testdir.runpytest("-rw", "-p", "skipping1", syspathinsert=True)
-        assert result.ret == EXIT_NOTESTSCOLLECTED
+        assert result.ret == ExitCode.NO_TESTS_COLLECTED
        result.stdout.fnmatch_lines(
            ["*skipped plugin*skipping1*hello*", "*skipped plugin*skipping2*hello*"]
        )
@@ -8,9 +8,7 @@ import py.path
import _pytest.pytester as pytester
import pytest
from _pytest.config import PytestPluginManager
-from _pytest.main import EXIT_NOTESTSCOLLECTED
-from _pytest.main import EXIT_OK
-from _pytest.main import EXIT_TESTSFAILED
+from _pytest.main import ExitCode
from _pytest.pytester import CwdSnapshot
from _pytest.pytester import HookRecorder
from _pytest.pytester import LineMatcher

@@ -189,35 +187,28 @@ def test_hookrecorder_basic(holder):


def test_makepyfile_unicode(testdir):
-    global unichr
-    try:
-        unichr(65)
-    except NameError:
-        unichr = chr
-    testdir.makepyfile(unichr(0xFFFD))
+    testdir.makepyfile(chr(0xFFFD))


def test_makepyfile_utf8(testdir):
    """Ensure makepyfile accepts utf-8 bytes as input (#2738)"""
    utf8_contents = """
    def setup_function(function):
-        mixed_encoding = u'São Paulo'
-    """.encode(
-        "utf-8"
-    )
+        mixed_encoding = 'São Paulo'
+    """.encode()
    p = testdir.makepyfile(utf8_contents)
-    assert "mixed_encoding = u'São Paulo'".encode() in p.read("rb")
+    assert "mixed_encoding = 'São Paulo'".encode() in p.read("rb")


class TestInlineRunModulesCleanup:
    def test_inline_run_test_module_not_cleaned_up(self, testdir):
        test_mod = testdir.makepyfile("def test_foo(): assert True")
        result = testdir.inline_run(str(test_mod))
-        assert result.ret == EXIT_OK
+        assert result.ret == ExitCode.OK
        # rewrite module, now test should fail if module was re-imported
        test_mod.write("def test_foo(): assert False")
        result2 = testdir.inline_run(str(test_mod))
-        assert result2.ret == EXIT_TESTSFAILED
+        assert result2.ret == ExitCode.TESTS_FAILED

    def spy_factory(self):
        class SysModulesSnapshotSpy:

@@ -418,12 +409,12 @@ def test_testdir_subprocess(testdir):

def test_unicode_args(testdir):
    result = testdir.runpytest("-k", "💩")
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


def test_testdir_run_no_timeout(testdir):
    testfile = testdir.makepyfile("def test_no_timeout(): pass")
-    assert testdir.runpytest_subprocess(testfile).ret == EXIT_OK
+    assert testdir.runpytest_subprocess(testfile).ret == ExitCode.OK


def test_testdir_run_with_timeout(testdir):

@@ -436,7 +427,7 @@ def test_testdir_run_with_timeout(testdir):
    end = time.time()
    duration = end - start

-    assert result.ret == EXIT_OK
+    assert result.ret == ExitCode.OK
    assert duration < timeout
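The `test_makepyfile_utf8` cleanup above drops the explicit `"utf-8"` argument from `str.encode`; this is safe because UTF-8 is the default encoding for `str.encode` on Python 3:

```python
s = "São Paulo"

# str.encode() defaults to UTF-8, so the explicit argument is redundant.
assert s.encode() == s.encode("utf-8")

# 'ã' (U+00E3) encodes to the two bytes 0xC3 0xA3 in UTF-8.
assert s.encode() == b"S\xc3\xa3o Paulo"
```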
@@ -632,8 +632,7 @@ def test_pytest_fail_notrace_collection(testdir):
    assert "def some_internal_function()" not in result.stdout.str()


-@pytest.mark.parametrize("str_prefix", ["u", ""])
-def test_pytest_fail_notrace_non_ascii(testdir, str_prefix):
+def test_pytest_fail_notrace_non_ascii(testdir):
    """Fix pytest.fail with pytrace=False with non-ascii characters (#1178).

    This tests with native and unicode strings containing non-ascii chars.

@@ -643,9 +642,8 @@ def test_pytest_fail_notrace_non_ascii(testdir, str_prefix):
        import pytest

        def test_hello():
-            pytest.fail(%s'oh oh: ☺', pytrace=False)
+            pytest.fail('oh oh: ☺', pytrace=False)
        """
-        % str_prefix
    )
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*test_hello*", "oh oh: ☺"])

@@ -655,7 +653,7 @@ def test_pytest_fail_notrace_non_ascii(testdir, str_prefix):
def test_pytest_no_tests_collected_exit_status(testdir):
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*collected 0 items*"])
-    assert result.ret == main.EXIT_NOTESTSCOLLECTED
+    assert result.ret == main.ExitCode.NO_TESTS_COLLECTED

    testdir.makepyfile(
        test_foo="""

@@ -666,12 +664,12 @@ def test_pytest_no_tests_collected_exit_status(testdir):
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*collected 1 item*"])
    result.stdout.fnmatch_lines(["*1 passed*"])
-    assert result.ret == main.EXIT_OK
+    assert result.ret == main.ExitCode.OK

    result = testdir.runpytest("-k nonmatch")
    result.stdout.fnmatch_lines(["*collected 1 item*"])
    result.stdout.fnmatch_lines(["*1 deselected*"])
-    assert result.ret == main.EXIT_NOTESTSCOLLECTED
+    assert result.ret == main.ExitCode.NO_TESTS_COLLECTED


def test_exception_printing_skip():

@@ -790,7 +788,7 @@ def test_unicode_in_longrepr(testdir):
        outcome = yield
        rep = outcome.get_result()
        if rep.when == "call":
-            rep.longrepr = u'ä'
+            rep.longrepr = 'ä'
        """
    )
    testdir.makepyfile(
@@ -1,5 +1,5 @@
import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode


class SessionTests:

@@ -330,7 +330,7 @@ def test_sessionfinish_with_start(testdir):
        """
    )
    res = testdir.runpytest("--collect-only")
-    assert res.ret == EXIT_NOTESTSCOLLECTED
+    assert res.ret == ExitCode.NO_TESTS_COLLECTED


@pytest.mark.parametrize("path", ["root", "{relative}/root", "{environment}/root"])
@@ -156,14 +156,12 @@ def test_change_testfile(stepwise_testdir):
    assert "test_success PASSED" in stdout


-def test_stop_on_collection_errors(broken_testdir):
-    result = broken_testdir.runpytest(
-        "-v",
-        "--strict-markers",
-        "--stepwise",
-        "working_testfile.py",
-        "broken_testfile.py",
-    )
-
-    stdout = result.stdout.str()
-    assert "errors during collection" in stdout
+@pytest.mark.parametrize("broken_first", [True, False])
+def test_stop_on_collection_errors(broken_testdir, broken_first):
+    """Stop during collection errors. Broken test first or broken test last
+    actually surfaced a bug (#5444), so we test both situations."""
+    files = ["working_testfile.py", "broken_testfile.py"]
+    if broken_first:
+        files.reverse()
+    result = broken_testdir.runpytest("-v", "--strict-markers", "--stepwise", *files)
+    result.stdout.fnmatch_lines("*errors during collection*")
@@ -10,7 +10,7 @@ import pluggy
import py

import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode
from _pytest.reports import BaseReport
from _pytest.terminal import _folded_skips
from _pytest.terminal import _get_line_with_reprcrash_message

@@ -937,7 +937,7 @@ def test_tbstyle_short(testdir):
def test_traceconfig(testdir, monkeypatch):
    result = testdir.runpytest("--traceconfig")
    result.stdout.fnmatch_lines(["*active plugins*"])
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


class TestGenericReporting:

@@ -1672,7 +1672,6 @@ def test_line_with_reprcrash(monkeypatch):
    check("😄😄😄😄😄\n2nd line", 29, "FAILED some::nodeid - 😄😄...")

    # NOTE: constructed, not sure if this is supported.
-    # It would fail if not using u"" in Python 2 for mocked_pos.
    mocked_pos = "nodeid::😄::withunicode"
    check("😄😄😄😄😄\n2nd line", 29, "FAILED nodeid::😄::withunicode")
    check("😄😄😄😄😄\n2nd line", 40, "FAILED nodeid::😄::withunicode - 😄😄...")
@@ -24,7 +24,6 @@ def test_ensuretemp(recwarn):
@attr.s
class FakeConfig:
    basetemp = attr.ib()
-    trace = attr.ib(default=None)

    @property
    def trace(self):
@@ -1,7 +1,7 @@
import gc

import pytest
-from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.main import ExitCode


def test_simple_unittest(testdir):

@@ -55,7 +55,7 @@ def test_isclasscheck_issue53(testdir):
        """
    )
    result = testdir.runpytest(testpath)
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


def test_setup(testdir):

@@ -139,6 +139,29 @@ def test_new_instances(testdir):
    reprec.assertoutcome(passed=2)


+def test_function_item_obj_is_instance(testdir):
+    """item.obj should be a bound method on unittest.TestCase function items (#5390)."""
+    testdir.makeconftest(
+        """
+        def pytest_runtest_makereport(item, call):
+            if call.when == 'call':
+                class_ = item.parent.obj
+                assert isinstance(item.obj.__self__, class_)
+        """
+    )
+    testdir.makepyfile(
+        """
+        import unittest
+
+        class Test(unittest.TestCase):
+            def test_foo(self):
+                pass
+        """
+    )
+    result = testdir.runpytest_inprocess()
+    result.stdout.fnmatch_lines(["* 1 passed in*"])
+
+
def test_teardown(testdir):
    testpath = testdir.makepyfile(
        """

@@ -681,7 +704,7 @@ def test_unorderable_types(testdir):
    )
    result = testdir.runpytest()
    assert "TypeError" not in result.stdout.str()
-    assert result.ret == EXIT_NOTESTSCOLLECTED
+    assert result.ret == ExitCode.NO_TESTS_COLLECTED


def test_unittest_typerror_traceback(testdir):
@@ -0,0 +1,37 @@
+import inspect
+
+import _pytest.warning_types
+import pytest
+
+
+@pytest.mark.parametrize(
+    "warning_class",
+    [
+        w
+        for n, w in vars(_pytest.warning_types).items()
+        if inspect.isclass(w) and issubclass(w, Warning)
+    ],
+)
+def test_warning_types(warning_class):
+    """Make sure all warnings declared in _pytest.warning_types are displayed as coming
+    from 'pytest' instead of the internal module (#5452).
+    """
+    assert warning_class.__module__ == "pytest"
+
+
+@pytest.mark.filterwarnings("error::pytest.PytestWarning")
+def test_pytest_warnings_repr_integration_test(testdir):
+    """Small integration test to ensure our small hack of setting the __module__ attribute
+    of our warnings actually works (#5452).
+    """
+    testdir.makepyfile(
+        """
+        import pytest
+        import warnings
+
+        def test():
+            warnings.warn(pytest.PytestWarning("some warning"))
+        """
+    )
+    result = testdir.runpytest()
+    result.stdout.fnmatch_lines(["E pytest.PytestWarning: some warning"])
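The `__module__` trick verified by the new test file above is plain attribute assignment on the class: Python builds a class's displayed qualified name from `__module__`, so rebinding it changes how a warning is reported. A hedged sketch with a stand-in warning class (the `DemoWarning` name is made up, not pytest's):

```python
import warnings


class DemoWarning(UserWarning):
    """Stand-in for a plugin-internal warning class; the name is illustrative only."""


# The hack under test: pretend the class lives in a public 'pytest' namespace.
DemoWarning.__module__ = "pytest"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn(DemoWarning("some warning"))

# The recorded warning's category now reports the public module name.
assert caught[0].category.__module__ == "pytest"
```

The class repr follows suit, which is why tracebacks and `-W` filters see `pytest.DemoWarning` rather than the internal module path.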
@ -130,7 +130,7 @@ def test_unicode(testdir, pyfile_with_warnings):
|
|||
|
||||
@pytest.fixture
|
||||
def fix():
|
||||
warnings.warn(u"测试")
|
||||
warnings.warn("测试")
|
||||
yield
|
||||
|
||||
def test_func(fix):
|
||||
|
@@ -207,13 +207,13 @@ def test_filterwarnings_mark(testdir, default_config):
 def test_non_string_warning_argument(testdir):
     """Non-str argument passed to warning breaks pytest (#2956)"""
     testdir.makepyfile(
-        """
+        """\
         import warnings
         import pytest

         def test():
-            warnings.warn(UserWarning(1, u'foo'))
-        """
+            warnings.warn(UserWarning(1, 'foo'))
+        """
     )
     result = testdir.runpytest("-W", "always")
     result.stdout.fnmatch_lines(["*= 1 passed, 1 warnings in *"])
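The `"""\` change in the hunk above relies on standard Python string syntax: a backslash immediately after the opening triple quote is a line continuation, so the string's leading newline is suppressed and the embedded source starts at its first real line. A quick illustration:

```python
# Without the backslash, the string begins with a newline.
without = """
import warnings
"""

# With the backslash, the line continuation drops the leading newline.
with_backslash = """\
import warnings
"""

assert without.startswith("\n")
assert with_backslash.startswith("import")
```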
@@ -528,6 +528,37 @@ def test_removed_in_pytest4_warning_as_error(testdir, change_default):
     result.stdout.fnmatch_lines(["* 1 passed in *"])


+@pytest.mark.parametrize("change_default", [None, "ini", "cmdline"])
+def test_deprecation_warning_as_error(testdir, change_default):
+    testdir.makepyfile(
+        """
+        import warnings, pytest
+        def test():
+            warnings.warn(pytest.PytestDeprecationWarning("some warning"))
+        """
+    )
+    if change_default == "ini":
+        testdir.makeini(
+            """
+            [pytest]
+            filterwarnings =
+                ignore::pytest.PytestDeprecationWarning
+            """
+        )
+
+    args = (
+        ("-Wignore::pytest.PytestDeprecationWarning",)
+        if change_default == "cmdline"
+        else ()
+    )
+    result = testdir.runpytest(*args)
+    if change_default is None:
+        result.stdout.fnmatch_lines(["* 1 failed in *"])
+    else:
+        assert change_default in ("ini", "cmdline")
+        result.stdout.fnmatch_lines(["* 1 passed in *"])
+
+
 class TestAssertionWarnings:
     @staticmethod
     def assert_result_warns(result, msg):
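The added test above exercises warning filters: the warning is an error by default, and an explicit ignore filter (via ini or command line) overrides that. The underlying stdlib behavior can be sketched without pytest — the `PytestDeprecationWarning` here is a local stand-in class, not imported from pytest:

```python
import warnings


class PytestDeprecationWarning(DeprecationWarning):
    """Stand-in for pytest's warning class (not the real one)."""


# With an "error" filter, warning becomes an exception
# (the change_default=None branch of the test above).
with warnings.catch_warnings():
    warnings.simplefilter("error", PytestDeprecationWarning)
    try:
        warnings.warn(PytestDeprecationWarning("some warning"))
        raised = False
    except PytestDeprecationWarning:
        raised = True
assert raised

# A later-installed "ignore" filter takes precedence over the earlier
# "error" filter (the "ini"/"cmdline" branches), so no exception here.
with warnings.catch_warnings():
    warnings.simplefilter("error", PytestDeprecationWarning)
    warnings.simplefilter("ignore", PytestDeprecationWarning)
    warnings.warn(PytestDeprecationWarning("some warning"))  # silently ignored
```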
8
tox.ini
@@ -20,10 +20,10 @@ envlist =
 commands =
     {env:_PYTEST_TOX_COVERAGE_RUN:} pytest {posargs:{env:_PYTEST_TOX_DEFAULT_POSARGS:}}
     coverage: coverage combine
-    coverage: coverage report
+    coverage: coverage report -m
 passenv = USER USERNAME COVERAGE_* TRAVIS PYTEST_ADDOPTS
 setenv =
-    _PYTEST_TOX_DEFAULT_POSARGS={env:_PYTEST_TOX_POSARGS_LSOF:} {env:_PYTEST_TOX_POSARGS_PEXPECT:} {env:_PYTEST_TOX_POSARGS_TWISTED:} {env:_PYTEST_TOX_POSARGS_XDIST:}
+    _PYTEST_TOX_DEFAULT_POSARGS={env:_PYTEST_TOX_POSARGS_LSOF:} {env:_PYTEST_TOX_POSARGS_XDIST:}

 # Configuration to run with coverage similar to Travis/Appveyor, e.g.
 # "tox -e py37-coverage".
@@ -37,9 +37,6 @@ setenv =
     lsof: _PYTEST_TOX_POSARGS_LSOF=--lsof

     pexpect: _PYTEST_TOX_PLATFORM=linux|darwin
-    pexpect: _PYTEST_TOX_POSARGS_PEXPECT=-m uses_pexpect
-
-    twisted: _PYTEST_TOX_POSARGS_TWISTED=testing/test_unittest.py

     xdist: _PYTEST_TOX_POSARGS_XDIST=-n auto
 extras = testing
@@ -99,7 +96,6 @@ commands =

 [testenv:py37-freeze]
 changedir = testing/freeze
-# Disable PEP 517 with pip, which does not work with PyInstaller currently.
 deps =
     pyinstaller
 commands =