Merge remote-tracking branch 'upstream/features' into jonozzz/features

This commit is contained in:
Bruno Oliveira 2018-07-05 18:15:17 -03:00
commit 3c19370cec
288 changed files with 16932 additions and 12480 deletions

2
.gitignore vendored
View File

@ -19,7 +19,7 @@ include/
.hypothesis/
# autogenerated
_pytest/_version.py
src/_pytest/_version.py
# setuptools
.eggs/

36
.pre-commit-config.yaml Normal file
View File

@ -0,0 +1,36 @@
exclude: doc/en/example/py2py3/test_py2.py
repos:
- repo: https://github.com/ambv/black
rev: 18.6b4
hooks:
- id: black
args: [--safe, --quiet]
language_version: python3.6
- repo: https://github.com/asottile/blacken-docs
rev: v0.2.0
hooks:
- id: blacken-docs
additional_dependencies: [black==18.6b4]
language_version: python3.6
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v1.3.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: debug-statements
exclude: _pytest/debugging.py
- id: flake8
- repo: https://github.com/asottile/pyupgrade
rev: v1.2.0
hooks:
- id: pyupgrade
- repo: local
hooks:
- id: rst
name: rst
entry: rst-lint --encoding utf-8
files: ^(CHANGELOG.rst|HOWTORELEASE.rst|README.rst|changelog/.*)$
language: python
additional_dependencies: [pygments, restructuredtext_lint]
python_version: python3.6

View File

@ -1,5 +1,9 @@
sudo: false
language: python
stages:
- linting
- test
- deploy
python:
- '3.6'
install:
@ -9,7 +13,7 @@ env:
# coveralls is not listed in tox's envlist, but should run in travis
- TOXENV=coveralls
# note: please use "tox --listenvs" to populate the build matrix below
- TOXENV=linting
# please remove the linting env in all cases
- TOXENV=py27
- TOXENV=py34
- TOXENV=py36
@ -33,8 +37,8 @@ jobs:
python: 'pypy-5.4'
- env: TOXENV=py35
python: '3.5'
- env: TOXENV=py35-freeze
python: '3.5'
- env: TOXENV=py36-freeze
python: '3.6'
- env: TOXENV=py37
python: 'nightly'
@ -53,6 +57,14 @@ jobs:
on:
tags: true
repo: pytest-dev/pytest
- stage: linting
python: '3.6'
env:
install:
- pip install pre-commit
- pre-commit install-hooks
script:
- pre-commit run --all-files
script: tox --recreate
@ -65,3 +77,7 @@ notifications:
skip_join: true
email:
- pytest-commit@python.org
cache:
directories:
- $HOME/.cache/pip
- $HOME/.cache/pre-commit

View File

@ -23,6 +23,7 @@ Antony Lee
Armin Rigo
Aron Coyle
Aron Curzon
Aviral Verma
Aviv Palivoda
Barney Gale
Ben Webb
@ -81,6 +82,7 @@ Greg Price
Grig Gheorghiu
Grigorii Eremeev (budulianin)
Guido Wesdorp
Guoqiang Zhang
Harald Armin Massa
Henk-Jaap Wagenaar
Hugo van Kemenade
@ -125,6 +127,7 @@ Maik Figura
Mandeep Bhutani
Manuel Krebber
Marc Schlaich
Marcelo Duarte Trevisani
Marcin Bachry
Mark Abramowitz
Markus Unterwaditzer
@ -146,6 +149,7 @@ Michael Seifert
Michal Wajszczuk
Mihai Capotă
Mike Lundy
Miro Hrončok
Nathaniel Waisbrot
Ned Batchelder
Neven Mundar
@ -155,6 +159,7 @@ Oleg Sushchenko
Oliver Bestwalter
Omar Kohl
Omer Hadari
Ondřej Súkup
Patrick Hayes
Paweł Adamczak
Pedro Algarvio
@ -178,6 +183,7 @@ Ryan Wooden
Samuel Dion-Girardeau
Samuele Pedroni
Segev Finer
Serhii Mozghovyi
Simon Gomizelj
Skylar Downes
Srinivas Reddy Thatiparthy
@ -202,6 +208,7 @@ Victor Uriarte
Vidar T. Fauske
Vitaly Lashmanov
Vlad Dragos
Wil Cooley
William Lee
Wouter van Ackooy
Xuan Luong

View File

@ -8,6 +8,228 @@
.. towncrier release notes start
Pytest 3.6.3 (2018-07-04)
=========================
Bug Fixes
---------
- Fix ``ImportWarning`` triggered by explicit relative imports in
assertion-rewritten package modules. (`#3061
<https://github.com/pytest-dev/pytest/issues/3061>`_)
- Fix error in ``pytest.approx`` when dealing with 0-dimension numpy
arrays. (`#3593 <https://github.com/pytest-dev/pytest/issues/3593>`_)
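  A minimal sketch of the kind of comparison this fix concerns (assumes ``numpy`` is installed)::

      import numpy as np
      import pytest

      def test_zero_dim_approx():
          expected = np.array(1.0)  # a 0-dimension numpy array
          assert 1.0001 == pytest.approx(expected, rel=1e-3)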
- No longer raise ``ValueError`` when using the ``get_marker`` API. (`#3605
<https://github.com/pytest-dev/pytest/issues/3605>`_)
- Fix problem where log messages with non-ascii characters would not
appear in the output log file.
(`#3630 <https://github.com/pytest-dev/pytest/issues/3630>`_)
- No longer raise ``AttributeError`` when legacy marks can't be stored in
functions. (`#3631 <https://github.com/pytest-dev/pytest/issues/3631>`_)
Improved Documentation
----------------------
- The description above the example for ``@pytest.mark.skipif`` now better
matches the code. (`#3611
<https://github.com/pytest-dev/pytest/issues/3611>`_)
Trivial/Internal Changes
------------------------
- Internal refactoring: removed unused ``CallSpec2._globalid_args``
attribute and ``metafunc`` parameter from ``CallSpec2.copy()``. (`#3598
<https://github.com/pytest-dev/pytest/issues/3598>`_)
- Silence usage of ``reduce`` warning in Python 2 (`#3609
<https://github.com/pytest-dev/pytest/issues/3609>`_)
- Fix usage of ``attr.ib`` deprecated ``convert`` parameter. (`#3653
<https://github.com/pytest-dev/pytest/issues/3653>`_)
Pytest 3.6.2 (2018-06-20)
=========================
Bug Fixes
---------
- Fix regression in ``Node.add_marker`` by extracting the mark object of a
``MarkDecorator``. (`#3555
<https://github.com/pytest-dev/pytest/issues/3555>`_)
- Warnings without ``location`` were reported as ``None``. This is corrected to
now report ``<undetermined location>``. (`#3563
<https://github.com/pytest-dev/pytest/issues/3563>`_)
- Continue to call finalizers in the stack when a finalizer in a former scope
raises an exception. (`#3569
<https://github.com/pytest-dev/pytest/issues/3569>`_)
- Fix encoding error with ``print`` statements in doctests (`#3583
<https://github.com/pytest-dev/pytest/issues/3583>`_)
Improved Documentation
----------------------
- Add documentation for the ``--strict`` flag. (`#3549
<https://github.com/pytest-dev/pytest/issues/3549>`_)
Trivial/Internal Changes
------------------------
- Update old quotation style to parens in fixture.rst documentation. (`#3525
<https://github.com/pytest-dev/pytest/issues/3525>`_)
- Improve display of hint about ``--fulltrace`` with ``KeyboardInterrupt``.
(`#3545 <https://github.com/pytest-dev/pytest/issues/3545>`_)
- pytest's testsuite is no longer runnable through ``python setup.py test`` --
instead invoke ``pytest`` or ``tox`` directly. (`#3552
<https://github.com/pytest-dev/pytest/issues/3552>`_)
- Fix typo in documentation (`#3567
<https://github.com/pytest-dev/pytest/issues/3567>`_)
Pytest 3.6.1 (2018-06-05)
=========================
Bug Fixes
---------
- Fixed a bug where stdout and stderr were logged twice by junitxml when a test
was marked xfail. (`#3491
<https://github.com/pytest-dev/pytest/issues/3491>`_)
- Fix ``usefixtures`` mark applied to unittest tests by correctly instantiating
``FixtureInfo``. (`#3498
<https://github.com/pytest-dev/pytest/issues/3498>`_)
- Fix assertion rewriter compatibility with libraries that monkey patch
``file`` objects. (`#3503
<https://github.com/pytest-dev/pytest/issues/3503>`_)
Improved Documentation
----------------------
- Added a section on how to use fixtures as factories to the fixture
documentation. (`#3461 <https://github.com/pytest-dev/pytest/issues/3461>`_)
Trivial/Internal Changes
------------------------
- Enable caching for pip/pre-commit in order to reduce build time on
travis/appveyor. (`#3502
<https://github.com/pytest-dev/pytest/issues/3502>`_)
- Switch pytest to the src/ layout as we already suggested it for good practice
- now we implement it as well. (`#3513
<https://github.com/pytest-dev/pytest/issues/3513>`_)
- Fix an ``if`` in the tests to support 3.7.0b5, where docstring handling in the AST was
reverted. (`#3530 <https://github.com/pytest-dev/pytest/issues/3530>`_)
- Remove some python2.5 compatibility code. (`#3529
<https://github.com/pytest-dev/pytest/issues/3529>`_)
Pytest 3.6.0 (2018-05-23)
=========================
Features
--------
- Revamp the internals of the ``pytest.mark`` implementation with correct per
node handling which fixes a number of long standing bugs caused by the old
design. This introduces new ``Node.iter_markers(name)`` and
``Node.get_closest_marker(name)`` APIs. Users are **strongly encouraged** to
read the `reasons for the revamp in the docs
<https://docs.pytest.org/en/latest/mark.html#marker-revamp-and-iteration>`_,
or jump over to details about `updating existing code to use the new APIs
<https://docs.pytest.org/en/latest/mark.html#updating-code>`_. (`#3317
<https://github.com/pytest-dev/pytest/issues/3317>`_)
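  A minimal sketch of the new per-node API, e.g. in a hypothetical ``conftest.py``::

      import pytest

      def pytest_collection_modifyitems(items):
          for item in items:
              # walk marks from the closest node outwards with the new API
              if item.get_closest_marker("slow") is not None:
                  item.add_marker(pytest.mark.skip(reason="slow tests disabled here"))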
- Now when ``@pytest.fixture`` is applied more than once to the same function a
``ValueError`` is raised. This buggy behavior would cause surprising problems
and if it was working for a test suite it was mostly by accident. (`#2334
<https://github.com/pytest-dev/pytest/issues/2334>`_)
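  For illustration, stacking the decorator like this now fails loudly when the module is imported::

      import pytest

      @pytest.fixture
      @pytest.fixture  # applying the decorator twice raises ValueError
      def stacked_fixture():
          return object()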
- Support for Python 3.7's builtin ``breakpoint()`` function, see `Using the
builtin breakpoint function
<https://docs.pytest.org/en/latest/usage.html#breakpoint-builtin>`_ for
details. (`#3180 <https://github.com/pytest-dev/pytest/issues/3180>`_)
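  A minimal sketch (Python 3.7+ only) of where this applies::

      def test_debugging_here():
          value = 41
          breakpoint()  # pytest integrates with the builtin breakpoint() hook
          assert value + 1 == 42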
- ``monkeypatch`` now supports a ``context()`` function which acts as a context
manager which undoes all patching done within the ``with`` block. (`#3290
<https://github.com/pytest-dev/pytest/issues/3290>`_)
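  A minimal sketch of the new context-manager form::

      import functools

      def test_partial(monkeypatch):
          with monkeypatch.context() as m:
              m.setattr(functools, "partial", 3)
              assert functools.partial == 3
          # the patch is undone as soon as the with block exits
          assert callable(functools.partial)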
- The ``--pdb`` option now causes KeyboardInterrupt to enter the debugger,
instead of stopping the test session. On python 2.7, hitting CTRL+C again
exits the debugger. On python 3.2 and higher, use CTRL+D. (`#3299
<https://github.com/pytest-dev/pytest/issues/3299>`_)
- pytest no longer changes the log level of the root logger when the
``log-level`` parameter has greater numeric value than that of the level of
the root logger, which makes it play better with custom logging configuration
in user code. (`#3307 <https://github.com/pytest-dev/pytest/issues/3307>`_)
Bug Fixes
---------
- A rare race-condition which might result in corrupted ``.pyc`` files on
Windows has been hopefully solved. (`#3008
<https://github.com/pytest-dev/pytest/issues/3008>`_)
- Also use ``iter_markers`` for discovering the marks that apply when evaluating marker
expressions from the CLI, to avoid bad data from the legacy mark storage.
(`#3441 <https://github.com/pytest-dev/pytest/issues/3441>`_)
- When showing diffs of failed assertions where the contents contain only
whitespace, escape them using ``repr()`` first to make it easy to spot the
differences. (`#3443 <https://github.com/pytest-dev/pytest/issues/3443>`_)
Improved Documentation
----------------------
- Change documentation copyright year to a range which auto-updates itself each
time it is published. (`#3303
<https://github.com/pytest-dev/pytest/issues/3303>`_)
Trivial/Internal Changes
------------------------
- ``pytest`` now depends on the `python-atomicwrites
<https://github.com/untitaker/python-atomicwrites>`_ library. (`#3008
<https://github.com/pytest-dev/pytest/issues/3008>`_)
- Update all pypi.python.org URLs to pypi.org. (`#3431
<https://github.com/pytest-dev/pytest/issues/3431>`_)
- Detect `pytest_` prefixed hooks using the internal plugin manager since
``pluggy`` is deprecating the ``implprefix`` argument to ``PluginManager``.
(`#3487 <https://github.com/pytest-dev/pytest/issues/3487>`_)
- Import ``Mapping`` and ``Sequence`` from ``_pytest.compat`` instead of
directly from ``collections`` in ``python_api.py::approx``. Add ``Mapping``
to ``_pytest.compat``, import it from ``collections`` on python 2, but from
``collections.abc`` on Python 3 to avoid a ``DeprecationWarning`` on Python
3.7 or newer. (`#3497 <https://github.com/pytest-dev/pytest/issues/3497>`_)
Pytest 3.5.1 (2018-04-23)
=========================
@ -114,7 +336,7 @@ Features
- Captured log messages are added to the ``<system-out>`` tag in the generated
junit xml file if the ``junit_logging`` ini option is set to ``system-out``.
If the value of this ini option is ``system-err`, the logs are written to
If the value of this ini option is ``system-err``, the logs are written to
``<system-err>``. The default value for ``junit_logging`` is ``no``, meaning
captured logs are not written to the output file. (`#3156
<https://github.com/pytest-dev/pytest/issues/3156>`_)
@ -1206,7 +1428,7 @@ Changes
* Testcase reports with a ``url`` attribute will now properly write this to junitxml.
Thanks `@fushi`_ for the PR (`#1874`_).
* Remove common items from dict comparision output when verbosity=1. Also update
* Remove common items from dict comparison output when verbosity=1. Also update
the truncation message to make it clearer that pytest truncates all
assertion messages if verbosity < 2 (`#1512`_).
Thanks `@mattduck`_ for the PR
@ -1218,7 +1440,7 @@ Changes
* fix `#2013`_: turn RecordedWarning into ``namedtuple``,
to give it a comprehensible repr while preventing unwarranted modification.
* fix `#2208`_: ensure a iteration limit for _pytest.compat.get_real_func.
* fix `#2208`_: ensure an iteration limit for _pytest.compat.get_real_func.
Thanks `@RonnyPfannschmidt`_ for the report and PR.
* Hooks are now verified after collection is complete, rather than right after loading installed plugins. This
@ -1322,7 +1544,7 @@ Bug Fixes
Notably, importing the ``anydbm`` module is fixed. (`#2248`_).
Thanks `@pfhayes`_ for the PR.
* junitxml: Fix problematic case where system-out tag occured twice per testcase
* junitxml: Fix problematic case where system-out tag occurred twice per testcase
element in the XML report. Thanks `@kkoukiou`_ for the PR.
* Fix regression, pytest now skips unittest correctly if run with ``--pdb``
@ -2918,7 +3140,7 @@ time or change existing behaviors in order to make them less surprising/more use
"::" node id specifications (copy pasted from "-v" output)
- fix issue544 by only removing "@NUM" at the end of "::" separated parts
and if the part has an ".py" extension
and if the part has a ".py" extension
- don't use py.std import helper, rather import things directly.
Thanks Bruno Oliveira.
@ -3189,7 +3411,7 @@ time or change existing behaviors in order to make them less surprising/more use
would not work correctly because pytest assumes @pytest.mark.some
gets a function to be decorated already. We now at least detect if this
arg is an lambda and thus the example will work. Thanks Alex Gaynor
arg is a lambda and thus the example will work. Thanks Alex Gaynor
for bringing it up.
- xfail a test on pypy that checks wrong encoding/ascii (pypy does
@ -3502,7 +3724,7 @@ Bug fixes:
rather use the post-2.0 parametrize features instead of yield, see:
http://pytest.org/latest/example/parametrize.html
- fix autouse-issue where autouse-fixtures would not be discovered
if defined in a a/conftest.py file and tests in a/tests/test_some.py
if defined in an a/conftest.py file and tests in a/tests/test_some.py
- fix issue226 - LIFO ordering for fixture teardowns
- fix issue224 - invocations with >256 char arguments now work
- fix issue91 - add/discuss package/directory level setups in example
@ -4072,7 +4294,7 @@ Bug fixes:
- make path.bestrelpath(path) return ".", note that when calling
X.bestrelpath the assumption is that X is a directory.
- make initial conftest discovery ignore "--" prefixed arguments
- fix resultlog plugin when used in an multicpu/multihost xdist situation
- fix resultlog plugin when used in a multicpu/multihost xdist situation
(thanks Jakub Gustak)
- perform distributed testing related reporting in the xdist-plugin
rather than having dist-related code in the generic py.test

View File

@ -162,10 +162,11 @@ Preparing Pull Requests
Short version
~~~~~~~~~~~~~
#. Fork the repository;
#. Target ``master`` for bugfixes and doc changes;
#. Fork the repository.
#. Enable and install `pre-commit <https://pre-commit.com>`_ to ensure style-guides and code checks are followed.
#. Target ``master`` for bugfixes and doc changes.
#. Target ``features`` for new features or functionality changes.
#. Follow **PEP-8**. There's a ``tox`` command to help fixing it: ``tox -e fix-lint``.
#. Follow **PEP-8** for naming and `black <https://github.com/ambv/black>`_ for formatting.
#. Tests are run using ``tox``::
tox -e linting,py27,py36
@ -176,7 +177,7 @@ Short version
and one of ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or
``trivial`` for the issue type.
#. Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please
add yourself to the ``AUTHORS`` file, in alphabetical order;
add yourself to the ``AUTHORS`` file, in alphabetical order.
Long version
@ -216,6 +217,16 @@ Here is a simple overview, with pytest-specific bits:
If you need some help with Git, follow this quick start
guide: https://git.wiki.kernel.org/index.php/QuickStart
#. Install `pre-commit <https://pre-commit.com>`_ and its hook on the pytest repo::
$ pip install --user pre-commit
$ pre-commit install
Afterwards ``pre-commit`` will run whenever you commit.
https://pre-commit.com/ is a framework for managing and maintaining multi-language pre-commit hooks
to ensure code-style and code formatting is consistent.
#. Install tox
Tox is used to run all the tests and will automatically setup virtualenvs
@ -234,15 +245,7 @@ Here is a simple overview, with pytest-specific bits:
This command will run tests via the "tox" tool against Python 2.7 and 3.6
and also perform "lint" coding-style checks.
#. You can now edit your local working copy. Please follow PEP-8.
You can now make the changes you want and run the tests again as necessary.
If you have too much linting errors, try running::
$ tox -e fix-lint
To fix pep8 related errors.
#. You can now edit your local working copy and run the tests again as necessary. Please follow PEP-8 for naming.
You can pass different options to ``tox``. For example, to run tests on Python 2.7 and pass options
(e.g. enter pdb on failure) to pytest you can do::
@ -253,6 +256,9 @@ Here is a simple overview, with pytest-specific bits:
$ tox -e py36 -- testing/test_config.py
When committing, ``pre-commit`` will re-format the files if necessary.
#. Commit and push once your tests pass and you are happy with your change(s)::
$ git commit -a -m "<commit message>"

View File

@ -10,10 +10,6 @@ taking a lot of time to make a new one.
pytest releases must be prepared on **Linux** because the docs and examples expect
to be executed in that platform.
#. Install development dependencies in a virtual environment with::
pip3 install -U -r tasks/requirements.txt
#. Create a branch ``release-X.Y.Z`` with the version for the release.
* **patch releases**: from the latest ``master``;
@ -22,9 +18,19 @@ taking a lot of time to make a new one.
Ensure you are in a clean work tree.
#. Generate docs, changelog, announcements and a **local** tag::
#. Install development dependencies in a virtual environment with::
$ pip3 install -U -r tasks/requirements.txt
#. Generate docs, changelog, announcements, and a **local** tag::
$ invoke generate.pre-release <VERSION>
#. Execute pre-commit on all files to ensure the docs are conformant and commit your results::
$ pre-commit run --all-files
$ git commit -am "Fix files with pre-commit"
invoke generate.pre-release <VERSION>
#. Open a PR for this branch targeting ``master``.

View File

@ -6,13 +6,13 @@
------
.. image:: https://img.shields.io/pypi/v/pytest.svg
:target: https://pypi.python.org/pypi/pytest
:target: https://pypi.org/project/pytest/
.. image:: https://anaconda.org/conda-forge/pytest/badges/version.svg
.. image:: https://img.shields.io/conda/vn/conda-forge/pytest.svg
:target: https://anaconda.org/conda-forge/pytest
.. image:: https://img.shields.io/pypi/pyversions/pytest.svg
:target: https://pypi.python.org/pypi/pytest
:target: https://pypi.org/project/pytest/
.. image:: https://img.shields.io/coveralls/pytest-dev/pytest/master.svg
:target: https://coveralls.io/r/pytest-dev/pytest
@ -23,6 +23,9 @@
.. image:: https://ci.appveyor.com/api/projects/status/mrgbjaua7t33pg6b?svg=true
:target: https://ci.appveyor.com/project/pytestbot/pytest
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/ambv/black
.. image:: https://www.codetriage.com/pytest-dev/pytest/badges/users.svg
:target: https://www.codetriage.com/pytest-dev/pytest
@ -37,6 +40,7 @@ An example of a simple test:
def inc(x):
return x + 1
def test_answer():
assert inc(3) == 5

View File

@ -14,7 +14,7 @@ environment:
- TOXENV: "py34"
- TOXENV: "py35"
- TOXENV: "py36"
- TOXENV: "pypy"
# - TOXENV: "pypy" reenable when we are able to provide a scandir wheel or build scandir
- TOXENV: "py27-pexpect"
- TOXENV: "py27-xdist"
- TOXENV: "py27-trial"
@ -27,7 +27,7 @@ environment:
- TOXENV: "py36-pluggymaster"
- TOXENV: "py27-nobyte"
- TOXENV: "doctesting"
- TOXENV: "py35-freeze"
- TOXENV: "py36-freeze"
- TOXENV: "docs"
install:
@ -42,3 +42,11 @@ build: false # Not a C# project, build stuff at the test step instead.
test_script:
- call scripts\call-tox.bat
cache:
- '%LOCALAPPDATA%\pip\cache'
- '%USERPROFILE%\.cache\pre-commit'
# We don't deploy anything on tags with AppVeyor, we use Travis instead, so we
# might as well save resources
skip_tags: true

View File

@ -1,12 +1,13 @@
import sys
if __name__ == '__main__':
if __name__ == "__main__":
import cProfile
import pytest
import pytest # NOQA
import pstats
script = sys.argv[1:] if len(sys.argv) > 1 else "empty.py"
stats = cProfile.run('pytest.cmdline.main(%r)' % script, 'prof')
stats = cProfile.run("pytest.cmdline.main(%r)" % script, "prof")
p = pstats.Stats("prof")
p.strip_dirs()
p.sort_stats('cumulative')
p.sort_stats("cumulative")
print(p.print_stats(500))

View File

@ -5,15 +5,18 @@
# FilesCompleter 75.1109 69.2116
# FastFilesCompleter 0.7383 1.0760
import timeit
if __name__ == '__main__':
import sys
import timeit
from argcomplete.completers import FilesCompleter
from _pytest._argcomplete import FastFilesCompleter
count = 1000 # only a few seconds
setup = 'from __main__ import FastFilesCompleter\nfc = FastFilesCompleter()'
run = 'fc("/d")'
sys.stdout.write('%s\n' % (timeit.timeit(run,
setup=setup.replace('Fast', ''), number=count)))
sys.stdout.write('%s\n' % (timeit.timeit(run, setup=setup, number=count)))
imports = [
"from argcomplete.completers import FilesCompleter as completer",
"from _pytest._argcomplete import FastFilesCompleter as completer",
]
count = 1000 # only a few seconds
setup = "%s\nfc = completer()"
run = 'fc("/d")'
if __name__ == "__main__":
print(timeit.timeit(run, setup=setup % imports[0], number=count))
print((timeit.timeit(run, setup=setup % imports[1], number=count)))

View File

@ -1,3 +1,4 @@
import py
for i in range(1000):
py.builtin.exec_("def test_func_%d(): pass" % i)

View File

@ -1,12 +1,15 @@
import pytest
@pytest.fixture(scope='module', params=range(966))
@pytest.fixture(scope="module", params=range(966))
def foo(request):
return request.param
def test_it(foo):
pass
def test_it2(foo):
pass

View File

@ -1,10 +1,11 @@
from six.moves import range
import pytest
SKIP = True
@pytest.mark.parametrize("x", xrange(5000))
@pytest.mark.parametrize("x", range(5000))
def test_foo(x):
if SKIP:
pytest.skip("heh")

View File

@ -0,0 +1,3 @@
In case a (direct) parameter of a test overrides some fixture upon which the
test depends indirectly, prune the fixture dependency tree accordingly; that
is, recompute the full set of fixtures the test function needs.
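A minimal sketch of the situation this covers (fixture names here are illustrative); the test
depends on ``config`` only indirectly, through ``client``::

    import pytest

    @pytest.fixture
    def config():
        return "default"

    @pytest.fixture
    def client(config):
        return "client-" + config

    # "config" is overridden by a direct parameter even though the test
    # depends on it only indirectly via "client"
    @pytest.mark.parametrize("config", ["fast"])
    def test_client(client):
        assert client == "client-fast"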

View File

@ -1 +0,0 @@
Now when ``@pytest.fixture`` is applied more than once to the same function a ``ValueError`` is raised. This buggy behavior would cause surprising problems and if was working for a test suite it was mostly by accident.

View File

@ -1 +0,0 @@
A rare race-condition which might result in corrupted ``.pyc`` files on Windows has been hopefully solved.

View File

@ -1 +0,0 @@
``pytest`` now depends on the `python-atomicwrites <https://github.com/untitaker/python-atomicwrites>`_ library.

View File

@ -1 +0,0 @@
Support for Python 3.7's builtin ``breakpoint()`` method, see `Using the builtin breakpoint function <https://docs.pytest.org/en/latest/usage.html#breakpoint-builtin>`_ for details.

View File

@ -1,2 +0,0 @@
``monkeypatch`` now supports a ``context()`` function which acts as a context manager which undoes all patching done
within the ``with`` block.

View File

@ -1,3 +0,0 @@
pytest not longer changes the log level of the root logger when the
``log-level`` parameter has greater numeric value than that of the level of
the root logger, which makes it play better with custom logging configuration in user code.

View File

@ -1 +0,0 @@
introduce correct per node mark handling and deprecate the always incorrect existing mark handling

View File

@ -0,0 +1 @@
Now a ``README.md`` file is created in ``.pytest_cache`` to make it clear why the directory exists.

View File

@ -0,0 +1 @@
``Node.add_marker`` now supports an ``append=True/False`` parameter to determine whether the mark comes last (default) or first.
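A minimal sketch, e.g. inside a hypothetical ``pytest_collection_modifyitems`` hook (the
``highpriority`` mark is illustrative)::

    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            # append=False puts the mark first instead of last (the default)
            item.add_marker(pytest.mark.highpriority, append=False)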

View File

@ -0,0 +1 @@
Fixture ``caplog`` now has a ``messages`` property, providing convenient access to the format-interpolated log messages without the extra data provided by the formatter/handler.
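A minimal sketch of the new property::

    import logging

    def test_messages(caplog):
        with caplog.at_level(logging.INFO):
            logging.getLogger().info("processed %d items", 3)
        # interpolated message text only, without level or logger metadata
        assert caplog.messages == ["processed 3 items"]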

View File

@ -0,0 +1 @@
Introduce ``pytester.copy_example`` as a helper to run acceptance tests against examples from the project.

View File

@ -1,9 +1,8 @@
<h3>Useful Links</h3>
<ul>
<li><a href="https://pypi.python.org/pypi/pytest">pytest @ PyPI</a></li>
<li><a href="https://pypi.org/project/pytest/">pytest @ PyPI</a></li>
<li><a href="https://github.com/pytest-dev/pytest/">pytest @ GitHub</a></li>
<li><a href="http://plugincompat.herokuapp.com/">3rd party plugins</a></li>
<li><a href="https://github.com/pytest-dev/pytest/issues">Issue Tracker</a></li>
<li><a href="https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf">PDF Documentation</a>
</ul>

View File

@ -1,7 +1,19 @@
# flasky extensions. flasky pygments style based on tango style
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Whitespace, Punctuation, Other, Literal
from pygments.token import (
Keyword,
Name,
Comment,
String,
Error,
Number,
Operator,
Generic,
Whitespace,
Punctuation,
Other,
Literal,
)
class FlaskyStyle(Style):
@ -10,77 +22,68 @@ class FlaskyStyle(Style):
styles = {
# No corresponding class for the following:
#Text: "", # class: ''
Whitespace: "underline #f8f8f8", # class: 'w'
Error: "#a40000 border:#ef2929", # class: 'err'
Other: "#000000", # class 'x'
Comment: "italic #8f5902", # class: 'c'
Comment.Preproc: "noitalic", # class: 'cp'
Keyword: "bold #004461", # class: 'k'
Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Declaration: "bold #004461", # class: 'kd'
Keyword.Namespace: "bold #004461", # class: 'kn'
Keyword.Pseudo: "bold #004461", # class: 'kp'
Keyword.Reserved: "bold #004461", # class: 'kr'
Keyword.Type: "bold #004461", # class: 'kt'
Operator: "#582800", # class: 'o'
Operator.Word: "bold #004461", # class: 'ow' - like keywords
Punctuation: "bold #000000", # class: 'p'
# Text: "", # class: ''
Whitespace: "underline #f8f8f8", # class: 'w'
Error: "#a40000 border:#ef2929", # class: 'err'
Other: "#000000", # class 'x'
Comment: "italic #8f5902", # class: 'c'
Comment.Preproc: "noitalic", # class: 'cp'
Keyword: "bold #004461", # class: 'k'
Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Declaration: "bold #004461", # class: 'kd'
Keyword.Namespace: "bold #004461", # class: 'kn'
Keyword.Pseudo: "bold #004461", # class: 'kp'
Keyword.Reserved: "bold #004461", # class: 'kr'
Keyword.Type: "bold #004461", # class: 'kt'
Operator: "#582800", # class: 'o'
Operator.Word: "bold #004461", # class: 'ow' - like keywords
Punctuation: "bold #000000", # class: 'p'
# because special names such as Name.Class, Name.Function, etc.
# are not recognized as such later in the parsing, we choose them
# to look the same as ordinary variables.
Name: "#000000", # class: 'n'
Name.Attribute: "#c4a000", # class: 'na' - to be revised
Name.Builtin: "#004461", # class: 'nb'
Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
Name.Class: "#000000", # class: 'nc' - to be revised
Name.Constant: "#000000", # class: 'no' - to be revised
Name.Decorator: "#888", # class: 'nd' - to be revised
Name.Entity: "#ce5c00", # class: 'ni'
Name.Exception: "bold #cc0000", # class: 'ne'
Name.Function: "#000000", # class: 'nf'
Name.Property: "#000000", # class: 'py'
Name.Label: "#f57900", # class: 'nl'
Name.Namespace: "#000000", # class: 'nn' - to be revised
Name.Other: "#000000", # class: 'nx'
Name.Tag: "bold #004461", # class: 'nt' - like a keyword
Name.Variable: "#000000", # class: 'nv' - to be revised
Name.Variable.Class: "#000000", # class: 'vc' - to be revised
Name.Variable.Global: "#000000", # class: 'vg' - to be revised
Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
Number: "#990000", # class: 'm'
Literal: "#000000", # class: 'l'
Literal.Date: "#000000", # class: 'ld'
String: "#4e9a06", # class: 's'
String.Backtick: "#4e9a06", # class: 'sb'
String.Char: "#4e9a06", # class: 'sc'
String.Doc: "italic #8f5902", # class: 'sd' - like a comment
String.Double: "#4e9a06", # class: 's2'
String.Escape: "#4e9a06", # class: 'se'
String.Heredoc: "#4e9a06", # class: 'sh'
String.Interpol: "#4e9a06", # class: 'si'
String.Other: "#4e9a06", # class: 'sx'
String.Regex: "#4e9a06", # class: 'sr'
String.Single: "#4e9a06", # class: 's1'
String.Symbol: "#4e9a06", # class: 'ss'
Generic: "#000000", # class: 'g'
Generic.Deleted: "#a40000", # class: 'gd'
Generic.Emph: "italic #000000", # class: 'ge'
Generic.Error: "#ef2929", # class: 'gr'
Generic.Heading: "bold #000080", # class: 'gh'
Generic.Inserted: "#00A000", # class: 'gi'
Generic.Output: "#888", # class: 'go'
Generic.Prompt: "#745334", # class: 'gp'
Generic.Strong: "bold #000000", # class: 'gs'
Generic.Subheading: "bold #800080", # class: 'gu'
Generic.Traceback: "bold #a40000", # class: 'gt'
Name: "#000000", # class: 'n'
Name.Attribute: "#c4a000", # class: 'na' - to be revised
Name.Builtin: "#004461", # class: 'nb'
Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
Name.Class: "#000000", # class: 'nc' - to be revised
Name.Constant: "#000000", # class: 'no' - to be revised
Name.Decorator: "#888", # class: 'nd' - to be revised
Name.Entity: "#ce5c00", # class: 'ni'
Name.Exception: "bold #cc0000", # class: 'ne'
Name.Function: "#000000", # class: 'nf'
Name.Property: "#000000", # class: 'py'
Name.Label: "#f57900", # class: 'nl'
Name.Namespace: "#000000", # class: 'nn' - to be revised
Name.Other: "#000000", # class: 'nx'
Name.Tag: "bold #004461", # class: 'nt' - like a keyword
Name.Variable: "#000000", # class: 'nv' - to be revised
Name.Variable.Class: "#000000", # class: 'vc' - to be revised
Name.Variable.Global: "#000000", # class: 'vg' - to be revised
Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
Number: "#990000", # class: 'm'
Literal: "#000000", # class: 'l'
Literal.Date: "#000000", # class: 'ld'
String: "#4e9a06", # class: 's'
String.Backtick: "#4e9a06", # class: 'sb'
String.Char: "#4e9a06", # class: 'sc'
String.Doc: "italic #8f5902", # class: 'sd' - like a comment
String.Double: "#4e9a06", # class: 's2'
String.Escape: "#4e9a06", # class: 'se'
String.Heredoc: "#4e9a06", # class: 'sh'
String.Interpol: "#4e9a06", # class: 'si'
String.Other: "#4e9a06", # class: 'sx'
String.Regex: "#4e9a06", # class: 'sr'
String.Single: "#4e9a06", # class: 's1'
String.Symbol: "#4e9a06", # class: 'ss'
Generic: "#000000", # class: 'g'
Generic.Deleted: "#a40000", # class: 'gd'
Generic.Emph: "italic #000000", # class: 'ge'
Generic.Error: "#ef2929", # class: 'gr'
Generic.Heading: "bold #000080", # class: 'gh'
Generic.Inserted: "#00A000", # class: 'gi'
Generic.Output: "#888", # class: 'go'
Generic.Prompt: "#745334", # class: 'gp'
Generic.Strong: "bold #000000", # class: 'gs'
Generic.Subheading: "bold #800080", # class: 'gu'
Generic.Traceback: "bold #a40000", # class: 'gt'
}

View File

@ -6,6 +6,10 @@ Release announcements
:maxdepth: 2
release-3.6.3
release-3.6.2
release-3.6.1
release-3.6.0
release-3.5.1
release-3.5.0
release-3.4.2

View File

@ -37,4 +37,3 @@ Changes between 2.0.2 and 2.0.3
internally)
- fix issue37: avoid invalid characters in junitxml's output

View File

@ -34,4 +34,3 @@ Changes between 2.1.0 and 2.1.1
- fix issue59: provide system-out/err tags for junitxml output
- fix issue61: assertion rewriting on boolean operations with 3 or more operands
- you can now build a man page with "cd doc ; make man"

View File

@ -30,4 +30,3 @@ Changes between 2.1.1 and 2.1.2
- fix issue68 / packages now work with assertion rewriting
- fix issue66: use different assertion rewriting caches when the -O option is passed
- don't try assertion rewriting on Jython, use reinterp

View File

@ -36,4 +36,3 @@ Changes between 2.2.3 and 2.2.4
configure/sessionstart where called
- fix issue #144: better mangle test ids to junitxml classnames
- upgrade distribute_setup.py to 0.6.27

View File

@ -131,4 +131,3 @@ Changes between 2.2.4 and 2.3.0
- don't show deselected reason line if there is none
- py.test -vv will show all of assert comparisons instead of truncating

View File

@ -59,4 +59,3 @@ Changes between 2.3.2 and 2.3.3
- fix issue127 - improve documentation for pytest_addoption() and
add a ``config.getoption(name)`` helper function for consistency.

View File

@ -18,7 +18,7 @@ comes with the following fixes and features:
rather use the post-2.0 parametrize features instead of yield, see:
http://pytest.org/latest/example/parametrize.html
- fix autouse-issue where autouse-fixtures would not be discovered
if defined in a a/conftest.py file and tests in a/tests/test_some.py
if defined in an a/conftest.py file and tests in a/tests/test_some.py
- fix issue226 - LIFO ordering for fixture teardowns
- fix issue224 - invocations with >256 char arguments now work
- fix issue91 - add/discuss package/directory level setups in example

View File

@ -14,7 +14,7 @@ few interesting new plugins saw the light last month:
And several others like pytest-django saw maintenance releases.
For a more complete list, check out
https://pypi.python.org/pypi?%3Aaction=search&term=pytest&submit=search.
https://pypi.org/search/?q=pytest
For general information see:
@ -94,4 +94,3 @@ Changes between 2.3.4 and 2.3.5
- fix issue134 - print the collect errors that prevent running specified test items
- fix issue266 - accept unicode in MarkEvaluator expressions

View File

@ -23,14 +23,14 @@ a full list of details. A few feature highlights:
called if the corresponding setup method succeeded.
- integrate tab-completion on command line options if you
have `argcomplete <http://pypi.python.org/pypi/argcomplete>`_
have `argcomplete <https://pypi.org/project/argcomplete/>`_
configured.
- allow boolean expression directly with skipif/xfail
if a "reason" is also specified.
- a new hook ``pytest_load_initial_conftests`` allows plugins like
`pytest-django <http://pypi.python.org/pypi/pytest-django>`_ to
`pytest-django <https://pypi.org/project/pytest-django/>`_ to
influence the environment before conftest files import ``django``.
- reporting: color the last line red or green depending if
@ -222,4 +222,3 @@ Bug fixes:
- pytest_terminal_summary(terminalreporter) hooks can now use
".section(title)" and ".line(msg)" methods to print extra
information at the end of a test run.

View File

@ -172,4 +172,3 @@ holger krekel
does not duplicate the unittest-API into the "plain" namespace.
- fix verbose reporting for @mock'd test functions

View File

@ -44,4 +44,3 @@ holger krekel
- fix issue407: fix addoption docstring to point to argparse instead of
optparse. Thanks Daniel D. Wright.

View File

@ -61,4 +61,3 @@ holger krekel
expressions now work better. Thanks Floris Bruynooghe.
- make capfd/capsys.capture private, its unused and shouldn't be exposed

View File

@ -52,8 +52,7 @@ Changes 2.6.1
"::" node id specifications (copy pasted from "-v" output)
- fix issue544 by only removing "@NUM" at the end of "::" separated parts
and if the part has an ".py" extension
and if the part has a ".py" extension
- don't use py.std import helper, rather import things directly.
Thanks Bruno Oliveira.

View File

@ -49,4 +49,3 @@ holger krekel
- Do not mark as universal wheel because Python 2.6 is different from
other builds due to the extra argparse dependency. Fixes issue566.
Thanks sontek.

View File

@ -49,4 +49,3 @@ Changes 2.6.3
- check xfail/skip also with non-python function test items. Thanks
Floris Bruynooghe.

View File

@ -98,4 +98,3 @@ holger krekel
- On failure, the ``sys.last_value``, ``sys.last_type`` and
``sys.last_traceback`` are set, so that a user can inspect the error
via postmortem debugging (almarklein).

View File

@ -55,4 +55,3 @@ The py.test Development Team
- fix issue756, fix issue752 (and similar issues): depend on py-1.4.29
which has a refined algorithm for traceback generation.

View File

@ -56,4 +56,3 @@ The py.test Development Team
- extend documentation on the --ignore cli option
- use pytest-runner for setuptools integration
- minor fixes for interaction with OS X El Capitan system integrity protection (thanks Florian)

View File

@ -0,0 +1,41 @@
pytest-3.6.0
=======================================
The pytest team is proud to announce the 3.6.0 release!
pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.
This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:
http://doc.pytest.org/en/latest/changelog.html
For complete documentation, please visit:
http://docs.pytest.org
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
* Anthony Shaw
* ApaDoctor
* Brian Maissy
* Bruno Oliveira
* Jon Dufresne
* Katerina Koukiou
* Miro Hrončok
* Rachel Kogan
* Ronny Pfannschmidt
* Tim Hughes
* Tyler Goodlet
* Ville Skyttä
* aviral1701
* feuillemorte
Happy testing,
The Pytest Development Team

View File

@ -0,0 +1,24 @@
pytest-3.6.1
=======================================
pytest 3.6.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Anthony Sottile
* Bruno Oliveira
* Jeffrey Rackauckas
* Miro Hrončok
* Niklas Meinzer
* Oliver Bestwalter
* Ronny Pfannschmidt
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,29 @@
pytest-3.6.2
=======================================
pytest 3.6.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Alan Velasco
* Alex Barbato
* Anthony Sottile
* Bartosz Cierocki
* Bruno Oliveira
* Daniel Hahler
* Guoqiang Zhang
* Hynek Schlawack
* John T. Wodder II
* Michael Käufl
* Ronny Pfannschmidt
* Samuel Dion-Girardeau
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,28 @@
pytest-3.6.3
=======================================
pytest 3.6.3 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* AdamEr8
* Anthony Sottile
* Bruno Oliveira
* Jean-Paul Calderone
* Jon Dufresne
* Marcelo Duarte Trevisani
* Ondřej Súkup
* Ronny Pfannschmidt
* T.E.A de Souza
* Victor
* victor
Happy testing,
The pytest Development Team

View File

@ -91,7 +91,7 @@ In the context manager form you may use the keyword argument
``message`` to specify a custom failure message::
>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
... pass
... pass
... Failed: Expecting ZeroDivisionError
If you want to write test code that works on Python 2.4 as well,

View File

@ -14,7 +14,7 @@ Install argcomplete using::
For global activation of all argcomplete enabled python applications run::
sudo activate-global-python-argcomplete
sudo activate-global-python-argcomplete
For permanent (but not global) ``pytest`` activation, use::
@ -23,6 +23,3 @@ For permanent (but not global) ``pytest`` activation, use::
For one-time activation of argcomplete for ``pytest`` only, use::
eval "$(register-python-argcomplete pytest)"

View File

@ -90,7 +90,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, value, raising=True)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
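For reference, a small sketch of the corrected ``delenv`` signature in use::

    def test_without_home(monkeypatch):
        # delenv takes the variable name and an optional raising flag, no value
        monkeypatch.delenv("HOME", raising=False)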
@ -120,4 +120,3 @@ You can also interactively ask for help, e.g. by typing on the Python interactiv
import pytest
help(pytest)

View File

@ -234,7 +234,7 @@ Inspecting Cache content
You can always peek at the content of the cache using the
``--cache-show`` command line option::
$ py.test --cache-show
$ pytest --cache-show
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
@ -260,5 +260,3 @@ by adding the ``--cache-clear`` option like this::
This is recommended for invocations from Continuous Integration
servers where isolation and correctness are more important
than speed.

View File

@ -92,7 +92,7 @@ an example test function that performs some output related checks:
.. code-block:: python
def test_myoutput(capsys): # or use "capfd" for fd-level
def test_myoutput(capsys): # or use "capfd" for fd-level
print("hello")
sys.stderr.write("world\n")
captured = capsys.readouterr()
@ -145,9 +145,9 @@ as a context manager, disabling capture inside the ``with`` block:
.. code-block:: python
def test_disabling_capturing(capsys):
print('this output is captured')
print("this output is captured")
with capsys.disabled():
print('output not captured, going directly to sys.stdout')
print('this output is also captured')
print("output not captured, going directly to sys.stdout")
print("this output is also captured")
.. include:: links.inc

View File

@ -1,17 +0,0 @@
import py
import subprocess
def test_build_docs(tmpdir):
doctrees = tmpdir.join("doctrees")
htmldir = tmpdir.join("html")
subprocess.check_call([
"sphinx-build", "-W", "-bhtml",
"-d", str(doctrees), ".", str(htmldir)])
def test_linkcheck(tmpdir):
doctrees = tmpdir.join("doctrees")
htmldir = tmpdir.join("html")
subprocess.check_call(
["sphinx-build", "-blinkcheck",
"-d", str(doctrees), ".", str(htmldir)])

View File

@ -20,6 +20,7 @@
import os
import sys
import datetime
from _pytest import __version__ as version
@ -28,7 +29,7 @@ release = ".".join(version.split(".")[:2])
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# sys.path.insert(0, os.path.abspath('.'))
autodoc_member_order = "bysource"
todo_include_todos = 1
@ -36,58 +37,68 @@ todo_include_todos = 1
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.autosummary',
'sphinx.ext.intersphinx', 'sphinx.ext.viewcode', 'sphinxcontrib_trio']
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.todo",
"sphinx.ext.autosummary",
"sphinx.ext.intersphinx",
"sphinx.ext.viewcode",
"sphinxcontrib_trio",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]
# The suffix of source filenames.
source_suffix = '.rst'
source_suffix = ".rst"
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'contents'
master_doc = "contents"
# General information about the project.
project = u'pytest'
copyright = u'2015, holger krekel and pytest-dev team'
project = u"pytest"
year = datetime.datetime.utcnow().year
copyright = u"2015{} , holger krekel and pytest-dev team".format(year)
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['links.inc', '_build', 'naming20.rst', 'test/*',
exclude_patterns = [
"links.inc",
"_build",
"naming20.rst",
"test/*",
"old_*",
'*attic*',
'*/attic*',
'funcargs.rst',
'setup.rst',
'example/remoteinterp.rst',
]
"*attic*",
"*/attic*",
"funcargs.rst",
"setup.rst",
"example/remoteinterp.rst",
]
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
@ -95,39 +106,36 @@ add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
sys.path.append(os.path.abspath('_themes'))
html_theme_path = ['_themes']
sys.path.append(os.path.abspath("_themes"))
html_theme_path = ["_themes"]
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'flask'
html_theme = "flask"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
'index_logo': None
}
html_theme_options = {"index_logo": None}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'pytest documentation'
html_title = "pytest documentation"
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = "pytest-%s" % release
@ -148,37 +156,37 @@ html_favicon = "img/pytest1favi.ico"
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
#html_sidebars = {'index': 'indexsidebar.html'}
# html_sidebars = {}
# html_sidebars = {'index': 'indexsidebar.html'}
html_sidebars = {
'index': [
'sidebarintro.html',
'globaltoc.html',
'links.html',
'sourcelink.html',
'searchbox.html'
"index": [
"sidebarintro.html",
"globaltoc.html",
"links.html",
"sourcelink.html",
"searchbox.html",
],
"**": [
"globaltoc.html",
"relations.html",
"links.html",
"sourcelink.html",
"searchbox.html",
],
'**': [
'globaltoc.html',
'relations.html',
'links.html',
'sourcelink.html',
'searchbox.html'
]
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
#html_additional_pages = {'index': 'index.html'}
# html_additional_pages = {}
# html_additional_pages = {'index': 'index.html'}
# If false, no module index is generated.
@ -188,63 +196,68 @@ html_domain_indices = True
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'pytestdoc'
htmlhelp_basename = "pytestdoc"
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('contents', 'pytest.tex', u'pytest Documentation',
u'holger krekel, trainer and consultant, http://merlinux.eu', 'manual'),
(
"contents",
"pytest.tex",
u"pytest Documentation",
u"holger krekel, trainer and consultant, http://merlinux.eu",
"manual",
)
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
latex_logo = 'img/pytest1.png'
latex_logo = "img/pytest1.png"
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# latex_show_urls = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# latex_appendices = []
# If false, no module index is generated.
latex_domain_indices = False
@ -253,72 +266,78 @@ latex_domain_indices = False
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('usage', 'pytest', u'pytest usage',
[u'holger krekel at merlinux eu'], 1)
]
man_pages = [("usage", "pytest", u"pytest usage", [u"holger krekel at merlinux eu"], 1)]
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = u'pytest'
epub_author = u'holger krekel at merlinux eu'
epub_publisher = u'holger krekel at merlinux eu'
epub_copyright = u'2013, holger krekel et alii'
epub_title = u"pytest"
epub_author = u"holger krekel at merlinux eu"
epub_publisher = u"holger krekel at merlinux eu"
epub_copyright = u"2013, holger krekel et alii"
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#epub_identifier = ''
# epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# epub_uid = ''
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# epub_post_files = []
# A list of files that should not be packed into the epub file.
#epub_exclude_files = []
# epub_exclude_files = []
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True
# epub_tocdup = True
# -- Options for texinfo output ------------------------------------------------
texinfo_documents = [
(master_doc, 'pytest', 'pytest Documentation',
('Holger Krekel@*Benjamin Peterson@*Ronny Pfannschmidt@*'
'Floris Bruynooghe@*others'),
'pytest',
'simple powerful testing with Python',
'Programming',
1),
(
master_doc,
"pytest",
"pytest Documentation",
(
"Holger Krekel@*Benjamin Peterson@*Ronny Pfannschmidt@*"
"Floris Bruynooghe@*others"
),
"pytest",
"simple powerful testing with Python",
"Programming",
1,
)
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'python': ('http://docs.python.org/3', None)}
intersphinx_mapping = {"python": ("http://docs.python.org/3", None)}
def setup(app):
#from sphinx.ext.autodoc import cut_lines
#app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
app.add_description_unit('confval', 'confval',
objname='configuration value',
indextemplate='pair: %s; configuration value')
# from sphinx.ext.autodoc import cut_lines
# app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
app.add_description_unit(
"confval",
"confval",
objname="configuration value",
indextemplate="pair: %s; configuration value",
)

View File

@ -47,4 +47,3 @@ Contact channels
.. _`development mailing list`:
.. _`pytest-dev at python.org (mailing list)`: http://mail.python.org/mailman/listinfo/pytest-dev
.. _`pytest-commit at python.org (mailing list)`: http://mail.python.org/mailman/listinfo/pytest-commit

View File

@ -15,12 +15,12 @@ Full pytest documentation
existingtestsuite
assert
fixture
mark
monkeypatch
tmpdir
capture
warnings
doctest
mark
skipping
parametrize
cache
@ -62,4 +62,3 @@ Full pytest documentation
:maxdepth: 1
changelog

View File

@ -10,7 +10,7 @@ Code Style
----------
* `PEP-8 <https://www.python.org/dev/peps/pep-0008>`_
* `flake8 <https://pypi.python.org/pypi/flake8>`_ for quality checks
* `flake8 <https://pypi.org/project/flake8/>`_ for quality checks
* `invoke <http://www.pyinvoke.org/>`_ to automate development tasks

View File

@ -168,5 +168,3 @@ by using one of standard doctest modules format in options
pytest --doctest-modules --doctest-report cdiff
pytest --doctest-modules --doctest-report ndiff
pytest --doctest-modules --doctest-report only_first_failure

View File

@ -2,102 +2,109 @@ from pytest import raises
import _pytest._code
import py
def otherfunc(a,b):
assert a==b
def somefunc(x,y):
otherfunc(x,y)
def otherfunc(a, b):
assert a == b
def somefunc(x, y):
otherfunc(x, y)
def otherfunc_multi(a, b):
assert a == b
def otherfunc_multi(a,b):
assert (a ==
b)
def test_generative(param1, param2):
assert param1 * 2 < param2
def pytest_generate_tests(metafunc):
if 'param1' in metafunc.fixturenames:
if "param1" in metafunc.fixturenames:
metafunc.addcall(funcargs=dict(param1=3, param2=6))
class TestFailing(object):
def test_simple(self):
def f():
return 42
def g():
return 43
assert f() == g()
def test_simple_multiline(self):
otherfunc_multi(
42,
6*9)
otherfunc_multi(42, 6 * 9)
def test_not(self):
def f():
return 42
assert not f()
class TestSpecialisedExplanations(object):
def test_eq_text(self):
assert 'spam' == 'eggs'
assert "spam" == "eggs"
def test_eq_similar_text(self):
assert 'foo 1 bar' == 'foo 2 bar'
assert "foo 1 bar" == "foo 2 bar"
def test_eq_multiline_text(self):
assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
assert "foo\nspam\nbar" == "foo\neggs\nbar"
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
a = "1" * 100 + "a" + "2" * 100
b = "1" * 100 + "b" + "2" * 100
assert a == b
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
a = "1\n" * 100 + "a" + "2\n" * 100
b = "1\n" * 100 + "b" + "2\n" * 100
assert a == b
def test_eq_list(self):
assert [0, 1, 2] == [0, 1, 3]
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
b = [0]*100 + [2] + [3]*100
a = [0] * 100 + [1] + [3] * 100
b = [0] * 100 + [2] + [3] * 100
assert a == b
def test_eq_dict(self):
assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
assert {"a": 0, "b": 1, "c": 0} == {"a": 0, "b": 2, "d": 0}
def test_eq_set(self):
assert set([0, 10, 11, 12]) == set([0, 20, 21])
assert {0, 10, 11, 12} == {0, 20, 21}
def test_eq_longer_list(self):
assert [1,2] == [1,2,3]
assert [1, 2] == [1, 2, 3]
def test_in_list(self):
assert 1 in [0, 2, 3, 4, 5]
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
assert 'foo' not in text
text = "some multiline\ntext\nwhich\nincludes foo\nand a\ntail"
assert "foo" not in text
def test_not_in_text_single(self):
text = 'single foo line'
assert 'foo' not in text
text = "single foo line"
assert "foo" not in text
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
assert 'foo' not in text
text = "head " * 50 + "foo " + "tail " * 20
assert "foo" not in text
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
assert 'f'*70 not in text
text = "head " * 50 + "f" * 70 + "tail " * 20
assert "f" * 70 not in text
def test_attribute():
class Foo(object):
b = 1
i = Foo()
assert i.b == 2
@ -105,14 +112,17 @@ def test_attribute():
def test_attribute_instance():
class Foo(object):
b = 1
assert Foo().b == 2
def test_attribute_failure():
class Foo(object):
def _get_b(self):
raise Exception('Failed to get attrib')
raise Exception("Failed to get attrib")
b = property(_get_b)
i = Foo()
assert i.b == 2
@ -120,17 +130,20 @@ def test_attribute_failure():
def test_attribute_multiple():
class Foo(object):
b = 1
class Bar(object):
b = 2
assert Foo().b == Bar().b
def globf(x):
return x+1
return x + 1
class TestRaises(object):
def test_raises(self):
s = 'qwe'
s = "qwe" # NOQA
raises(TypeError, "int(s)")
def test_raises_doesnt(self):
@ -140,15 +153,15 @@ class TestRaises(object):
raise ValueError("demo error")
def test_tupleerror(self):
a,b = [1]
a, b = [1] # NOQA
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
print ("l is %r" % l)
a,b = l.pop()
items = [1, 2, 3]
print("items is %r" % items)
a, b = items.pop()
def test_some_error(self):
if namenotexi:
if namenotexi: # NOQA
pass
def func1(self):
@ -159,31 +172,33 @@ class TestRaises(object):
def test_dynamic_compile_shows_nicely():
import imp
import sys
src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123'
src = "def foo():\n assert 1 == 0\n"
name = "abc-123"
module = imp.new_module(name)
code = _pytest._code.compile(src, name, 'exec')
code = _pytest._code.compile(src, name, "exec")
py.builtin.exec_(code, module.__dict__)
sys.modules[name] = module
module.foo()
class TestMoreErrors(object):
def test_complex_error(self):
def f():
return 44
def g():
return 43
somefunc(f(), g())
def test_z1_unpack_error(self):
l = []
a,b = l
items = []
a, b = items
def test_z2_type_error(self):
l = 3
a,b = l
items = 3
a, b = items
def test_startswith(self):
s = "123"
@ -193,15 +208,17 @@ class TestMoreErrors(object):
def test_startswith_nested(self):
def f():
return "123"
def g():
return "456"
assert f().startswith(g())
def test_global_func(self):
assert isinstance(globf(42), float)
def test_instance(self):
self.x = 6*7
self.x = 6 * 7
assert self.x != 42
def test_compare(self):
@ -216,25 +233,29 @@ class TestMoreErrors(object):
class TestCustomAssertMsg(object):
def test_single_line(self):
class A(object):
a = 1
b = 2
assert A.a == b, "A.a appears not to be b"
def test_multiline(self):
class A(object):
a = 1
b = 2
assert A.a == b, "A.a appears not to be b\n" \
"or does not appear to be b\none of those"
assert (
A.a == b
), "A.a appears not to be b\n" "or does not appear to be b\none of those"
def test_custom_repr(self):
class JSON(object):
a = 1
def __repr__(self):
return "This is JSON\n{\n 'foo': 'bar'\n}"
a = JSON()
b = 2
assert a.a == b, a

View File

@ -1,10 +1,13 @@
import pytest, py
import pytest
import py
mydir = py.path.local(__file__).dirpath()
def pytest_runtest_setup(item):
if isinstance(item, pytest.Function):
if not item.fspath.relto(mydir):
return
mod = item.getparent(pytest.Module).obj
if hasattr(mod, 'hello'):
print ("mod.hello %r" % (mod.hello,))
if hasattr(mod, "hello"):
print("mod.hello %r" % (mod.hello,))

View File

@ -1,5 +1,6 @@
hello = "world"
def test_func():
pass

View File

@ -1,14 +1,14 @@
import py
failure_demo = py.path.local(__file__).dirpath('failure_demo.py')
pytest_plugins = 'pytester',
failure_demo = py.path.local(__file__).dirpath("failure_demo.py")
pytest_plugins = ("pytester",)
def test_failure_demo_fails_properly(testdir):
target = testdir.tmpdir.join(failure_demo.basename)
failure_demo.copy(target)
failure_demo.copy(testdir.tmpdir.join(failure_demo.basename))
result = testdir.runpytest(target, syspathinsert=True)
result.stdout.fnmatch_lines([
"*42 failed*"
])
result.stdout.fnmatch_lines(["*42 failed*"])
assert result.ret != 0

View File

@ -1,6 +1,7 @@
def setup_module(module):
module.TestStateFullThing.classcount = 0
class TestStateFullThing(object):
def setup_class(cls):
cls.classcount += 1
@ -19,9 +20,11 @@ class TestStateFullThing(object):
assert self.classcount == 1
assert self.id == 23
def teardown_module(module):
assert module.TestStateFullThing.classcount == 0
""" For this example the control flow happens as follows::
import test_setup_flow_example
setup_module(test_setup_flow_example)
@ -39,4 +42,3 @@ Note that ``setup_class(TestStateFullThing)`` is called and not
to insert ``setup_class = classmethod(setup_class)`` to make
your setup function callable.
"""

View File

@ -9,15 +9,18 @@ example: specifying and selecting acceptance tests
# ./conftest.py
def pytest_option(parser):
group = parser.getgroup("myproject")
group.addoption("-A", dest="acceptance", action="store_true",
help="run (slow) acceptance tests")
group.addoption(
"-A", dest="acceptance", action="store_true", help="run (slow) acceptance tests"
)
def pytest_funcarg__accept(request):
return AcceptFixture(request)
class AcceptFixture(object):
def __init__(self, request):
if not request.config.getoption('acceptance'):
if not request.config.getoption("acceptance"):
pytest.skip("specify -A to run acceptance tests")
self.tmpdir = request.config.mktemp(request.function.__name__, numbered=True)
@ -61,6 +64,7 @@ extend the `accept example`_ by putting this in our test module:
arg.tmpdir.mkdir("special")
return arg
class TestSpecialAcceptance(object):
def test_sometest(self, accept):
assert accept.tmpdir.join("special").check()

View File

@ -1,16 +1,19 @@
import pytest
@pytest.fixture("session")
def setup(request):
setup = CostlySetup()
yield setup
setup.finalize()
class CostlySetup(object):
def __init__(self):
import time
print ("performing costly setup")
print("performing costly setup")
time.sleep(5)
self.timecostly = 1

View File

@ -1,3 +1,2 @@
def test_quick(setup):
pass

View File

@ -1,6 +1,6 @@
def test_something(setup):
assert setup.timecostly == 1
def test_something_more(setup):
assert setup.timecostly == 1

View File

@ -330,7 +330,7 @@ specifies via named environments::
"env(name): mark test to run only on named environment")
def pytest_runtest_setup(item):
envnames = [mark.args[0] for mark in item.iter_markers() if mark.name == "env"]
envnames = [mark.args[0] for mark in item.iter_markers(name='env')]
if envnames:
if item.config.getoption("-E") not in envnames:
pytest.skip("test requires env in %r" % envnames)
@ -402,10 +402,9 @@ Below is the config file that will be used in the next examples::
import sys
def pytest_runtest_setup(item):
for marker in item.iter_markers():
if marker.name == 'my_marker':
print(marker)
sys.stdout.flush()
for marker in item.iter_markers(name='my_marker'):
print(marker)
sys.stdout.flush()
A custom marker can have its argument set, i.e. ``args`` and ``kwargs`` properties, defined by either invoking it as a callable or using ``pytest.mark.MARKER_NAME.with_args``. These two methods achieve the same effect most of the time.
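For illustration, a minimal sketch of the two spellings (the ``env_marker`` name below is made up for this example; both decorators attach a mark with ``args=("stage",)`` and ``kwargs={"retries": 2}``)::

    import pytest

    @pytest.mark.env_marker("stage", retries=2)        # invoked as a callable
    def test_invoked_as_callable():
        pass

    @pytest.mark.env_marker.with_args("stage", retries=2)   # same mark via with_args
    def test_via_with_args():
        pass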
@ -458,10 +457,9 @@ test function. From a conftest file we can read it like this::
import sys
def pytest_runtest_setup(item):
for mark in item.iter_markers():
if mark.name == 'glob':
print ("glob args=%s kwargs=%s" %(mark.args, mark.kwargs))
sys.stdout.flush()
for mark in item.iter_markers(name='glob'):
print ("glob args=%s kwargs=%s" %(mark.args, mark.kwargs))
sys.stdout.flush()
Let's run this without capturing output and see what we get::

View File

@ -6,35 +6,47 @@ import py
import pytest
import _pytest._code
pythonlist = ['python2.7', 'python3.4', 'python3.5']
pythonlist = ["python2.7", "python3.4", "python3.5"]
@pytest.fixture(params=pythonlist)
def python1(request, tmpdir):
picklefile = tmpdir.join("data.pickle")
return Python(request.param, picklefile)
@pytest.fixture(params=pythonlist)
def python2(request, python1):
return Python(request.param, python1.picklefile)
class Python(object):
def __init__(self, version, picklefile):
self.pythonpath = py.path.local.sysfind(version)
if not self.pythonpath:
pytest.skip("%r not found" %(version,))
pytest.skip("%r not found" % (version,))
self.picklefile = picklefile
def dumps(self, obj):
dumpfile = self.picklefile.dirpath("dump.py")
dumpfile.write(_pytest._code.Source("""
dumpfile.write(
_pytest._code.Source(
"""
import pickle
f = open(%r, 'wb')
s = pickle.dump(%r, f, protocol=2)
f.close()
""" % (str(self.picklefile), obj)))
py.process.cmdexec("%s %s" %(self.pythonpath, dumpfile))
"""
% (str(self.picklefile), obj)
)
)
py.process.cmdexec("%s %s" % (self.pythonpath, dumpfile))
def load_and_is_true(self, expression):
loadfile = self.picklefile.dirpath("load.py")
loadfile.write(_pytest._code.Source("""
loadfile.write(
_pytest._code.Source(
"""
import pickle
f = open(%r, 'rb')
obj = pickle.load(f)
@ -42,11 +54,15 @@ class Python(object):
res = eval(%r)
if not res:
raise SystemExit(1)
""" % (str(self.picklefile), expression)))
print (loadfile)
py.process.cmdexec("%s %s" %(self.pythonpath, loadfile))
"""
% (str(self.picklefile), expression)
)
)
print(loadfile)
py.process.cmdexec("%s %s" % (self.pythonpath, loadfile))
@pytest.mark.parametrize("obj", [42, {}, {1:3},])
@pytest.mark.parametrize("obj", [42, {}, {1: 3}])
def test_basic_objects(python1, python2, obj):
python1.dumps(obj)
python2.load_and_is_true("obj == %s" % obj)

View File

@ -10,7 +10,7 @@ A basic example for specifying tests in Yaml files
--------------------------------------------------------------
.. _`pytest-yamlwsgi`: http://bitbucket.org/aafshar/pytest-yamlwsgi/src/tip/pytest_yamlwsgi.py
.. _`PyYAML`: http://pypi.python.org/pypi/PyYAML/
.. _`PyYAML`: https://pypi.org/project/PyYAML/
Here is an example ``conftest.py`` (extracted from Ali Afshar's special purpose `pytest-yamlwsgi`_ plugin). This ``conftest.py`` will collect ``test*.yml`` files and will execute the yaml-formatted content as custom tests:

View File

@ -2,17 +2,21 @@
import pytest
def pytest_collect_file(parent, path):
if path.ext == ".yml" and path.basename.startswith("test"):
return YamlFile(path, parent)
class YamlFile(pytest.File):
def collect(self):
import yaml # we need a yaml parser, e.g. PyYAML
import yaml # we need a yaml parser, e.g. PyYAML
raw = yaml.safe_load(self.fspath.open())
for name, spec in sorted(raw.items()):
yield YamlItem(name, self, spec)
class YamlItem(pytest.Item):
def __init__(self, name, parent, spec):
super(YamlItem, self).__init__(name, parent)
@ -27,14 +31,17 @@ class YamlItem(pytest.Item):
def repr_failure(self, excinfo):
""" called when self.runtest() raises an exception. """
if isinstance(excinfo.value, YamlException):
return "\n".join([
"usecase execution failed",
" spec failed: %r: %r" % excinfo.value.args[1:3],
" no further details known at this point."
])
return "\n".join(
[
"usecase execution failed",
" spec failed: %r: %r" % excinfo.value.args[1:3],
" no further details known at this point.",
]
)
def reportinfo(self):
return self.fspath, 0, "usecase: %s" % self.name
class YamlException(Exception):
""" custom exception for error reporting. """

View File

@ -160,7 +160,7 @@ together with the actual data, instead of listing them separately.
A quick port of "testscenarios"
------------------------------------
.. _`test scenarios`: http://pypi.python.org/pypi/testscenarios/
.. _`test scenarios`: https://pypi.org/project/testscenarios/
Here is a quick port to run tests configured with `test scenarios`_,
an add-on from Robert Collins for the standard unittest framework. We
@ -469,7 +469,7 @@ If you run this with reporting for skips enabled::
=================== 1 passed, 1 skipped in 0.12 seconds ====================
You'll see that we don't have a ``opt2`` module and thus the second test run
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
- the fixture functions in the ``conftest.py`` file are "session-scoped" because we

View File

@ -3,14 +3,13 @@ import pytest
py3 = sys.version_info[0] >= 3
class DummyCollector(pytest.collect.File):
def collect(self):
return []
def pytest_pycollect_makemodule(path, parent):
bn = path.basename
if "py3" in bn and not py3 or ("py2" in bn and py3):
return DummyCollector(path, parent=parent)

View File

@ -4,4 +4,3 @@ def test_exception_syntax():
0/0
except ZeroDivisionError, e:
pass

View File

@ -1,7 +1,5 @@
def test_exception_syntax():
try:
0/0
0 / 0
except ZeroDivisionError as e:
pass

View File

@ -1,11 +1,14 @@
# run this with $ pytest --collect-only test_collectonly.py
#
def test_function():
pass
class TestClass(object):
def test_method(self):
pass
def test_anothermethod(self):
pass

View File

@ -54,7 +54,7 @@ Keeping duplicate paths specified from command line
Default behavior of ``pytest`` is to ignore duplicate paths specified from the command line.
Example::
py.test path_a path_a
pytest path_a path_a
...
collected 1 item
@ -65,7 +65,7 @@ Just collect tests once.
To collect duplicate tests, use the ``--keep-duplicates`` option on the cli.
Example::
py.test --keep-duplicates path_a path_a
pytest --keep-duplicates path_a path_a
...
collected 2 items
@ -75,7 +75,7 @@ As the collector just works on directories, if you specify twice a single test f
still collect it twice, no matter if the ``--keep-duplicates`` is not specified.
Example::
py.test test_a.py test_a.py
pytest test_a.py test_a.py
...
collected 2 items

View File

@ -26,7 +26,7 @@ get on the terminal - we are working on that)::
> assert param1 * 2 < param2
E assert (3 * 2) < 6
failure_demo.py:16: AssertionError
failure_demo.py:19: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
@ -34,6 +34,7 @@ get on the terminal - we are working on that)::
def test_simple(self):
def f():
return 42
def g():
return 43
@ -42,27 +43,24 @@ get on the terminal - we are working on that)::
E + where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()
failure_demo.py:29: AssertionError
failure_demo.py:35: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
def test_simple_multiline(self):
otherfunc_multi(
42,
> 6*9)
> otherfunc_multi(42, 6 * 9)
failure_demo.py:34:
failure_demo.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
def otherfunc_multi(a,b):
> assert (a ==
b)
def otherfunc_multi(a, b):
> assert a == b
E assert 42 == 54
failure_demo.py:12: AssertionError
failure_demo.py:15: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
@ -70,55 +68,56 @@ get on the terminal - we are working on that)::
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()
failure_demo.py:39: AssertionError
failure_demo.py:44: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_text(self):
> assert 'spam' == 'eggs'
> assert "spam" == "eggs"
E AssertionError: assert 'spam' == 'eggs'
E - spam
E + eggs
failure_demo.py:43: AssertionError
failure_demo.py:49: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
> assert "foo 1 bar" == "foo 2 bar"
E AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
E - foo 1 bar
E ? ^
E + foo 2 bar
E ? ^
failure_demo.py:46: AssertionError
failure_demo.py:52: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
> assert "foo\nspam\nbar" == "foo\neggs\nbar"
E AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E foo
E - spam
E + eggs
E bar
failure_demo.py:49: AssertionError
failure_demo.py:55: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
a = "1" * 100 + "a" + "2" * 100
b = "1" * 100 + "b" + "2" * 100
> assert a == b
E AssertionError: assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E Skipping 90 identical leading characters in diff, use -v to show
@ -128,14 +127,14 @@ get on the terminal - we are working on that)::
E + 1111111111b222222222
E ? ^
failure_demo.py:54: AssertionError
failure_demo.py:60: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
a = "1\n" * 100 + "a" + "2\n" * 100
b = "1\n" * 100 + "b" + "2\n" * 100
> assert a == b
E AssertionError: assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E Skipping 190 identical leading characters in diff, use -v to show
@ -148,7 +147,7 @@ get on the terminal - we are working on that)::
E
E ...Full output truncated (7 lines hidden), use '-vv' to show
failure_demo.py:59: AssertionError
failure_demo.py:65: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@ -159,26 +158,26 @@ get on the terminal - we are working on that)::
E At index 2 diff: 2 != 3
E Use -v to get the full diff
failure_demo.py:62: AssertionError
failure_demo.py:68: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
b = [0]*100 + [2] + [3]*100
a = [0] * 100 + [1] + [3] * 100
b = [0] * 100 + [2] + [3] * 100
> assert a == b
E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E At index 100 diff: 1 != 2
E Use -v to get the full diff
failure_demo.py:67: AssertionError
failure_demo.py:73: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
> assert {"a": 0, "b": 1, "c": 0} == {"a": 0, "b": 2, "d": 0}
E AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E Omitting 1 identical items, use -vv to show
E Differing items:
@ -190,13 +189,13 @@ get on the terminal - we are working on that)::
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:70: AssertionError
failure_demo.py:76: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
> assert {0, 10, 11, 12} == {0, 20, 21}
E AssertionError: assert {0, 10, 11, 12} == {0, 20, 21}
E Extra items in the left set:
E 10
@ -208,18 +207,18 @@ get on the terminal - we are working on that)::
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:73: AssertionError
failure_demo.py:79: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
> assert [1, 2] == [1, 2, 3]
E assert [1, 2] == [1, 2, 3]
E Right contains more items, first extra item: 3
E Use -v to get the full diff
failure_demo.py:76: AssertionError
failure_demo.py:82: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@ -228,14 +227,14 @@ get on the terminal - we are working on that)::
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
failure_demo.py:79: AssertionError
failure_demo.py:85: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
> assert 'foo' not in text
text = "some multiline\ntext\nwhich\nincludes foo\nand a\ntail"
> assert "foo" not in text
E AssertionError: assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E 'foo' is contained here:
E some multiline
@ -247,95 +246,101 @@ get on the terminal - we are working on that)::
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:83: AssertionError
failure_demo.py:89: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_single(self):
text = 'single foo line'
> assert 'foo' not in text
text = "single foo line"
> assert "foo" not in text
E AssertionError: assert 'foo' not in 'single foo line'
E 'foo' is contained here:
E single foo line
E ? +++
failure_demo.py:87: AssertionError
failure_demo.py:93: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
> assert 'foo' not in text
text = "head " * 50 + "foo " + "tail " * 20
> assert "foo" not in text
E AssertionError: assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E 'foo' is contained here:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
failure_demo.py:91: AssertionError
failure_demo.py:97: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
> assert 'f'*70 not in text
text = "head " * 50 + "f" * 70 + "tail " * 20
> assert "f" * 70 not in text
E AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:95: AssertionError
failure_demo.py:101: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
class Foo(object):
b = 1
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
failure_demo.py:102: AssertionError
failure_demo.py:109: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
class Foo(object):
b = 1
> assert Foo().b == 2
E AssertionError: assert 1 == 2
E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
failure_demo.py:108: AssertionError
failure_demo.py:116: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
class Foo(object):
def _get_b(self):
raise Exception('Failed to get attrib')
raise Exception("Failed to get attrib")
b = property(_get_b)
i = Foo()
> assert i.b == 2
failure_demo.py:117:
failure_demo.py:127:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>
def _get_b(self):
> raise Exception('Failed to get attrib')
> raise Exception("Failed to get attrib")
E Exception: Failed to get attrib
failure_demo.py:114: Exception
failure_demo.py:122: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
class Foo(object):
b = 1
class Bar(object):
b = 2
> assert Foo().b == Bar().b
E AssertionError: assert 1 == 2
E + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef>.b
@ -343,22 +348,22 @@ get on the terminal - we are working on that)::
E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
failure_demo.py:125: AssertionError
failure_demo.py:137: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_raises(self):
s = 'qwe'
s = "qwe" # NOQA
> raises(TypeError, "int(s)")
failure_demo.py:134:
failure_demo.py:147:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:615>:1: ValueError
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:635>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@ -367,7 +372,7 @@ get on the terminal - we are working on that)::
> raises(IOError, "int('3')")
E Failed: DID NOT RAISE <class 'OSError'>
failure_demo.py:137: Failed
failure_demo.py:150: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@ -376,59 +381,60 @@ get on the terminal - we are working on that)::
> raise ValueError("demo error")
E ValueError: demo error
failure_demo.py:140: ValueError
failure_demo.py:153: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_tupleerror(self):
> a,b = [1]
> a, b = [1] # NOQA
E ValueError: not enough values to unpack (expected 2, got 1)
failure_demo.py:143: ValueError
failure_demo.py:156: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
print ("l is %r" % l)
> a,b = l.pop()
items = [1, 2, 3]
print("items is %r" % items)
> a, b = items.pop()
E TypeError: 'int' object is not iterable
failure_demo.py:148: TypeError
failure_demo.py:161: TypeError
--------------------------- Captured stdout call ---------------------------
l is [1, 2, 3]
items is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_some_error(self):
> if namenotexi:
> if namenotexi: # NOQA
E NameError: name 'namenotexi' is not defined
failure_demo.py:151: NameError
failure_demo.py:164: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
import imp
import sys
src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123'
src = "def foo():\n assert 1 == 0\n"
name = "abc-123"
module = imp.new_module(name)
code = _pytest._code.compile(src, name, 'exec')
code = _pytest._code.compile(src, name, "exec")
py.builtin.exec_(code, module.__dict__)
sys.modules[name] = module
> module.foo()
failure_demo.py:168:
failure_demo.py:182:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E AssertionError
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:165>:2: AssertionError
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:179>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -436,43 +442,45 @@ get on the terminal - we are working on that)::
def test_complex_error(self):
def f():
return 44
def g():
return 43
> somefunc(f(), g())
failure_demo.py:178:
failure_demo.py:193:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:9: in somefunc
otherfunc(x,y)
failure_demo.py:11: in somefunc
otherfunc(x, y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
def otherfunc(a,b):
> assert a==b
def otherfunc(a, b):
> assert a == b
E assert 44 == 43
failure_demo.py:6: AssertionError
failure_demo.py:7: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_z1_unpack_error(self):
l = []
> a,b = l
items = []
> a, b = items
E ValueError: not enough values to unpack (expected 2, got 0)
failure_demo.py:182: ValueError
failure_demo.py:197: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_z2_type_error(self):
l = 3
> a,b = l
items = 3
> a, b = items
E TypeError: 'int' object is not iterable
failure_demo.py:186: TypeError
failure_demo.py:201: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -485,7 +493,7 @@ get on the terminal - we are working on that)::
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
failure_demo.py:191: AssertionError
failure_demo.py:206: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -493,8 +501,10 @@ get on the terminal - we are working on that)::
def test_startswith_nested(self):
def f():
return "123"
def g():
return "456"
> assert f().startswith(g())
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
@ -502,7 +512,7 @@ get on the terminal - we are working on that)::
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
failure_demo.py:198: AssertionError
failure_demo.py:215: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -513,18 +523,18 @@ get on the terminal - we are working on that)::
E + where False = isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:201: AssertionError
failure_demo.py:218: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_instance(self):
self.x = 6*7
self.x = 6 * 7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
failure_demo.py:205: AssertionError
failure_demo.py:222: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -534,7 +544,7 @@ get on the terminal - we are working on that)::
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:208: AssertionError
failure_demo.py:225: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -545,7 +555,7 @@ get on the terminal - we are working on that)::
> assert x == 0
E assert 1 == 0
failure_demo.py:213: AssertionError
failure_demo.py:230: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -553,13 +563,14 @@ get on the terminal - we are working on that)::
def test_single_line(self):
class A(object):
a = 1
b = 2
> assert A.a == b, "A.a appears not to be b"
E AssertionError: A.a appears not to be b
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
failure_demo.py:224: AssertionError
failure_demo.py:241: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -567,16 +578,18 @@ get on the terminal - we are working on that)::
def test_multiline(self):
class A(object):
a = 1
b = 2
> assert A.a == b, "A.a appears not to be b\n" \
"or does not appear to be b\none of those"
> assert (
A.a == b
), "A.a appears not to be b\n" "or does not appear to be b\none of those"
E AssertionError: A.a appears not to be b
E or does not appear to be b
E one of those
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
failure_demo.py:230: AssertionError
failure_demo.py:248: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -584,8 +597,10 @@ get on the terminal - we are working on that)::
def test_custom_repr(self):
class JSON(object):
a = 1
def __repr__(self):
return "This is JSON\n{\n 'foo': 'bar'\n}"
a = JSON()
b = 2
> assert a.a == b, a
@ -596,9 +611,9 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:240: AssertionError
failure_demo.py:261: AssertionError
============================= warnings summary =============================
None
<undetermined location>
Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
Please use Metafunc.parametrize instead.

View File

@ -18,10 +18,10 @@ Here is a basic pattern to achieve this:
# content of test_sample.py
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
print("first")
elif cmdopt == "type2":
print ("second")
assert 0 # to see what was printed
print("second")
assert 0 # to see what was printed
For this to work we need to add a command line option and
@ -32,9 +32,12 @@ provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("--cmdopt", action="store", default="type1",
help="my option: type1 or type2")
parser.addoption(
"--cmdopt", action="store", default="type1", help="my option: type1 or type2"
)
@pytest.fixture
def cmdopt(request):
@ -51,10 +54,10 @@ Let's run this without supplying our new option::
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
print("first")
elif cmdopt == "type2":
print ("second")
> assert 0 # to see what was printed
print("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
@ -73,10 +76,10 @@ And now with supplying a command line option::
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
print("first")
elif cmdopt == "type2":
print ("second")
> assert 0 # to see what was printed
print("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
@ -102,13 +105,16 @@ the command line arguments before they get processed:
# content of conftest.py
import sys
def pytest_load_initial_conftests(args):
if 'xdist' in sys.modules: # pytest-xdist plugin
if "xdist" in sys.modules: # pytest-xdist plugin
import multiprocessing
num = max(multiprocessing.cpu_count() / 2, 1)
args[:] = ["-n", str(num)] + args
If you have the `xdist plugin <https://pypi.python.org/pypi/pytest-xdist>`_ installed
If you have the `xdist plugin <https://pypi.org/project/pytest-xdist/>`_ installed
you will now always perform test runs using a number
of subprocesses close to your CPU. Running in an empty
directory with the above conftest.py::
@ -136,9 +142,13 @@ line option to control skipping of ``pytest.mark.slow`` marked tests:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("--runslow", action="store_true",
default=False, help="run slow tests")
parser.addoption(
"--runslow", action="store_true", default=False, help="run slow tests"
)
def pytest_collection_modifyitems(config, items):
if config.getoption("--runslow"):
@ -206,10 +216,13 @@ Example:
# content of test_checkconfig.py
import pytest
def checkconfig(x):
__tracebackhide__ = True
if not hasattr(x, "config"):
pytest.fail("not configured: %s" %(x,))
pytest.fail("not configured: %s" % (x,))
def test_something():
checkconfig(42)
@ -228,7 +241,7 @@ Let's run our little function::
> checkconfig(42)
E Failed: not configured: 42
test_checkconfig.py:8: Failed
test_checkconfig.py:11: Failed
1 failed in 0.12 seconds
If you only want to hide certain exceptions, you can set ``__tracebackhide__``
@ -240,13 +253,16 @@ this to make sure unexpected exception types aren't hidden:
import operator
import pytest
class ConfigException(Exception):
pass
def checkconfig(x):
__tracebackhide__ = operator.methodcaller('errisinstance', ConfigException)
__tracebackhide__ = operator.methodcaller("errisinstance", ConfigException)
if not hasattr(x, "config"):
raise ConfigException("not configured: %s" %(x,))
raise ConfigException("not configured: %s" % (x,))
def test_something():
checkconfig(42)
@ -269,22 +285,28 @@ running from a test you can do something like this:
# content of conftest.py
def pytest_configure(config):
import sys
sys._called_from_test = True
def pytest_unconfigure(config):
import sys
del sys._called_from_test
and then check for the ``sys._called_from_test`` flag:
.. code-block:: python
if hasattr(sys, '_called_from_test'):
if hasattr(sys, "_called_from_test"):
# called from within a test run
...
else:
# called "normally"
...
accordingly in your application. It's also a good idea
to use your own application module rather than ``sys``
@ -301,6 +323,7 @@ It's easy to present extra information in a ``pytest`` run:
# content of conftest.py
def pytest_report_header(config):
return "project deps: mylib-1.1"
@ -325,8 +348,9 @@ display more information if applicable:
# content of conftest.py
def pytest_report_header(config):
if config.getoption('verbose') > 0:
if config.getoption("verbose") > 0:
return ["info1: did you know that ...", "did you?"]
which will add info only when run with "--v"::
@ -367,12 +391,15 @@ out which tests are the slowest. Let's make an artificial test suite:
# content of test_some_are_slow.py
import time
def test_funcfast():
time.sleep(0.1)
def test_funcslow1():
time.sleep(0.2)
def test_funcslow2():
time.sleep(0.3)
@ -389,7 +416,7 @@ Now we can profile which test functions execute the slowest::
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
0.13s call test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.12 seconds =========================
incremental testing - test steps
@ -409,17 +436,19 @@ an ``incremental`` marker which is to be used on classes:
import pytest
def pytest_runtest_makereport(item, call):
if "incremental" in item.keywords:
if call.excinfo is not None:
parent = item.parent
parent._previousfailed = item
def pytest_runtest_setup(item):
if "incremental" in item.keywords:
previousfailed = getattr(item.parent, "_previousfailed", None)
if previousfailed is not None:
pytest.xfail("previous test failed (%s)" %previousfailed.name)
pytest.xfail("previous test failed (%s)" % previousfailed.name)
These two hook implementations work together to abort incremental-marked
tests in a class. Here is a test module example:
@ -430,15 +459,19 @@ tests in a class. Here is a test module example:
import pytest
@pytest.mark.incremental
class TestUserHandling(object):
def test_login(self):
pass
def test_modification(self):
assert 0
def test_deletion(self):
pass
def test_normal():
pass
@ -461,7 +494,7 @@ If we run this::
> assert 0
E assert 0
test_step.py:9: AssertionError
test_step.py:11: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
@ -489,9 +522,11 @@ Here is an example for making a ``db`` fixture available in a directory:
# content of a/conftest.py
import pytest
class DB(object):
pass
@pytest.fixture(scope="session")
def db():
return DB()
@ -552,7 +587,7 @@ We can run this::
> assert 0
E assert 0
test_step.py:9: AssertionError
test_step.py:11: AssertionError
_________________________________ test_a1 __________________________________
db = <conftest.DB object at 0xdeadbeef>
@ -600,6 +635,7 @@ case we just write some information out to a ``failures`` file:
import pytest
import os.path
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
# execute all other hooks to obtain the report object
@ -626,6 +662,8 @@ if you then have failing tests:
# content of test_module.py
def test_fail1(tmpdir):
assert 0
def test_fail2():
assert 0
@ -655,7 +693,7 @@ and run them::
> assert 0
E assert 0
test_module.py:4: AssertionError
test_module.py:6: AssertionError
========================= 2 failed in 0.12 seconds =========================
you will have a "failures" file which contains the failing test ids::
@ -678,6 +716,7 @@ here is a little example implemented via a local plugin:
import pytest
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
# execute all other hooks to obtain the report object
@ -696,10 +735,10 @@ here is a little example implemented via a local plugin:
# request.node is an "item" because we use the default
# "function" scope
if request.node.rep_setup.failed:
print ("setting up a test failed!", request.node.nodeid)
print("setting up a test failed!", request.node.nodeid)
elif request.node.rep_setup.passed:
if request.node.rep_call.failed:
print ("executing test failed", request.node.nodeid)
print("executing test failed", request.node.nodeid)
if you then have failing tests:
@ -710,16 +749,20 @@ if you then have failing tests:
import pytest
@pytest.fixture
def other():
assert 0
def test_setup_fails(something, other):
pass
def test_call_fails(something):
assert 0
def test_fail2():
assert 0
@ -743,7 +786,7 @@ and run it::
> assert 0
E assert 0
test_module.py:6: AssertionError
test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
@ -753,14 +796,14 @@ and run it::
> assert 0
E assert 0
test_module.py:12: AssertionError
test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:15: AssertionError
test_module.py:19: AssertionError
==================== 2 failed, 1 error in 0.12 seconds =====================
You'll see that the fixture finalizers could use the precise reporting
@ -778,7 +821,7 @@ which test got stuck, for example if pytest was run in quiet mode (``-q``) or yo
output. This is particularly a problem if the problem happens only sporadically, the famous "flaky" kind of tests.
``pytest`` sets a ``PYTEST_CURRENT_TEST`` environment variable when running tests, which can be inspected
by process monitoring utilities or libraries like `psutil <https://pypi.python.org/pypi/psutil>`_ to discover which
by process monitoring utilities or libraries like `psutil <https://pypi.org/project/psutil/>`_ to discover which
test got stuck if necessary:
.. code-block:: python
@ -787,7 +830,7 @@ test got stuck if necessary:
for pid in psutil.pids():
environ = psutil.Process(pid).environ()
if 'PYTEST_CURRENT_TEST' in environ:
if "PYTEST_CURRENT_TEST" in environ:
print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')
During the test session pytest will set ``PYTEST_CURRENT_TEST`` to the current test
@ -841,8 +884,9 @@ like ``pytest-timeout`` they must be imported explicitly and passed on to pytest
import sys
import pytest_timeout # Third party plugin
if len(sys.argv) > 1 and sys.argv[1] == '--pytest':
if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
import pytest
sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
else:
# normal application execution: at this point argv can be parsed
@ -854,4 +898,3 @@ This allows you to execute tests using the frozen
application with standard ``pytest`` command-line options::
./app_main --pytest --verbose --tb=long --junitxml=results.xml test-suite/

View File

@ -1,29 +1,37 @@
import pytest
xfail = pytest.mark.xfail
@xfail
def test_hello():
assert 0
@xfail(run=False)
def test_hello2():
assert 0
@xfail("hasattr(os, 'sep')")
def test_hello3():
assert 0
@xfail(reason="bug 110")
def test_hello4():
assert 0
@xfail('pytest.__version__[0] != "17"')
def test_hello5():
assert 0
def test_hello6():
pytest.xfail("reason")
@xfail(raises=IndexError)
def test_hello7():
x = []

View File

@ -30,14 +30,14 @@ and does not handle Deferreds returned from a test in pytest style.
If you are using trial's unittest.TestCase chances are that you can
just run your tests even if you return Deferreds. In addition,
there also is a dedicated `pytest-twisted
<http://pypi.python.org/pypi/pytest-twisted>`_ plugin which allows you to
<https://pypi.org/project/pytest-twisted/>`_ plugin which allows you to
return deferreds from pytest-style tests, allowing the use of
:ref:`fixtures` and other features.
how does pytest work with Django?
++++++++++++++++++++++++++++++++++++++++++++++
In 2012, some work is going into the `pytest-django plugin <http://pypi.python.org/pypi/pytest-django>`_. It substitutes the usage of Django's
In 2012, some work is going into the `pytest-django plugin <https://pypi.org/project/pytest-django/>`_. It substitutes the usage of Django's
``manage.py test`` and allows the use of all pytest features_ most of which
are not available from Django directly.

View File

@ -154,7 +154,7 @@ This makes use of the automatic caching mechanisms of pytest.
Another good approach is by adding the data files in the ``tests`` folder.
There are also community plugins available to help manage this aspect of
testing, e.g. `pytest-datadir <https://github.com/gabrielcnr/pytest-datadir>`__
and `pytest-datafiles <https://pypi.python.org/pypi/pytest-datafiles>`__.
and `pytest-datafiles <https://pypi.org/project/pytest-datafiles/>`__.
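As a rough sketch of the ``tests``-folder approach (the file and fixture names below are made up for illustration), a fixture can resolve a data file relative to the test module that requested it::

    import json
    import pytest

    @pytest.fixture
    def sample_payload(request):
        # request.fspath is the path of the test module; load tests/data/sample.json next to it
        datafile = request.fspath.dirpath("data", "sample.json")
        return json.loads(datafile.read())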
.. _smtpshared:
@ -165,7 +165,7 @@ Scope: sharing a fixture instance across tests in a class, module or session
Fixtures requiring network access depend on connectivity and are
usually time-expensive to create. Extending the previous example, we
can add a ``scope='module'`` parameter to the
can add a ``scope="module"`` parameter to the
:py:func:`@pytest.fixture <_pytest.python.fixture>` invocation
to cause the decorated ``smtp`` fixture function to only be invoked once
per test *module* (the default is to invoke once per test *function*).
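A minimal sketch of that declaration, matching the ``smtp`` fixture used throughout this section::

    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp():
        # created once per test module and shared by all tests in it
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)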
@ -250,9 +250,10 @@ instance, you can simply declare it:
.. code-block:: python
@pytest.fixture(scope="session")
def smtp(...):
def smtp():
# the returned fixture value will be shared for
# all tests needing it
...
Finally, the ``class`` scope will invoke the fixture once per test *class*.
@ -274,18 +275,22 @@ Consider the code below:
def s1():
pass
@pytest.fixture(scope="module")
def m1():
pass
@pytest.fixture
def f1(tmpdir):
pass
@pytest.fixture
def f2():
pass
def test_foo(f1, m1, f2, s1):
...
@ -316,6 +321,7 @@ the code after the *yield* statement serves as the teardown code:
import smtplib
import pytest
@pytest.fixture(scope="module")
def smtp():
smtp = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
@ -350,6 +356,7 @@ Note that we can also seamlessly use the ``yield`` syntax with ``with`` statemen
import smtplib
import pytest
@pytest.fixture(scope="module")
def smtp():
with smtplib.SMTP("smtp.gmail.com", 587, timeout=5) as smtp:
@ -375,12 +382,15 @@ Here's the ``smtp`` fixture changed to use ``addfinalizer`` for cleanup:
import smtplib
import pytest
@pytest.fixture(scope="module")
def smtp(request):
smtp = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
def fin():
print ("teardown smtp")
print("teardown smtp")
smtp.close()
request.addfinalizer(fin)
return smtp # provide the fixture value
@ -465,6 +475,59 @@ Running it::
voila! The ``smtp`` fixture function picked up our mail server name
from the module namespace.
.. _`fixture-factory`:
Factories as fixtures
-------------------------------------------------------------
The "factory as fixture" pattern can help in situations where the result
of a fixture is needed multiple times in a single test. Instead of returning
data directly, the fixture instead returns a function which generates the data.
This function can then be called multiple times in the test.
Factories can have parameters as needed::
@pytest.fixture
def make_customer_record():
def _make_customer_record(name):
return {
"name": name,
"orders": []
}
return _make_customer_record
def test_customer_records(make_customer_record):
customer_1 = make_customer_record("Lisa")
customer_2 = make_customer_record("Mike")
customer_3 = make_customer_record("Meredith")
If the data created by the factory requires managing, the fixture can take care of that::
@pytest.fixture
def make_customer_record():
created_records = []
def _make_customer_record(name):
record = models.Customer(name=name, orders=[])
created_records.append(record)
return record
yield _make_customer_record
for record in created_records:
record.destroy()
def test_customer_records(make_customer_record):
customer_1 = make_customer_record("Lisa")
customer_2 = make_customer_record("Mike")
customer_3 = make_customer_record("Meredith")
.. _`fixture-parametrize`:
Parametrizing fixtures
@ -867,6 +930,8 @@ You can specify multiple fixtures like this:
.. code-block:: python
@pytest.mark.usefixtures("cleandir", "anotherfixture")
def test():
...
and you may specify fixture usage at the test module level, using
a generic feature of the mark mechanism:
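For instance, a module-level ``pytestmark`` assignment applies the marker, and therefore the fixtures, to every test in the module (a minimal sketch reusing the fixture names from above)::

    import pytest

    pytestmark = pytest.mark.usefixtures("cleandir", "anotherfixture")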

View File

@ -214,5 +214,3 @@ fixtures:
and simplify your fixture function code to use the :ref:`@pytest.fixture`
decorator instead. This will also allow you to take advantage of
the automatic per-resource grouping of tests.

View File

@ -11,4 +11,3 @@ and you can read on here:
- :ref:`fixtures`
- :ref:`parametrize`
- :ref:`funcargcompare`

View File

@ -1,41 +0,0 @@
import textwrap
import inspect
class Writer(object):
def __init__(self, clsname):
self.clsname = clsname
def __enter__(self):
self.file = open("%s.api" % self.clsname, "w")
return self
def __exit__(self, *args):
self.file.close()
print "wrote", self.file.name
def line(self, line):
self.file.write(line+"\n")
def docmethod(self, method):
doc = " ".join(method.__doc__.split())
indent = " "
w = textwrap.TextWrapper(initial_indent=indent,
subsequent_indent=indent)
spec = inspect.getargspec(method)
del spec.args[0]
self.line(".. py:method:: " + method.__name__ +
inspect.formatargspec(*spec))
self.line("")
self.line(w.fill(doc))
self.line("")
def pytest_funcarg__a(request):
with Writer("request") as writer:
writer.docmethod(request.getfixturevalue)
writer.docmethod(request.cached_setup)
writer.docmethod(request.addfinalizer)
writer.docmethod(request.applymarker)
def test_hello(a):
pass

View File

@ -5,10 +5,10 @@ Installation and Getting Started
**Platforms**: Unix/Posix and Windows
**PyPI package name**: `pytest <http://pypi.python.org/pypi/pytest>`_
**PyPI package name**: `pytest <https://pypi.org/project/pytest/>`_
**Dependencies**: `py <http://pypi.python.org/pypi/py>`_,
`colorama (Windows) <http://pypi.python.org/pypi/colorama>`_,
**Dependencies**: `py <https://pypi.org/project/py/>`_,
`colorama (Windows) <https://pypi.org/project/colorama/>`_,
**Documentation as PDF**: `download latest <https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_

View File

@ -145,7 +145,7 @@ Note that this layout also works in conjunction with the ``src`` layout mentione
.. note::
If ``pytest`` finds a "a/b/test_module.py" test file while
If ``pytest`` finds an "a/b/test_module.py" test file while
recursing into the filesystem it determines the import name
as follows:
@ -168,13 +168,13 @@ Note that this layout also works in conjunction with the ``src`` layout mentione
to avoid surprises such as a test module getting imported twice.
.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv
.. _`virtualenv`: https://pypi.org/project/virtualenv/
.. _`buildout`: http://www.buildout.org/
.. _pip: http://pypi.python.org/pypi/pip
.. _pip: https://pypi.org/project/pip/
.. _`use tox`:
Tox
tox
------
For development, we recommend to use virtualenv_ environments and pip_
@ -194,7 +194,7 @@ Once you are done with your work and want to make sure that your actual
package passes all tests you may want to look into `tox`_, the
virtualenv test automation tool and its `pytest support
<https://tox.readthedocs.io/en/latest/example/pytest.html>`_.
Tox helps you to setup virtualenv environments with pre-defined
tox helps you to setup virtualenv environments with pre-defined
dependencies and then executing a pre-configured test command with
options. It will run tests against the installed package and not
against your source code checkout, helping to detect packaging
@ -205,7 +205,7 @@ Integrating with setuptools / ``python setup.py test`` / ``pytest-runner``
--------------------------------------------------------------------------
You can integrate test runs into your setuptools based project
with the `pytest-runner <https://pypi.python.org/pypi/pytest-runner>`_ plugin.
with the `pytest-runner <https://pypi.org/project/pytest-runner/>`_ plugin.
Add this to ``setup.py`` file:
@ -214,10 +214,10 @@ Add this to ``setup.py`` file:
from setuptools import setup
setup(
#...,
setup_requires=['pytest-runner', ...],
tests_require=['pytest', ...],
#...,
# ...,
setup_requires=["pytest-runner", ...],
tests_require=["pytest", ...],
# ...,
)
@ -263,25 +263,27 @@ your own setuptools Test command for invoking pytest.
class PyTest(TestCommand):
user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")]
user_options = [("pytest-args=", "a", "Arguments to pass to pytest")]
def initialize_options(self):
TestCommand.initialize_options(self)
self.pytest_args = ''
self.pytest_args = ""
def run_tests(self):
import shlex
#import here, cause outside the eggs aren't loaded
# import here, cause outside the eggs aren't loaded
import pytest
errno = pytest.main(shlex.split(self.pytest_args))
sys.exit(errno)
setup(
#...,
tests_require=['pytest'],
cmdclass = {'test': PyTest},
)
# ...,
tests_require=["pytest"],
cmdclass={"test": PyTest},
)
Now if you run::

View File

@ -17,6 +17,7 @@ An example of a simple test:
def inc(x):
return x + 1
def test_answer():
assert inc(3) == 5
@ -39,7 +40,7 @@ To execute it::
E assert 4 == 5
E + where 4 = inc(3)
test_sample.py:5: AssertionError
test_sample.py:6: AssertionError
========================= 1 failed in 0.12 seconds =========================
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.

View File

@ -7,14 +7,14 @@
.. _`reStructured Text`: http://docutils.sourceforge.net
.. _`Python debugger`: http://docs.python.org/lib/module-pdb.html
.. _nose: https://nose.readthedocs.io/en/latest/
.. _pytest: http://pypi.python.org/pypi/pytest
.. _pytest: https://pypi.org/project/pytest/
.. _mercurial: http://mercurial.selenic.com/wiki/
.. _`setuptools`: http://pypi.python.org/pypi/setuptools
.. _`setuptools`: https://pypi.org/project/setuptools/
.. _`easy_install`:
.. _`distribute docs`:
.. _`distribute`: http://pypi.python.org/pypi/distribute
.. _`pip`: http://pypi.python.org/pypi/pip
.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv
.. _`distribute`: https://pypi.org/project/distribute/
.. _`pip`: https://pypi.org/project/pip/
.. _`virtualenv`: https://pypi.org/project/virtualenv/
.. _hudson: http://hudson-ci.org/
.. _jenkins: http://jenkins-ci.org/
.. _tox: http://testrun.org/tox

View File

@ -123,7 +123,7 @@ You can call ``caplog.clear()`` to reset the captured log records in a test::
assert ['Foo'] == [rec.message for rec in caplog.records]
The ``caplop.records`` attribute contains records from the current stage only, so
The ``caplog.records`` attribute contains records from the current stage only, so
inside the ``setup`` phase it contains only setup logs, same with the ``call`` and
``teardown`` phases.
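For instance (a minimal sketch, assuming a hypothetical test module using the
``caplog`` fixture), only records emitted during the ``call`` phase are visible
from within the test body itself::

    import logging

    logger = logging.getLogger(__name__)


    def test_warning_is_captured(caplog):
        logger.warning("something went wrong")
        # records emitted by fixtures during setup are not listed here;
        # caplog.records only contains records from the current (call) stage
        assert ["something went wrong"] == [rec.message for rec in caplog.records]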
@ -138,10 +138,14 @@ the records for the ``setup`` and ``call`` stages during teardown like so:
def window(caplog):
window = create_window()
yield window
for when in ('setup', 'call'):
messages = [x.message for x in caplog.get_records(when) if x.level == logging.WARNING]
for when in ("setup", "call"):
messages = [
x.message for x in caplog.get_records(when) if x.level == logging.WARNING
]
if messages:
pytest.fail('warning messages encountered during testing: {}'.format(messages))
pytest.fail(
"warning messages encountered during testing: {}".format(messages)
)

View File

@ -26,16 +26,36 @@ which also serve as documentation.
:ref:`fixtures <fixtures>`.
.. currentmodule:: _pytest.mark.structures
.. autoclass:: Mark
:members:
:noindex:
Raising errors on unknown marks: --strict
-----------------------------------------
When the ``--strict`` command-line flag is passed, any marks not registered in the ``pytest.ini`` file will trigger an error.
Marks can be registered like this:
.. code-block:: ini
[pytest]
markers =
slow
serial
This can be used to prevent users from mistyping mark names by accident. Test suites that want to enforce this
should add ``--strict`` to ``addopts``:
.. code-block:: ini
[pytest]
addopts = --strict
markers =
slow
serial
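For example (a minimal, hypothetical test module, assuming the ``pytest.ini``
shown above), a registered mark is applied as usual, while a misspelled mark
name would make the run error out instead of being silently accepted:

.. code-block:: python

    import pytest


    @pytest.mark.slow  # registered above, so it is accepted under --strict
    def test_long_running_job():
        assert sum(range(10)) == 45


    # an unregistered mark such as @pytest.mark.serail (note the typo) would
    # trigger an error when the module is collected and --strict is in effect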
.. `marker-iteration`
Marker iteration
=================
Marker revamp and iteration
---------------------------
.. versionadded:: 3.6
@ -44,15 +64,68 @@ pytest's marker implementation traditionally worked by simply updating the ``__d
This state of things made it technically next to impossible to use data from markers correctly without having a deep understanding of the internals, leading to subtle and hard-to-understand bugs in more advanced usages.
Depending on how a marker got declared/changed, one would get either a ``MarkerInfo`` which might contain markers from sibling classes,
``MarkDecorators`` when marks came from parameterization or from a ``node.add_marker`` call, discarding prior marks. Also ``MarkerInfo`` acts like a single mark, when it in fact repressents a merged view on multiple marks with the same name.
``MarkDecorators`` when marks came from parameterization or from a ``node.add_marker`` call, discarding prior marks. Also ``MarkerInfo`` acts like a single mark, when it in fact represents a merged view on multiple marks with the same name.
On top of that, markers were not accessible the same way for modules, classes, and functions/methods;
in fact, markers were only accessible in functions, even if they were declared on classes/modules.
A new API to access markers has been introduced in pytest 3.6 in order to solve the problems with the initial design: the :func:`_pytest.nodes.Node.iter_markers` method iterates over markers in a consistent manner, and the reworked internals fix a great deal of the problems described above.
Here is a non-exhaustive list of issues fixed by the new implementation:
.. _update marker code:
Updating code
~~~~~~~~~~~~~
The old ``Node.get_marker(name)`` function is considered deprecated because it returns an internal ``MarkerInfo`` object
which contains the merged name, ``*args`` and ``**kwargs`` of all the markers which apply to that node.
In general there are two scenarios for how markers should be handled:
1. Marks overwrite each other. Order matters but you only want to think of your mark as a single item. E.g.
``log_level('info')`` at a module level can be overwritten by ``log_level('debug')`` for a specific test.
In this case use ``Node.get_closest_marker(name)``:
.. code-block:: python
# replace this:
marker = item.get_marker("log_level")
if marker:
level = marker.args[0]
# by this:
marker = item.get_closest_marker("log_level")
if marker:
level = marker.args[0]
2. Marks compose additively. E.g. ``skipif(condition)`` marks mean you just want to evaluate all of them,
order doesn't even matter. You probably want to think of your marks as a set here.
In this case iterate over each mark and handle its ``*args`` and ``**kwargs`` individually (see also the ``conftest.py`` sketch after the example below).
.. code-block:: python
# replace this
skipif = item.get_marker("skipif")
if skipif:
for condition in skipif.args:
# eval condition
...
# by this:
for skipif in item.iter_markers("skipif"):
condition = skipif.args[0]
# eval condition
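As a further, purely illustrative sketch (a hypothetical ``conftest.py``, not
part of the original example), the same iteration API composes nicely with
collection hooks:

.. code-block:: python

    import pytest


    def pytest_collection_modifyitems(items):
        for item in items:
            # iter_markers() yields every "slow" mark applied to the item,
            # whether it was declared on the module, class or function
            if any(item.iter_markers(name="slow")):
                item.add_marker(pytest.mark.skip(reason="slow tests are disabled"))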
If you are unsure or have any questions, please consider opening
`an issue <https://github.com/pytest-dev/pytest/issues>`_.
Related issues
~~~~~~~~~~~~~~
Here is a non-exhaustive list of issues fixed by the new implementation:
* Marks don't pick up nested classes (`#199 <https://github.com/pytest-dev/pytest/issues/199>`_).
@ -82,6 +155,5 @@ More details can be found in the `original PR <https://github.com/pytest-dev/pyt
.. note::
in a future major release of pytest we will introduce class-based markers,
at which point markers will no longer be limited to instances of :py:class:`Mark`
in a future major release of pytest we will introduce class-based markers,
at which point markers will no longer be limited to instances of :py:class:`Mark`

View File

@ -71,6 +71,8 @@ so that any attempts within tests to create http requests will fail.
.. code-block:: python
import functools
def test_partial(monkeypatch):
with monkeypatch.context() as m:
m.setattr(functools, "partial", 3)

View File

@ -58,7 +58,7 @@ Unsupported idioms / known issues
You may find yourself wanting to do this if you ran ``python setup.py install``
to set up your project, as opposed to ``python setup.py develop`` or any of
the package manager equivalents. Installing with develop in a
virtual environment like Tox is recommended over this pattern.
virtual environment like tox is recommended over this pattern.
- nose-style doctests are not collected and executed correctly,
also doctest fixtures don't work.
@ -70,6 +70,3 @@ Unsupported idioms / known issues
There are no plans to fix this currently because ``yield``-tests
are deprecated in pytest 3.0, with ``pytest.mark.parametrize``
being the recommended alternative.

View File

@ -20,39 +20,39 @@ Here is a little annotated list for some popular plugins:
.. _`django`: https://www.djangoproject.com/
* `pytest-django <http://pypi.python.org/pypi/pytest-django>`_: write tests
* `pytest-django <https://pypi.org/project/pytest-django/>`_: write tests
for `django`_ apps, using pytest integration.
* `pytest-twisted <http://pypi.python.org/pypi/pytest-twisted>`_: write tests
* `pytest-twisted <https://pypi.org/project/pytest-twisted/>`_: write tests
for `twisted <http://twistedmatrix.com>`_ apps, starting a reactor and
processing deferreds from test functions.
* `pytest-cov <http://pypi.python.org/pypi/pytest-cov>`_:
* `pytest-cov <https://pypi.org/project/pytest-cov/>`_:
coverage reporting, compatible with distributed testing
* `pytest-xdist <http://pypi.python.org/pypi/pytest-xdist>`_:
* `pytest-xdist <https://pypi.org/project/pytest-xdist/>`_:
to distribute tests to CPUs and remote hosts, to run in boxed
mode which allows tests to survive segmentation faults, to run in
looponfailing mode, automatically re-running failing tests
on file changes.
* `pytest-instafail <http://pypi.python.org/pypi/pytest-instafail>`_:
* `pytest-instafail <https://pypi.org/project/pytest-instafail/>`_:
to report failures while the test run is happening.
* `pytest-bdd <http://pypi.python.org/pypi/pytest-bdd>`_ and
`pytest-konira <http://pypi.python.org/pypi/pytest-konira>`_
* `pytest-bdd <https://pypi.org/project/pytest-bdd/>`_ and
`pytest-konira <https://pypi.org/project/pytest-konira/>`_
to write tests using behaviour-driven testing.
* `pytest-timeout <http://pypi.python.org/pypi/pytest-timeout>`_:
* `pytest-timeout <https://pypi.org/project/pytest-timeout/>`_:
to timeout tests based on function marks or global definitions.
* `pytest-pep8 <http://pypi.python.org/pypi/pytest-pep8>`_:
* `pytest-pep8 <https://pypi.org/project/pytest-pep8/>`_:
a ``--pep8`` option to enable PEP8 compliance checking.
* `pytest-flakes <https://pypi.python.org/pypi/pytest-flakes>`_:
* `pytest-flakes <https://pypi.org/project/pytest-flakes/>`_:
check source code with pyflakes.
* `oejskit <http://pypi.python.org/pypi/oejskit>`_:
* `oejskit <https://pypi.org/project/oejskit/>`_:
a plugin to run javascript unittests in live browsers.
To see a complete list of all plugins with their latest testing
@ -61,7 +61,7 @@ status against different pytest and Python versions, please visit
You may also discover more plugins through a `pytest- pypi.python.org search`_.
.. _`pytest- pypi.python.org search`: http://pypi.python.org/pypi?%3Aaction=search&term=pytest-&submit=search
.. _`pytest- pypi.python.org search`: https://pypi.org/search/?q=pytest-
.. _`available installable plugins`:

View File

@ -32,40 +32,40 @@ Here are some examples of projects using ``pytest`` (please send notes via :ref:
* `PyPM <http://code.activestate.com/pypm/>`_ ActiveState's package manager
* `Fom <http://packages.python.org/Fom/>`_ a fluid object mapper for FluidDB
* `applib <https://github.com/ActiveState/applib>`_ cross-platform utilities
* `six <http://pypi.python.org/pypi/six/>`_ Python 2 and 3 compatibility utilities
* `six <https://pypi.org/project/six/>`_ Python 2 and 3 compatibility utilities
* `pediapress <http://code.pediapress.com/wiki/wiki>`_ MediaWiki articles
* `mwlib <http://pypi.python.org/pypi/mwlib>`_ mediawiki parser and utility library
* `mwlib <https://pypi.org/project/mwlib/>`_ mediawiki parser and utility library
* `The Translate Toolkit <http://translate.sourceforge.net/wiki/toolkit/index>`_ for localization and conversion
* `execnet <http://codespeak.net/execnet>`_ rapid multi-Python deployment
* `pylib <https://py.readthedocs.io>`_ cross-platform path, IO, dynamic code library
* `Pacha <http://pacha.cafepais.com/>`_ configuration management in five minutes
* `bbfreeze <http://pypi.python.org/pypi/bbfreeze>`_ create standalone executables from Python scripts
* `bbfreeze <https://pypi.org/project/bbfreeze/>`_ create standalone executables from Python scripts
* `pdb++ <http://bitbucket.org/antocuni/pdb>`_ a fancier version of PDB
* `py-s3fuse <http://code.google.com/p/py-s3fuse/>`_ Amazon S3 FUSE based filesystem
* `waskr <http://code.google.com/p/waskr/>`_ WSGI Stats Middleware
* `guachi <http://code.google.com/p/guachi/>`_ global persistent configs for Python modules
* `Circuits <http://pypi.python.org/pypi/circuits>`_ lightweight Event Driven Framework
* `Circuits <https://pypi.org/project/circuits/>`_ lightweight Event Driven Framework
* `pygtk-helpers <http://bitbucket.org/aafshar/pygtkhelpers-main/>`_ easy interaction with PyGTK
* `QuantumCore <http://quantumcore.org/>`_ statusmessage and repoze openid plugin
* `pydataportability <http://pydataportability.net/>`_ libraries for managing the open web
* `XIST <http://www.livinglogic.de/Python/xist/>`_ extensible HTML/XML generator
* `tiddlyweb <http://pypi.python.org/pypi/tiddlyweb>`_ optionally headless, extensible RESTful datastore
* `tiddlyweb <https://pypi.org/project/tiddlyweb/>`_ optionally headless, extensible RESTful datastore
* `fancycompleter <http://bitbucket.org/antocuni/fancycompleter/src>`_ for colorful tab-completion
* `Paludis <http://paludis.exherbo.org/>`_ tools for Gentoo Paludis package manager
* `Gerald <http://halfcooked.com/code/gerald/>`_ schema comparison tool
* `abjad <http://code.google.com/p/abjad/>`_ Python API for Formalized Score control
* `bu <http://packages.python.org/bu/>`_ a microscopic build system
* `katcp <https://bitbucket.org/hodgestar/katcp>`_ Telescope communication protocol over Twisted
* `kss plugin timer <http://pypi.python.org/pypi/kss.plugin.timer>`_
* `kss plugin timer <https://pypi.org/project/kss.plugin.timer/>`_
* `pyudev <https://pyudev.readthedocs.io/en/latest/tests/plugins.html>`_ a pure Python binding to the Linux library libudev
* `pytest-localserver <https://bitbucket.org/pytest-dev/pytest-localserver/>`_ a plugin for pytest that provides an httpserver and smtpserver
* `pytest-monkeyplus <http://pypi.python.org/pypi/pytest-monkeyplus/>`_ a plugin that extends monkeypatch
* `pytest-monkeyplus <https://pypi.org/project/pytest-monkeyplus/>`_ a plugin that extends monkeypatch
These projects help integrate ``pytest`` into other Python frameworks:
* `pytest-django <http://pypi.python.org/pypi/pytest-django/>`_ for Django
* `pytest-django <https://pypi.org/project/pytest-django/>`_ for Django
* `zope.pytest <http://packages.python.org/zope.pytest/>`_ for Zope and Grok
* `pytest_gae <http://pypi.python.org/pypi/pytest_gae/0.2.1>`_ for Google App Engine
* `pytest_gae <https://pypi.org/project/pytest_gae/0.2.1/>`_ for Google App Engine
* There is `some work <https://github.com/Kotti/Kotti/blob/master/kotti/testing.py>`_ underway for Kotti, a CMS built in Pyramid/Pylons

View File

@ -35,26 +35,29 @@ This is how a functional test could look like:
import pytest
@pytest.fixture
def default_context():
return {'extra_context': {}}
return {"extra_context": {}}
@pytest.fixture(params=[
{'author': 'alice'},
{'project_slug': 'helloworld'},
{'author': 'bob', 'project_slug': 'foobar'},
])
@pytest.fixture(
params=[
{"author": "alice"},
{"project_slug": "helloworld"},
{"author": "bob", "project_slug": "foobar"},
]
)
def extra_context(request):
return {'extra_context': request.param}
return {"extra_context": request.param}
@pytest.fixture(params=['default', 'extra'])
@pytest.fixture(params=["default", "extra"])
def context(request):
if request.param == 'default':
return request.getfuncargvalue('default_context')
if request.param == "default":
return request.getfuncargvalue("default_context")
else:
return request.getfuncargvalue('extra_context')
return request.getfuncargvalue("extra_context")
def test_generate_project(cookies, context):
@ -95,8 +98,7 @@ fixtures from existing ones.
.. code-block:: python
pytest.define_combined_fixture(
name='context',
fixtures=['default_context', 'extra_context'],
name="context", fixtures=["default_context", "extra_context"]
)
The new fixture ``context`` inherits the scope from the used fixtures and yield
@ -118,15 +120,17 @@ all parameters marked as a fixture.
.. note::
The `pytest-lazy-fixture <https://pypi.python.org/pypi/pytest-lazy-fixture>`_ plugin implements a very
The `pytest-lazy-fixture <https://pypi.org/project/pytest-lazy-fixture/>`_ plugin implements a very
similar solution to the proposal below; make sure to check it out.
.. code-block:: python
@pytest.fixture(params=[
pytest.fixture_request('default_context'),
pytest.fixture_request('extra_context'),
])
@pytest.fixture(
params=[
pytest.fixture_request("default_context"),
pytest.fixture_request("extra_context"),
]
)
def context(request):
"""Returns all values for ``default_context``, one-by-one before it
does the same for ``extra_context``.
@ -145,10 +149,10 @@ The same helper can be used in combination with ``pytest.mark.parametrize``.
@pytest.mark.parametrize(
'context, expected_response_code',
"context, expected_response_code",
[
(pytest.fixture_request('default_context'), 0),
(pytest.fixture_request('extra_context'), 0),
(pytest.fixture_request("default_context"), 0),
(pytest.fixture_request("extra_context"), 0),
],
)
def test_generate_project(cookies, context, exit_code):
