apply most other hooks and opt out of black reformatting

Ronny Pfannschmidt 2018-05-18 10:19:46 +02:00
parent b60376dc28
commit 86fc31db8d
79 changed files with 680 additions and 680 deletions


@ -1,10 +1,10 @@
exclude: doc/en/example/py2py3/test_py2.py
repos:
-   repo: https://github.com/ambv/black
    rev: 18.4a4
    hooks:
    -   id: black
        args: [--safe, --quiet]
        args: [--safe, --quiet, --check]
        language_version: python3.6
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v1.2.3


@ -139,7 +139,7 @@ Here's a rundown of how a repository transfer usually proceeds
* ``joedoe`` transfers repository ownership to ``pytest-dev`` administrator ``calvin``.
* ``calvin`` creates ``pytest-xyz-admin`` and ``pytest-xyz-developers`` teams, inviting ``joedoe`` to both as **maintainer**.
* ``calvin`` transfers repository to ``pytest-dev`` and configures team access:
- ``pytest-xyz-admin`` **admin** access;
- ``pytest-xyz-developers`` **write** access;
@ -203,15 +203,15 @@ Here is a simple overview, with pytest-specific bits:
$ git clone git@github.com:YOUR_GITHUB_USERNAME/pytest.git
$ cd pytest
# now, to fix a bug create your own branch off "master":
$ git checkout -b your-bugfix-branch-name master
# or to instead add a feature create your own branch off "features":
$ git checkout -b your-feature-branch-name features
Given we have "major.minor.micro" version numbers, bugfixes will usually
be released in micro releases whereas features will be released in
minor releases and incompatible changes in major releases.
If you need some help with Git, follow this quick start


@ -4,7 +4,7 @@ text that will be added to the next ``CHANGELOG``.
The ``CHANGELOG`` will be read by users, so this description should be aimed at pytest users
instead of describing internal changes which are only relevant to the developers.
Make sure to use full sentences with correct case and punctuation, for example::
Fix issue with non-ascii messages from the ``warnings`` module.


@ -6,4 +6,4 @@ pygments_style = flask_theme_support.FlaskyStyle
[options]
index_logo = ''
index_logo_height = 120px
touch_icon =


@ -5,7 +5,7 @@ Release announcements
.. toctree::
:maxdepth: 2
release-3.6.0
release-3.5.1
release-3.5.0


@ -1,4 +1,4 @@
py.test 2.0.3: bug fixes and speed ups
===========================================================================
Welcome to pytest-2.0.3, a maintenance and bug fix release of pytest,


@ -9,7 +9,7 @@ and integration testing. See extensive docs with examples here:
The release contains another fix to the perfected assertions introduced
with the 2.1 series as well as the new possibility to customize reporting
for assertion expressions on a per-directory level.
If you want to install or upgrade pytest, just type one of::


@ -27,7 +27,7 @@ Changes between 2.2.0 and 2.2.1
----------------------------------------
- fix issue99 (in pytest and py) internal errors with resultlog now
produce better output - fixed by normalizing pytest_internalerror
input arguments.
- fix issue97 / traceback issues (in pytest and py) improve traceback output
in conjunction with jinja2 and cython which hack tracebacks
@ -35,7 +35,7 @@ Changes between 2.2.0 and 2.2.1
the final test in a test node will now run its teardown directly
instead of waiting for the end of the session. Thanks Dave Hunt for
the good reporting and feedback. The pytest_runtest_protocol as well
as the pytest_runtest_teardown hooks now have "nextitem" available
which will be None indicating the end of the test run.
- fix collection crash due to unknown-source collected items, thanks
to Ralf Schmitt (fixed by depending on a more recent pylib)


@ -4,7 +4,7 @@ pytest-2.2.2: bug fixes
pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor
backward-compatible release of the versatile py.test testing tool. It
contains bug fixes and a few refinements particularly to reporting with
"--collectonly", see below for details.
For general information see here:
@ -27,7 +27,7 @@ Changes between 2.2.1 and 2.2.2
- fix issue101: wrong args to unittest.TestCase test function now
produce better output
- fix issue102: report more useful errors and hints for when a
test directory was renamed and some pyc/__pycache__ remain
- fix issue106: allow parametrize to be applied multiple times
e.g. from module, class and at function level.
@ -38,6 +38,6 @@ Changes between 2.2.1 and 2.2.2
- fix issue115: make --collectonly robust against early failure
(missing files/directories)
- "-qq --collectonly" now shows only files and the number of tests in them
- "-q --collectonly" now shows test ids
- allow adding of attributes to test reports such that it also works
with distributed testing (no upgrade of pytest-xdist needed)


@ -1,7 +1,7 @@
pytest-2.3: improved fixtures / better unittest integration
=============================================================================
pytest-2.3 comes with many major improvements for fixture/funcarg management
and parametrized testing in Python. It is now easier, more efficient and
more predictable to re-run the same tests with different fixture
instances. Also, you can directly declare the caching "scope" of
@ -9,7 +9,7 @@ fixtures so that dependent tests throughout your whole test suite can
re-use database or other expensive fixture objects with ease. Lastly,
it's possible for fixture functions (formerly known as funcarg
factories) to use other fixtures, allowing for a completely modular and
reusable fixture design.
For detailed info and tutorial-style examples, see:
@ -27,7 +27,7 @@ All changes are backward compatible and you should be able to continue
to run your test suites and 3rd party plugins that worked with
pytest-2.2.4.
If you are interested in the precise reasoning (including examples) of the
pytest-2.3 fixture evolution, please consult
http://pytest.org/latest/funcarg_compare.html
@ -43,7 +43,7 @@ and more details for those already in the knowing of pytest can be found
in the CHANGELOG below.
Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko
Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping
to get the new features right and well integrated. Ronny and Floris
also helped to fix a number of bugs and yet more people helped by
providing bug reports.
@ -94,7 +94,7 @@ Changes between 2.2.4 and 2.3.0
- pluginmanager.register(...) now raises ValueError if the
plugin has been already registered or the name is taken
- fix issue159: improve http://pytest.org/latest/faq.html
especially with respect to the "magic" history, also mention
pytest-django, trial and unittest integration.
@ -125,7 +125,7 @@ Changes between 2.2.4 and 2.3.0
you can use startdir.bestrelpath(yourpath) to show
nice relative path
- allow plugins to implement both pytest_report_header and
pytest_sessionstart (sessionstart is invoked first).
- don't show deselected reason line if there is none


@ -3,16 +3,16 @@ pytest-2.3.1: fix regression with factory functions
pytest-2.3.1 is a quick follow-up release:
- fix issue202 - regression with fixture functions/funcarg factories:
using "self" is now safe again and works as in 2.2.4. Thanks
to Eduard Schettino for the quick bug report.
- disable pexpect pytest self tests on Freebsd - thanks Koob for the
quick reporting
- fix/improve interactive docs with --markers
See
http://pytest.org/


@ -8,9 +8,9 @@ pytest-2.3.2 is another stabilization release:
- fix teardown-ordering for parametrized setups
- fix unittest and trial compat behaviour with respect to runTest() methods
- issue 206 and others: some improvements to packaging
- fix issue127 and others: improve some docs
See
http://pytest.org/
@ -26,7 +26,7 @@ holger krekel
Changes between 2.3.1 and 2.3.2
-----------------------------------
- fix issue208 and fix issue29 use new py version to avoid long pauses
when printing tracebacks in long modules
- fix issue205 - conftests in subdirs customizing


@ -6,7 +6,7 @@ which offers uebersimple assertions, scalable fixture mechanisms
and deep customization for testing with Python. Particularly,
this release provides:
- integration fixes and improvements related to flask, numpy, nose,
unittest, mock
- makes pytest work on py24 again (yes, people sometimes still need to use it)
@ -16,7 +16,7 @@ this release provides:
Thanks to Manuel Jacob, Thomas Waldmann, Ronny Pfannschmidt, Pavel Repin
and Andreas Taumoefolau for providing patches and all for the issues.
See
http://pytest.org/


@ -10,10 +10,10 @@ comes with the following fixes and features:
can write: -k "name1 or name2" etc. This is a slight usage incompatibility
if you used special syntax like "TestClass.test_method" which you now
need to write as -k "TestClass and test_method" to match a certain
method in a certain test class.
- allow to dynamically define markers via
item.keywords[...]=assignment integrating with "-m" option
- yielded test functions will now have autouse-fixtures active but
cannot accept fixtures as funcargs - it's anyway recommended to
rather use the post-2.0 parametrize features instead of yield, see:
http://pytest.org/latest/example/parametrize.html
@ -26,7 +26,7 @@ comes with the following fixes and features:
Thanks in particular to Thomas Waldmann for spotting and reporting issues.
See
http://pytest.org/


@ -13,7 +13,7 @@ few interesting new plugins saw the light last month:
- pytest-random: randomize test ordering
And several others like pytest-django saw maintenance releases.
For a more complete list, check out
https://pypi.org/search/?q=pytest
For general information see:
@ -81,7 +81,7 @@ Changes between 2.3.4 and 2.3.5
- fix bug where using capsys with pytest.set_trace() in a test
function would break when looking at capsys.readouterr()
- allow to specify prefixes starting with "_" when
customizing python_functions test discovery. (thanks Graham Horler)
- improve PYTEST_DEBUG tracing output by putting


@ -1,9 +1,9 @@
pytest-2.4.0: new fixture features/hooks and bug fixes
===========================================================================
The just released pytest-2.4.0 brings many improvements and numerous
bug fixes while remaining plugin- and test-suite compatible apart
from a few supposedly very minor incompatibilities. See below for
a full list of details. A few feature highlights:
- new yield-style fixtures `pytest.yield_fixture
@ -13,7 +13,7 @@ a full list of details. A few feature highlights:
- improved pdb support: ``import pdb ; pdb.set_trace()`` now works
without requiring prior disabling of stdout/stderr capturing.
Also the ``--pdb`` options works now on collection and internal errors
and we introduced a new experimental hook for IDEs/plugins to
intercept debugging: ``pytest_exception_interact(node, call, report)``.
- shorter monkeypatch variant to allow specifying an import path as
@ -23,7 +23,7 @@ a full list of details. A few feature highlights:
called if the corresponding setup method succeeded.
- integrate tab-completion on command line options if you
have `argcomplete <https://pypi.org/project/argcomplete/>`_
have `argcomplete <http://pypi.python.org/pypi/argcomplete>`_
configured.
- allow boolean expression directly with skipif/xfail
@ -36,8 +36,8 @@ a full list of details. A few feature highlights:
- reporting: color the last line red or green depending if
failures/errors occurred or everything passed.
The documentation has been updated to accommodate the changes,
see `http://pytest.org <http://pytest.org>`_
To install or upgrade pytest::
@ -45,8 +45,8 @@ To install or upgrade pytest::
easy_install -U pytest
**Many thanks to all who helped, including Floris Bruynooghe,
Brianna Laugher, Andreas Pelme, Anthon van der Neut, Anatoly Bubenkoff,
Vladimir Keleshev, Mathieu Agopian, Ronny Pfannschmidt, Christian
Theunert and many others.**
@ -101,12 +101,12 @@ new features:
- make "import pdb ; pdb.set_trace()" work natively wrt capturing (no
"-s" needed anymore), making ``pytest.set_trace()`` a mere shortcut.
- fix issue181: --pdb now also works on collect errors (and
on internal errors). This was implemented by a slight internal
refactoring and the introduction of a new
``pytest_exception_interact`` hook (see next item).
- fix issue341: introduce new experimental hook for IDEs/terminals to
intercept debugging: ``pytest_exception_interact(node, call, report)``.
- new monkeypatch.setattr() variant to provide a shorter
@ -124,7 +124,7 @@ new features:
phase of a node.
- simplify pytest.mark.parametrize() signature: allow to pass a
CSV-separated string to specify argnames. For example:
``pytest.mark.parametrize("input,expected", [(1,2), (2,3)])``
works as well as the previous:
``pytest.mark.parametrize(("input", "expected"), ...)``.
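For instance, a minimal test using the CSV string form (a sketch; the test name and values are illustrative)::

    import pytest

    @pytest.mark.parametrize("input,expected", [(1, 2), (2, 3)])
    def test_increment(input, expected):
        assert input + 1 == expected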
@ -149,10 +149,10 @@ new features:
Bug fixes:
- fix issue358 - capturing options are now parsed more properly
by using a new parser.parse_known_args method.
- pytest now uses argparse instead of optparse (thanks Anthon) which
means that "argparse" is added as a dependency if installing into python2.6
environments or below.
@ -193,7 +193,7 @@ Bug fixes:
- fix issue323 - sorting of many module-scoped arg parametrizations
- make sessionfinish hooks execute with the same cwd-context as at
session start (helps fix plugin behaviour which write output files
with relative path such as pytest-cov)
- fix issue316 - properly reference collection hooks in docs
@ -201,7 +201,7 @@ Bug fixes:
- fix issue 306 - cleanup of -k/-m options to only match markers/test
names/keywords respectively. Thanks Wouter van Ackooy.
- improved doctest counting for doctests in python modules --
files without any doctest items will not show up anymore
and doctest examples are counted as separate test items.
thanks Danilo Bellini.
@ -211,7 +211,7 @@ Bug fixes:
mode. Thanks Jason R. Coombs.
- fix junitxml generation when test output contains control characters,
addressing issue267, thanks Jaap Broekhuizen
- fix issue338: honor --tb style for setup/teardown errors as well. Thanks Maho.
@ -220,5 +220,5 @@ Bug fixes:
- better parametrize error messages, thanks Brianna Laugher
- pytest_terminal_summary(terminalreporter) hooks can now use
".section(title)" and ".line(msg)" methods to print extra
information at the end of a test run.
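As a sketch, such a hook could live in a ``conftest.py`` (the section title and message here are made up)::

    # content of conftest.py
    def pytest_terminal_summary(terminalreporter):
        # append a custom section after the normal summary
        terminalreporter.section("extra summary")
        terminalreporter.line("all extra reporting done")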


@ -8,7 +8,7 @@ compared to 2.3.5 before they hit more people:
"type" keyword should also be converted to the respective types.
thanks Floris Bruynooghe, @dnozay. (fixes issue360 and issue362)
- fix dotted filename completion when using argcomplete
thanks Anthon van der Neuth. (fixes issue361)
- fix regression when a 1-tuple ("arg",) is used for specifying


@ -26,9 +26,9 @@ pytest-2.4.2 is another bug-fixing release:
- remove attempt to "dup" stdout at startup as it's icky.
the normal capturing should catch enough possibilities
of tests messing up standard FDs.
- add pluginmanager.do_configure(config) as a link to
config.do_configure() for plugin-compatibility
as usual, docs at http://pytest.org and upgrades via::


@ -4,7 +4,7 @@ pytest-2.5.0: now down to ZERO reported bugs!
pytest-2.5.0 is a big fixing release, the result of two community bug
fixing days plus numerous additional works from many people and
reporters. The release should be fully compatible to 2.4.2, existing
plugins and test suites. We aim at maintaining this level of ZERO reported
bugs because it's no fun if your testing tool has bugs, is it? Under a
condition, though: when submitting a bug report please provide
clear information about the circumstances and a simple example which
@ -17,12 +17,12 @@ help.
For those who use older Python versions, please note that pytest is not
automatically tested on python2.5 due to virtualenv, setuptools and tox
not supporting it anymore. Manual verification shows that it mostly
works fine but it's not going to be part of the automated release
process and thus likely to break in the future.
As usual, current docs are at
http://pytest.org
and you can upgrade from pypi via::
@ -40,28 +40,28 @@ holger krekel
2.5.0
-----------------------------------
- dropped python2.5 from automated release testing of pytest itself
which means it's probably going to break soon (but still works
with this release we believe).
- simplified and fixed implementation for calling finalizers when
parametrized fixtures or function arguments are involved. finalization
is now performed lazily at setup time instead of in the "teardown phase".
While this might sound odd at first, it helps to ensure that we are
correctly handling setup/teardown even in complex code. User-level code
should not be affected unless it's implementing the pytest_runtest_teardown
hook and expecting certain fixture instances are torn down within (very
unlikely and would have been unreliable anyway).
- PR90: add --color=yes|no|auto option to force terminal coloring
mode ("auto" is default). Thanks Marc Abramowitz.
- fix issue319 - correctly show unicode in assertion errors. Many
thanks to Floris Bruynooghe for the complete PR. Also means
we depend on py>=1.4.19 now.
- fix issue396 - correctly sort and finalize class-scoped parametrized
tests independently from number of methods on the class.
- refix issue323 in a better way -- parametrization should now never
cause Runtime Recursion errors because the underlying algorithm
@ -70,18 +70,18 @@ holger krekel
to problems for more than >966 non-function scoped parameters).
- fix issue290 - there is preliminary support now for parametrizing
with repeated same values (sometimes useful to test if calling
a second time works as with the first time).
- close issue240 - document precisely how pytest module importing
works, discuss the two common test directory layouts, and how it
interacts with PEP420-namespace packages.
- fix issue246 fix finalizer order to be LIFO on independent fixtures
depending on a parametrized higher-than-function scoped fixture.
(was quite some effort so please bear with the complexity of this sentence :)
Thanks Ralph Schmitt for the precise failure example.
- fix issue244 by implementing special index for parameters to only use
indices for parametrized test ids
@ -99,9 +99,9 @@ holger krekel
filtering with simple strings that are not valid python expressions.
Examples: "-k 1.3" matches all tests parametrized with 1.3.
"-k None" filters all tests that have "None" in their name
and conversely "-k 'not None'".
Previously these examples would raise syntax errors.
- fix issue384 by removing the trial support code
since the unittest compat enhancements allow
trial to handle it on its own
@ -109,7 +109,7 @@ holger krekel
- don't hide an ImportError when importing a plugin produces one.
fixes issue375.
- fix issue275 - allow usefixtures and autouse fixtures
for running doctest text files.
- fix issue380 by making --resultlog only rely on longrepr instead
@ -135,20 +135,20 @@ holger krekel
(it already did neutralize pytest.mark.xfail markers)
- refine pytest / pkg_resources interactions: The AssertionRewritingHook
PEP302 compliant loader now registers itself with setuptools/pkg_resources
properly so that the pkg_resources.resource_stream method works properly.
Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.
- pytestconfig fixture is now session-scoped as it is the same object during the
whole test run. Fixes issue370.
- avoid one surprising case of marker malfunction/confusion::
@pytest.mark.some(lambda arg: ...)
def test_function():
would not work correctly because pytest assumes @pytest.mark.some
gets a function to be decorated already. We now at least detect if this
arg is a lambda and thus the example will work. Thanks Alex Gaynor
for bringing it up.
@ -159,11 +159,11 @@ holger krekel
although it's not needed by pytest itself atm. Also
fix caching. Fixes issue376.
- fix issue221 - handle importing of namespace-package with no
__init__.py properly.
- refactor internal FixtureRequest handling to avoid monkeypatching.
One of the positive user-facing effects is that the "request" object
can now be used in closures.
- fixed version comparison in pytest.importorskip(modname, minverstring)


@ -1,8 +1,8 @@
pytest-2.5.1: fixes and new home page styling
===========================================================================
pytest is a mature Python testing tool with more than a 1000 tests
against itself, passing on many different interpreters and platforms.
The 2.5.1 release maintains the "zero-reported-bugs" promise by fixing
the three bugs reported since the last release a few days ago. It also
@ -11,12 +11,12 @@ the flask theme from Armin Ronacher:
http://pytest.org
If you have anything more to improve styling and docs,
we'd be very happy to merge further pull requests.
On the coding side, the release also contains a little enhancement to
fixture decorators allowing to directly influence generation of test
ids, thanks to Floris Bruynooghe. Other thanks for helping with
this release go to Anatoly Bubenkoff and Ronny Pfannschmidt.
As usual, you can upgrade from pypi via::
@ -37,7 +37,7 @@ holger krekel
- Allow parameterized fixtures to specify the ID of the parameters by
adding an ids argument to pytest.fixture() and pytest.yield_fixture().
Thanks Floris Bruynooghe.
- fix issue404 by always using the binary xml escape in the junitxml
plugin. Thanks Ronny Pfannschmidt.


@ -1,8 +1,8 @@
pytest-2.5.2: fixes
===========================================================================
pytest is a mature Python testing tool with more than a 1000 tests
against itself, passing on many different interpreters and platforms.
The 2.5.2 release fixes a few bugs with two maybe-bugs remaining and
actively being worked on (and waiting for the bug reporter's input).
@ -19,18 +19,18 @@ As usual, you can upgrade from pypi via::
Thanks to the following people who contributed to this release:
Anatoly Bubenkov
Ronny Pfannschmidt
Floris Bruynooghe
Bruno Oliveira
Andreas Pelme
Jurko Gospodnetić
Piotr Banaszkiewicz
Simon Liedtke
lakka
Lukasz Balcerzak
Philippe Muller
Daniel Hahler
have fun,
holger krekel
@ -39,11 +39,11 @@ holger krekel
-----------------------------------
- fix issue409 -- better interoperate with cx_freeze by not
trying to import from collections.abc which causes problems
for py27/cx_freeze. Thanks Wolfgang L. for reporting and tracking it down.
- fixed docs and code to use "pytest" instead of "py.test" almost everywhere.
Thanks Jurko Gospodnetic for the complete PR.
- fix issue425: mention at end of "py.test -h" that --markers
and --fixtures work according to specified test path (or current dir)
@ -54,7 +54,7 @@ holger krekel
- copy, cleanup and integrate py.io capture
from pylib 1.4.20.dev2 (rev 13d9af95547e)
- address issue416: clarify docs as to conftest.py loading semantics
- fix issue429: comparing byte strings with non-ascii chars in assert


@ -53,6 +53,6 @@ The py.test Development Team
Thanks Gabriel Reis for the PR.
- add more talks to the documentation
- extend documentation on the --ignore cli option
- use pytest-runner for setuptools integration
- minor fixes for interaction with OS X El Capitan system integrity protection (thanks Florian)


@ -14,25 +14,25 @@ As usual, you can upgrade from pypi via::
Thanks to all who contributed to this release, among them:
Anatoly Bubenkov
Bruno Oliveira
Buck Golemon
David Vierra
Florian Bruhin
Galaczi Endre
Georgy Dyuldin
Lukas Bednar
Luke Murphy
Marcin Biernat
Matt Williams
Michael Aquilina
Raphael Pierzina
Ronny Pfannschmidt
Ryan Wooden
Tiemo Kieft
TomV
holger krekel
jab
Happy testing,
@ -76,18 +76,18 @@ The py.test Development Team
**Changes**
* **Important**: `py.code <https://pylib.readthedocs.io/en/latest/code.html>`_ has been
merged into the ``pytest`` repository as ``pytest._code``. This decision
was made because ``py.code`` had very few uses outside ``pytest`` and the
fact that it was in a different repository made it difficult to fix bugs on
its code in a timely manner. The team hopes with this to be able to better
refactor out and improve that code.
This change shouldn't affect users, but it is useful to make users aware
in case they encounter any strange behavior.
Keep in mind that the code for ``pytest._code`` is **private** and
**experimental**, so you definitely should not import it explicitly!
Please note that the original ``py.code`` is still available in
`pylib <https://pylib.readthedocs.io>`_.
* ``pytest_enter_pdb`` now optionally receives the pytest config object.
@ -129,8 +129,8 @@ The py.test Development Team
* Fix (`#1422`_): junit record_xml_property doesn't allow multiple records
with same name.
.. _`traceback style docs`: https://pytest.org/latest/usage.html#modifying-python-traceback-printing
.. _#1422: https://github.com/pytest-dev/pytest/issues/1422


@ -14,17 +14,17 @@ As usual, you can upgrade from pypi via::
Thanks to all who contributed to this release, among them:
Bruno Oliveira
Daniel Hahler
Dmitry Malinovsky
Florian Bruhin
Floris Bruynooghe
Matt Bachmann
Ronny Pfannschmidt
TomV
Vladimir Bolshakov
Zearin
palaviv
Happy testing,


@ -8,10 +8,10 @@ against itself, passing on many different interpreters and platforms.
This release contains a lot of bugs fixes and improvements, and much of
the work done on it was possible because of the 2016 Sprint[1], which
was funded by an indiegogo campaign which raised over US$12,000 with
nearly 100 backers.
There's a "What's new in pytest 3.0" [2] blog post highlighting the
major features in this release.
To see the complete changelog and documentation, please visit:


@ -7,7 +7,7 @@ This release fixes some regressions reported in version 3.0.0, being a
drop-in replacement. To upgrade:
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -7,7 +7,7 @@ This release fixes some regressions and bugs reported in version 3.0.1, being a
drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -3,11 +3,11 @@ pytest-3.0.3
pytest 3.0.3 has just been released to PyPI.
This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -3,11 +3,11 @@ pytest-3.0.4
pytest 3.0.4 has just been released to PyPI.
This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.0.5 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.0.6 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.


@ -6,7 +6,7 @@ pytest 3.0.7 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.1.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.1.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.1.3 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.2.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.2.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.2.3 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.2.4 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.2.5 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.3.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.3.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.4.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.4.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -6,7 +6,7 @@ pytest 3.5.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:


@ -29,17 +29,17 @@ you will see the return value of the function call::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_assert1.py F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()
test_assert1.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@ -172,12 +172,12 @@ if you run this module::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_assert2.py F [100%]
================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@ -188,7 +188,7 @@ if you run this module::
E Extra items in the right set:
E '5'
E Use -v to get the full diff
test_assert2.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@ -209,7 +209,7 @@ the ``pytest_assertrepr_compare`` hook.
.. autofunction:: _pytest.hookspec.pytest_assertrepr_compare
:noindex:
As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
file which provides an alternative explanation for ``Foo`` objects::
# content of conftest.py
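    # (a minimal sketch of the hook body, written to be consistent with the
    # test run output shown below; ``Foo`` and its ``val`` attribute are
    # assumed to come from the test module)
    from test_foocompare import Foo

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return [
                "Comparing Foo instances:",
                "   vals: %s != %s" % (left.val, right.val),
            ]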
@ -241,14 +241,14 @@ the conftest file::
F [100%]
================================= FAILURES =================================
_______________________________ test_compare _______________________________
def test_compare():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E assert Comparing Foo instances:
E vals: 1 != 2
test_foocompare.py:11: AssertionError
1 failed in 0.12 seconds


@ -17,13 +17,13 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
$ pytest -q --fixtures
cache
Return a cache object that can persist state between testing sessions.
cache.get(key, default)
cache.set(key, value)
Keys must be a ``/`` separated value, where the first part is usually the
name of your plugin or application to avoid clashes with other cache users.
Values can be any object handled by the json stdlib module.
capsys
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
@ -49,9 +49,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
pytestconfig
Session-scoped fixture that returns the :class:`_pytest.config.Config` object.
Example::
def test_foo(pytestconfig):
if pytestconfig.getoption("verbose"):
...
@ -61,9 +61,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
configured reporters, like JUnit XML.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
Example::
def test_function(record_property):
record_property("example_key", 1)
record_xml_property
@ -74,9 +74,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
automatically xml-encoded
caplog
Access and control log capturing.
Captured logs are available through the following methods::
* caplog.text -> string containing formatted log output
* caplog.records -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples
@ -84,7 +84,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch
The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
@ -93,14 +93,14 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch.delenv(name, value, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
recwarn
Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.
See http://docs.python.org/library/warnings.html for information
on warning categories.
tmpdir_factory
@ -111,9 +111,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html
no tests ran in 0.12 seconds
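The ``monkeypatch`` helpers listed in the output above can be exercised like this (a minimal sketch; the environment variable value is illustrative)::

    import os

    def test_fake_home(monkeypatch, tmpdir):
        # the modification is undone automatically after the test finishes
        monkeypatch.setenv("HOME", str(tmpdir))
        assert os.environ["HOME"] == str(tmpdir)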
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like::


@ -20,7 +20,7 @@ last ``pytest`` invocation:
For cleanup (usually not needed), a ``--cache-clear`` option allows to remove
all cross-session cache contents ahead of a test run.
Other plugins may access the `config.cache`_ object to set/get
**json encodable** values between ``pytest`` invocations.
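For example, a fixture might cache an expensive value across test runs (a sketch; the cache key and the stand-in value are illustrative)::

    # content of conftest.py
    import pytest

    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            val = 42  # stand-in for an expensive computation
            request.config.cache.set("example/value", val)
        return val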
.. note::
@ -49,26 +49,26 @@ If you run this for the first time you will see two failures::
.................F.......F........................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
i = 17
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:6: Failed
_______________________________ test_num[25] _______________________________
i = 25
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:6: Failed
2 failed, 48 passed in 0.12 seconds
@ -80,31 +80,31 @@ If you then run it with ``--lf``::
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items / 48 deselected
run-last-failure: rerun previous 2 failures
test_50.py FF [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
i = 17
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:6: Failed
_______________________________ test_num[25] _______________________________
i = 25
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:6: Failed
================= 2 failed, 48 deselected in 0.12 seconds ==================
@ -121,31 +121,31 @@ of ``FF`` and dots)::
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items
run-last-failure: rerun previous 2 failures first
test_50.py FF................................................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
i = 17
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:6: Failed
_______________________________ test_num[25] _______________________________
i = 25
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:6: Failed
=================== 2 failed, 48 passed in 0.12 seconds ====================
@ -198,13 +198,13 @@ of the sleep::
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
mydata = 42
def test_function(mydata):
> assert mydata == 23
E assert 42 == 23
test_caching.py:14: AssertionError
1 failed in 0.12 seconds
@ -215,13 +215,13 @@ the cache and this will be quick::
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
mydata = 42
def test_function(mydata):
> assert mydata == 23
E assert 42 == 23
test_caching.py:14: AssertionError
1 failed in 0.12 seconds
@ -246,7 +246,7 @@ You can always peek at the content of the cache using the
['test_caching.py::test_function']
example/value contains:
42
======================= no tests ran in 0.12 seconds =======================
Clearing Cache content


@ -68,16 +68,16 @@ of the failing function and hide the other one::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .F [100%]
================================= FAILURES =================================
________________________________ test_func2 ________________________________
def test_func2():
> assert False
E assert False
test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
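The module behind this output might look like (a sketch reconstructed from the captured lines above)::

    # content of test_module.py
    def setup_function(function):
        print("setting up %s" % function)

    def test_func1():
        assert True

    def test_func2():
        assert False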


@ -8,9 +8,9 @@ Contact channels
- `pytest issue tracker`_ to report bugs or suggest features (for version
2.0 and above).
- `pytest on stackoverflow.com <http://stackoverflow.com/search?q=pytest>`_
to post questions with the tag ``pytest``. New questions will usually
be seen by pytest users or developers and answered quickly.
- `Testing In Python`_: a mailing list for Python testing tools and discussion.


@ -38,7 +38,7 @@ Here's a summary what ``pytest`` uses ``rootdir`` for:
Important to emphasize that ``rootdir`` is **NOT** used to modify ``sys.path``/``PYTHONPATH`` or
influence how modules are imported. See :ref:`pythonpath` for more details.
The ``--rootdir=path`` command-line option can be used to force a specific directory.
The directory passed may contain environment variables when it is used in conjunction
with ``addopts`` in a ``pytest.ini`` file.


@ -40,7 +40,7 @@ avoid creating labels just for the sake of creating them.
Each label should include a description in the GitHub's interface stating its purpose.
Temporary labels
~~~~~~~~~~~~~~~~
To classify issues for a special event it is encouraged to create a temporary label. This helps those involved to find
the relevant issues to work on. Examples of that are sprints in Python events or global hacking events.


@ -65,9 +65,9 @@ then you can just invoke ``pytest`` without command line options::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 item
mymodule.py . [100%]
========================= 1 passed in 0.12 seconds =========================
It is possible to use fixtures using the ``getfixture`` helper::


@ -35,9 +35,9 @@ You can then restrict a test run to only run tests marked with ``webtest``::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected
test_server.py::test_send_http PASSED [100%]
================== 1 passed, 3 deselected in 0.12 seconds ==================
Or the inverse, running all tests except the webtest ones::
@ -48,11 +48,11 @@ Or the inverse, running all tests except the webtest ones::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected
test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
================== 3 passed, 1 deselected in 0.12 seconds ==================
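These runs assume a module along the following lines (a sketch consistent with the test ids in the output above)::

    # content of test_server.py
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest for your app

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass:
        def test_method(self):
            pass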
Selecting tests based on their node ID
@ -68,9 +68,9 @@ tests based on their module, class, method, or function name::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
test_server.py::TestClass::test_method PASSED [100%]
========================= 1 passed in 0.12 seconds =========================
You can also select on the class::
@ -81,9 +81,9 @@ You can also select on the class::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
test_server.py::TestClass::test_method PASSED [100%]
========================= 1 passed in 0.12 seconds =========================
Or select multiple nodes::
@ -94,10 +94,10 @@ Or select multiple nodes::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]
========================= 2 passed in 0.12 seconds =========================
.. _node-id:
@ -132,9 +132,9 @@ select tests based on their names::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected
test_server.py::test_send_http PASSED [100%]
================== 1 passed, 3 deselected in 0.12 seconds ==================
And you can also run all tests except the ones that match the keyword::
@ -145,11 +145,11 @@ And you can also run all tests except the ones that match the keyword::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected
test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
================== 3 passed, 1 deselected in 0.12 seconds ==================
Or to select "http" and "quick" tests::
@ -160,10 +160,10 @@ Or to select "http" and "quick" tests::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 2 deselected
test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]
================== 2 passed, 2 deselected in 0.12 seconds ==================
.. note::
@ -199,21 +199,21 @@ You can ask which markers exist for your test suite - the list includes our just
$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
For an example on how to add and work with markers from a plugin, see
:ref:`adding a custom marker from a plugin`.
@ -227,7 +227,7 @@ For an example on how to add and work with markers from a plugin, see
* Asking for existing markers via ``pytest --markers`` gives good output
* Typos in function markers are treated as an error if you use
the ``--strict`` option.
.. _`scoped-marking`:
@ -352,9 +352,9 @@ the test needs::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_someenv.py s [100%]
======================== 1 skipped in 0.12 seconds =========================
and here is one that specifies exactly the environment needed::
@ -364,30 +364,30 @@ and here is one that specifies exactly the environment needed::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_someenv.py . [100%]
========================= 1 passed in 0.12 seconds =========================
The ``--markers`` option always gives you a list of available markers::
$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
.. _`passing callables to custom markers`:
@ -523,11 +523,11 @@ then you will see two tests skipped and two executed tests as expected::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_plat.py s.s. [100%]
========================= short test summary info ==========================
SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
=================== 2 passed, 2 skipped in 0.12 seconds ====================
Note that if you specify a platform via the marker-command line option like this::
@ -537,9 +537,9 @@ Note that if you specify a platform via the marker-command line option like this
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 3 deselected
test_plat.py . [100%]
================== 1 passed, 3 deselected in 0.12 seconds ==================
then the unmarked tests will not be run. It is thus a way to restrict the run to specific tests.
@ -588,9 +588,9 @@ We can now use the ``-m option`` to select one set::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 2 deselected
test_module.py FF [100%]
================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
@ -609,9 +609,9 @@ or to select both "event" and "interface" tests::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 1 deselected
test_module.py FFF [100%]
================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple

View File

@ -30,9 +30,9 @@ now execute the test specification::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items
test_simple.yml F. [100%]
================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
@ -63,10 +63,10 @@ consulted when reporting in ``verbose`` mode::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items
test_simple.yml::hello FAILED [ 50%]
test_simple.yml::ok PASSED [100%]
================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
@ -87,5 +87,5 @@ interesting to just look at the collection tree::
<YamlFile 'test_simple.yml'>
<YamlItem 'hello'>
<YamlItem 'ok'>
======================= no tests ran in 0.12 seconds =======================

View File

@ -55,13 +55,13 @@ let's run the full monty::
....F [100%]
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
param1 = 4
def test_compute(param1):
> assert param1 < 4
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.12 seconds
@ -151,7 +151,7 @@ objects, they are still using the default pytest representation::
<Function 'test_timedistance_v2[20011211-20011212-expected1]'>
<Function 'test_timedistance_v3[forward]'>
<Function 'test_timedistance_v3[backward]'>
======================= no tests ran in 0.12 seconds =======================
In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
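A rough, hedged sketch of that ``pytest.param`` pattern (the dates are invented for illustration)::

    import pytest
    from datetime import datetime, timedelta

    @pytest.mark.parametrize(
        "a,b,expected",
        [
            pytest.param(
                datetime(2001, 12, 11), datetime(2001, 12, 12),
                timedelta(1), id="forward",
            ),
            pytest.param(
                datetime(2001, 12, 12), datetime(2001, 12, 11),
                timedelta(-1), id="backward",
            ),
        ],
    )
    def test_timedistance(a, b, expected):
        assert b - a == expected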
@ -198,9 +198,9 @@ this is a fully self-contained example which you can run with::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_scenarios.py .... [100%]
========================= 4 passed in 0.12 seconds =========================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
@ -218,7 +218,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function 'test_demo2[basic]'>
<Function 'test_demo1[advanced]'>
<Function 'test_demo2[advanced]'>
======================= no tests ran in 0.12 seconds =======================
Note that we told ``metafunc.parametrize()`` that your scenario values
@ -279,7 +279,7 @@ Let's first see how it looks like at collection time::
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
<Function 'test_db_initialized[d2]'>
======================= no tests ran in 0.12 seconds =======================
And then when we run the test::
@ -288,15 +288,15 @@ And then when we run the test::
.F [100%]
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
db = <conftest.DB2 object at 0xdeadbeef>
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
1 failed, 1 passed in 0.12 seconds
@ -339,7 +339,7 @@ The result of this test will be successful::
collected 1 item
<Module 'test_indirect_list.py'>
<Function 'test_indirect[a-b]'>
======================= no tests ran in 0.12 seconds =======================
.. regendoc:wipe
@ -384,13 +384,13 @@ argument sets to use for each test function. Let's run it::
F.. [100%]
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
E assert 1 == 2
test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.12 seconds
@ -462,11 +462,11 @@ If you run this with reporting for skips enabled::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'
=================== 1 passed, 1 skipped in 0.12 seconds ====================
You'll see that we don't have an ``opt2`` module and thus the second test run
@ -504,10 +504,10 @@ For example::
])
def test_eval(test_input, expected):
assert eval(test_input) == expected
In this example, we have 4 parametrized tests. Except for the first test,
we mark the remaining three parametrized tests with the custom marker ``basic``,
and for the fourth test we also use the built-in mark ``xfail`` to indicate this
test is expected to fail. For explicitness, we set test ids for some tests.
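A hedged reconstruction of the full parametrize list being described (the exact ids are guesses) might look like::

    import pytest

    @pytest.mark.parametrize(
        "test_input,expected",
        [
            ("3+5", 8),
            pytest.param("1+7", 8, marks=pytest.mark.basic),
            pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
            pytest.param(
                "6*9", 42,
                marks=[pytest.mark.basic, pytest.mark.xfail],
                id="basic_6*9",
            ),
        ],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected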
Then run ``pytest`` with verbose mode and with only the ``basic`` marker::

View File

@ -133,7 +133,7 @@ then the test collection looks like this::
<Instance '()'>
<Function 'simple_check'>
<Function 'complex_check'>
======================= no tests ran in 0.12 seconds =======================
.. note::
@ -180,7 +180,7 @@ You can always peek at the collection tree without running tests like this::
<Instance '()'>
<Function 'test_method'>
<Function 'test_anothermethod'>
======================= no tests ran in 0.12 seconds =======================
.. _customizing-test-collection:
@ -243,5 +243,5 @@ file will be left out::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items
======================= no tests ran in 0.12 seconds =======================

View File

@ -14,82 +14,82 @@ get on the terminal - we are working on that)::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 42 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%]
================================= FAILURES =================================
____________________________ test_generative[0] ____________________________
param1 = 3, param2 = 6
def test_generative(param1, param2):
> assert param1 * 2 < param2
E assert (3 * 2) < 6
failure_demo.py:16: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
def test_simple(self):
def f():
return 42
def g():
return 43
> assert f() == g()
E assert 42 == 43
E + where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()
failure_demo.py:29: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
def test_simple_multiline(self):
otherfunc_multi(
42,
> 6*9)
failure_demo.py:34:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
def otherfunc_multi(a,b):
> assert (a ==
b)
E assert 42 == 54
failure_demo.py:12: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()
failure_demo.py:39: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_text(self):
> assert 'spam' == 'eggs'
E AssertionError: assert 'spam' == 'eggs'
E - spam
E + eggs
failure_demo.py:43: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
E AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
@ -97,12 +97,12 @@ get on the terminal - we are working on that)::
E ? ^
E + foo 2 bar
E ? ^
failure_demo.py:46: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -110,12 +110,12 @@ get on the terminal - we are working on that)::
E - spam
E + eggs
E bar
failure_demo.py:49: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
@ -127,12 +127,12 @@ get on the terminal - we are working on that)::
E ? ^
E + 1111111111b222222222
E ? ^
failure_demo.py:54: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
@ -145,25 +145,25 @@ get on the terminal - we are working on that)::
E 1
E 1
E 1...
E
E ...Full output truncated (7 lines hidden), use '-vv' to show
failure_demo.py:59: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
E assert [0, 1, 2] == [0, 1, 3]
E At index 2 diff: 2 != 3
E Use -v to get the full diff
failure_demo.py:62: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
b = [0]*100 + [2] + [3]*100
@ -171,12 +171,12 @@ get on the terminal - we are working on that)::
E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E At index 100 diff: 1 != 2
E Use -v to get the full diff
failure_demo.py:67: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@ -187,14 +187,14 @@ get on the terminal - we are working on that)::
E {'c': 0}
E Right contains more items:
E {'d': 0}...
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:70: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
E AssertionError: assert {0, 10, 11, 12} == {0, 20, 21}
@ -205,34 +205,34 @@ get on the terminal - we are working on that)::
E Extra items in the right set:
E 20
E 21...
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:73: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
E assert [1, 2] == [1, 2, 3]
E Right contains more items, first extra item: 3
E Use -v to get the full diff
failure_demo.py:76: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
failure_demo.py:79: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
> assert 'foo' not in text
@ -244,14 +244,14 @@ get on the terminal - we are working on that)::
E includes foo
E ? +++
E and a...
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:83: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_single(self):
text = 'single foo line'
> assert 'foo' not in text
@ -259,36 +259,36 @@ get on the terminal - we are working on that)::
E 'foo' is contained here:
E single foo line
E ? +++
failure_demo.py:87: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
> assert 'foo' not in text
E AssertionError: assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E 'foo' is contained here:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
failure_demo.py:91: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
> assert 'f'*70 not in text
E AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:95: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
class Foo(object):
b = 1
@ -296,10 +296,10 @@ get on the terminal - we are working on that)::
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
failure_demo.py:102: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
class Foo(object):
b = 1
@ -307,10 +307,10 @@ get on the terminal - we are working on that)::
E AssertionError: assert 1 == 2
E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
failure_demo.py:108: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
class Foo(object):
def _get_b(self):
@ -318,19 +318,19 @@ get on the terminal - we are working on that)::
b = property(_get_b)
i = Foo()
> assert i.b == 2
failure_demo.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>
def _get_b(self):
> raise Exception('Failed to get attrib')
E Exception: Failed to get attrib
failure_demo.py:114: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
class Foo(object):
b = 1
@ -342,74 +342,74 @@ get on the terminal - we are working on that)::
E + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
failure_demo.py:125: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_raises(self):
s = 'qwe'
> raises(TypeError, "int(s)")
failure_demo.py:134:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:615>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
E Failed: DID NOT RAISE <class 'OSError'>
failure_demo.py:137: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_raise(self):
> raise ValueError("demo error")
E ValueError: demo error
failure_demo.py:140: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_tupleerror(self):
> a,b = [1]
E ValueError: not enough values to unpack (expected 2, got 1)
failure_demo.py:143: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
print ("l is %r" % l)
> a,b = l.pop()
E TypeError: 'int' object is not iterable
failure_demo.py:148: TypeError
--------------------------- Captured stdout call ---------------------------
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
def test_some_error(self):
> if namenotexi:
E NameError: name 'namenotexi' is not defined
failure_demo.py:151: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
import imp
import sys
@ -420,63 +420,63 @@ get on the terminal - we are working on that)::
py.builtin.exec_(code, module.__dict__)
sys.modules[name] = module
> module.foo()
failure_demo.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E AssertionError
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:165>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_complex_error(self):
def f():
return 44
def g():
return 43
> somefunc(f(), g())
failure_demo.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:9: in somefunc
otherfunc(x,y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
def otherfunc(a,b):
> assert a==b
E assert 44 == 43
failure_demo.py:6: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_z1_unpack_error(self):
l = []
> a,b = l
E ValueError: not enough values to unpack (expected 2, got 0)
failure_demo.py:182: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_z2_type_error(self):
l = 3
> a,b = l
E TypeError: 'int' object is not iterable
failure_demo.py:186: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_startswith(self):
s = "123"
g = "456"
@ -484,12 +484,12 @@ get on the terminal - we are working on that)::
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
failure_demo.py:191: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_startswith_nested(self):
def f():
return "123"
@ -501,55 +501,55 @@ get on the terminal - we are working on that)::
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
failure_demo.py:198: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert False
E + where False = isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:201: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
failure_demo.py:205: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_compare(self):
> assert globf(10) < 5
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:208: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
def test_try_finally(self):
x = 1
try:
> assert x == 0
E assert 1 == 0
failure_demo.py:213: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
def test_single_line(self):
class A(object):
a = 1
@ -558,12 +558,12 @@ get on the terminal - we are working on that)::
E AssertionError: A.a appears not to be b
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
failure_demo.py:224: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
def test_multiline(self):
class A(object):
a = 1
@ -575,12 +575,12 @@ get on the terminal - we are working on that)::
E one of those
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
failure_demo.py:230: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
def test_custom_repr(self):
class JSON(object):
a = 1
@ -595,12 +595,12 @@ get on the terminal - we are working on that)::
E }
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:240: AssertionError
============================= warnings summary =============================
None
Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
Please use Metafunc.parametrize instead.
-- Docs: http://doc.pytest.org/en/latest/warnings.html
================== 42 failed, 1 warnings in 0.12 seconds ===================

View File

@ -46,9 +46,9 @@ Let's run this without supplying our new option::
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
cmdopt = 'type1'
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
@ -56,7 +56,7 @@ Let's run this without supplying our new option::
print ("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
@ -68,9 +68,9 @@ And now with supplying a command line option::
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
cmdopt = 'type2'
def test_answer(cmdopt):
if cmdopt == "type1":
print ("first")
@ -78,7 +78,7 @@ And now with supplying a command line option::
print ("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
@ -118,7 +118,7 @@ directory with the above conftest.py::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======================= no tests ran in 0.12 seconds =======================
.. _`excontrolskip`:
@ -172,11 +172,11 @@ and when running it will see a skipped "slow" test::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] test_module.py:8: need --runslow option to run
=================== 1 passed, 1 skipped in 0.12 seconds ====================
Or run it including the ``slow`` marked test::
@ -186,9 +186,9 @@ Or run it including the ``slow`` marked test::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .. [100%]
========================= 2 passed in 0.12 seconds =========================
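The ``conftest.py`` driving this behaviour is elided by the diff; a minimal sketch under the option and marker names used above::

    # content of conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption(
            "--runslow", action="store_true", default=False,
            help="run tests marked as slow",
        )

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            return  # --runslow given: do not skip slow tests
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)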
Writing well integrated assertion helpers
@ -223,11 +223,11 @@ Let's run our little function::
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________
def test_something():
> checkconfig(42)
E Failed: not configured: 42
test_checkconfig.py:8: Failed
1 failed in 0.12 seconds
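A minimal sketch of such a helper, using ``__tracebackhide__`` so the helper frame itself is hidden from failure tracebacks::

    import pytest

    def checkconfig(x):
        __tracebackhide__ = True  # hide this frame in the report
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % x)

    def test_something():
        checkconfig(42)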
@ -312,7 +312,7 @@ which will add the string to the test header accordingly::
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======================= no tests ran in 0.12 seconds =======================
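The hook producing that extra header line is not shown in this hunk; a minimal sketch::

    # content of conftest.py
    def pytest_report_header(config):
        return "project deps: mylib-1.1"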
.. regendoc:wipe
@ -339,7 +339,7 @@ which will add info only when run with "--v"::
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items
======================= no tests ran in 0.12 seconds =======================
and nothing when run plainly::
@ -349,7 +349,7 @@ and nothing when run plainly::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======================= no tests ran in 0.12 seconds =======================
profiling test duration
@ -383,9 +383,9 @@ Now we can profile which test functions execute the slowest::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_some_are_slow.py ... [100%]
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
@ -449,18 +449,18 @@ If we run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
test_step.py .Fx. [100%]
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling object at 0xdeadbeef>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
@ -528,12 +528,12 @@ We can run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items
test_step.py .Fx. [ 57%]
a/test_db.py F [ 71%]
a/test_db2.py F [ 85%]
b/test_error.py E [100%]
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1
@ -541,37 +541,37 @@ We can run this::
E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
$REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling object at 0xdeadbeef>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
db = <conftest.DB object at 0xdeadbeef>
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef>
E assert 0
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
db = <conftest.DB object at 0xdeadbeef>
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef>
E assert 0
a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========
@ -636,25 +636,25 @@ and run them::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py FF [100%]
================================= FAILURES =================================
________________________________ test_fail1 ________________________________
tmpdir = local('PYTEST_TMPDIR/test_fail10')
def test_fail1(tmpdir):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:4: AssertionError
========================= 2 failed in 0.12 seconds =========================
@ -730,36 +730,36 @@ and run it::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.12 seconds =====================
@ -809,7 +809,7 @@ In that order.
can be changed between releases (even bug fixes) so it shouldn't be relied on for scripting
or automation.
Freezing pytest
---------------
If you freeze your application using a tool like
@ -821,18 +821,18 @@ while also allowing you to send test files to users so they can run them in thei
machines, which can be useful to obtain more information about a hard to reproduce bug.
Fortunately recent ``PyInstaller`` releases already have a custom hook
for pytest, but if you are using another tool to freeze executables
such as ``cx_freeze`` or ``py2exe``, you can use ``pytest.freeze_includes()``
to obtain the full list of internal pytest modules. How to configure the tools
to find the internal modules varies from tool to tool, however.
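As a hedged sketch for ``cx_Freeze`` (project and script names invented)::

    # content of setup.py
    from cx_Freeze import setup, Executable

    import pytest

    setup(
        name="app_main",
        executables=[Executable("app_main.py")],
        options={"build_exe": {"includes": pytest.freeze_includes()}},
    )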
Instead of freezing the pytest runner as a separate executable, you can make
your frozen program work as the pytest runner by some clever
argument handling during program startup. This allows you to
have a single executable, which is usually more convenient.
Please note that the mechanism for plugin discovery used by pytest
(setuptools entry points) doesn't work with frozen executables so pytest
can't find any third party plugins automatically. To include third party plugins
like ``pytest-timeout`` they must be imported explicitly and passed on to pytest.main.
.. code-block:: python

View File

@ -73,20 +73,20 @@ marked ``smtp`` fixture function. Running the test looks like this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_smtpsimple.py F [100%]
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
> assert 0 # for demo purposes
E assert 0
test_smtpsimple.py:11: AssertionError
========================= 1 failed in 0.12 seconds =========================
@ -152,9 +152,9 @@ to do this is by loading these data in a fixture for use by your tests.
This makes use of the automatic caching mechanisms of pytest.
Another good approach is by adding the data files in the ``tests`` folder.
There are also community plugins available to help manage this aspect of
testing, e.g. `pytest-datadir <https://github.com/gabrielcnr/pytest-datadir>`__
and `pytest-datafiles <https://pypi.org/project/pytest-datafiles/>`__.
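To make the fixture approach concrete, here is a hedged sketch of a session-scoped fixture that loads a data file once per run (the file name is invented)::

    import json
    import os

    import pytest

    @pytest.fixture(scope="session")
    def test_data():
        # loaded only once per session thanks to fixture caching
        here = os.path.dirname(__file__)
        with open(os.path.join(here, "data.json")) as f:
            return json.load(f)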
.. _smtpshared:
@ -172,7 +172,7 @@ per test *module* (the default is to invoke once per test *function*).
Multiple test functions in a test module will thus
each receive the same ``smtp`` fixture instance, thus saving time.
The next example puts the fixture function into a separate ``conftest.py`` file
so that tests from multiple test modules in the directory can
access the fixture function::
@ -209,32 +209,32 @@ inspect what is going on and can now run the tests::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py FF [100%]
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_noop(smtp):
response, msg = smtp.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
========================= 2 failed in 0.12 seconds =========================
@ -331,7 +331,7 @@ Let's execute it::
$ pytest -s -q --tb=no
FFteardown smtp
2 failed in 0.12 seconds
We see that the ``smtp`` instance is finalized after the two
@ -436,7 +436,7 @@ again, nothing much has changed::
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
2 failed in 0.12 seconds
Let's quickly create another test module that actually sets the
@ -504,51 +504,51 @@ So let's just do another run::
FFFF [100%]
================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
> assert 0 # for demo purposes
E assert 0
test_module.py:6: AssertionError
________________________ test_noop[smtp.gmail.com] _________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_noop(smtp):
response, msg = smtp.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
> assert b"smtp.gmail.com" in msg
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'
test_module.py:5: AssertionError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
________________________ test_noop[mail.python.org] ________________________
smtp = <smtplib.SMTP object at 0xdeadbeef>
def test_noop(smtp):
response, msg = smtp.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
@ -620,7 +620,7 @@ Running the above tests results in the following test IDs being used::
<Function 'test_noop[smtp.gmail.com]'>
<Function 'test_ehlo[mail.python.org]'>
<Function 'test_noop[mail.python.org]'>
======================= no tests ran in 0.12 seconds =======================
.. _`fixture-parametrize-marks`:
@ -650,11 +650,11 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 3 items
test_fixture_marks.py::test_data[0] PASSED [ 33%]
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]
=================== 2 passed, 1 skipped in 0.12 seconds ====================
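The fixture exercised above is elided by the diff; a minimal sketch of it::

    import pytest

    @pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
    def data_set(request):
        return request.param

    def test_data(data_set):
        pass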
.. _`interdependent fixtures`:
@ -693,10 +693,10 @@ Here we declare an ``app`` fixture which receives the previously defined
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED [ 50%]
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED [100%]
========================= 2 passed in 0.12 seconds =========================
Due to the parametrization of ``smtp`` the test will run twice with two
@ -762,26 +762,26 @@ Let's run the tests in verbose mode and with looking at the print-output::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items
test_module.py::test_0[1] SETUP otherarg 1
RUN test0 with otherarg 1
PASSED TEARDOWN otherarg 1
test_module.py::test_0[2] SETUP otherarg 2
RUN test0 with otherarg 2
PASSED TEARDOWN otherarg 2
test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED
test_module.py::test_2[mod1-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED TEARDOWN otherarg 1
test_module.py::test_2[mod1-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED TEARDOWN otherarg 2
test_module.py::test_1[mod2] TEARDOWN modarg mod1
SETUP modarg mod2
RUN test1 with modarg mod2
@ -789,13 +789,13 @@ Let's run the tests in verbose mode and with looking at the print-output::
test_module.py::test_2[mod2-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED TEARDOWN otherarg 1
test_module.py::test_2[mod2-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2
========================= 8 passed in 0.12 seconds =========================
You can see that the parametrized module-scoped ``modarg`` resource caused an

View File

@ -5,9 +5,9 @@
pytest-2.3: reasoning for fixture/funcarg evolution
=============================================================
**Target audience**: Reading this document requires basic knowledge of
python testing, xUnit setup methods and the (previous) basic pytest
funcarg mechanism, see http://pytest.org/2.2.4/funcargs.html
If you are new to pytest, then you can simply ignore this
section and read the other sections.
@ -18,12 +18,12 @@ Shortcomings of the previous ``pytest_funcarg__`` mechanism
The pre pytest-2.3 funcarg mechanism calls a factory each time a
funcarg for a test function is required. If a factory wants to
re-use a resource across different scopes, it often used
the ``request.cached_setup()`` helper to manage caching of
resources. Here is a basic example how we could implement
a per-session Database object::
# content of conftest.py
class Database(object):
def __init__(self):
print ("database instance created")
@ -31,7 +31,7 @@ a per-session Database object::
print ("database instance destroyed")
def pytest_funcarg__db(request):
return request.cached_setup(setup=Database,
        teardown=lambda db: db.destroy(),
        scope="session")
@ -40,13 +40,13 @@ There are several limitations and difficulties with this approach:
1. Scoping funcarg resource creation is not straightforward; instead one must
understand the intricate cached_setup() method mechanics.
2. Parametrizing the "db" resource is not straightforward:
you need to apply a "parametrize" decorator or implement a
:py:func:`~hookspec.pytest_generate_tests` hook
calling :py:func:`~python.Metafunc.parametrize` which
performs parametrization at the places where the resource
is used. Moreover, you need to modify the factory to use an
``extrakey`` parameter containing ``request.param`` to the
:py:func:`~python.Request.cached_setup` call.
3. Multiple parametrized session-scoped resources will be active
@ -56,7 +56,7 @@ There are several limitations and difficulties with this approach:
4. There is no way to make use of funcarg factories
in xUnit setup methods.
5. A non-parametrized fixture function cannot use a parametrized
funcarg resource if it isn't stated in the test function signature.
All of these limitations are addressed with pytest-2.3 and its
@ -72,18 +72,18 @@ the scope::
@pytest.fixture(scope="session")
def db(request):
# factory will only be invoked once per session -
db = Database()
request.addfinalizer(db.destroy) # destroy when session is finished
return db
This factory implementation does not need to call ``cached_setup()`` anymore
because it will only be invoked once per session. Moreover, the
``request.addfinalizer()`` registers a finalizer according to the specified
resource scope on which the factory function is operating.
Direct parametrization of funcarg resource factories
----------------------------------------------------------
Previously, funcarg factories could not directly cause parametrization.
@ -96,9 +96,9 @@ sets. pytest-2.3 introduces a decorator for use on the factory itself::
def db(request):
... # use request.param
Here the factory will be invoked twice (with the respective "mysql"
and "pg" values set as ``request.param`` attributes) and all of
the tests requiring "db" will run twice as well. The "mysql" and
"pg" values will also be used for reporting the test-invocation variants.
This new way of parametrizing funcarg factories should in many cases
@ -136,7 +136,7 @@ argument::
The name under which the funcarg resource can be requested is ``db``.
You can still use the "old" non-decorator way of specifying funcarg factories
aka::
def pytest_funcarg__db(request):
@ -156,10 +156,10 @@ several problems:
1. in distributed testing the master process would setup test resources
that are never needed because it only co-ordinates the test run
activities of the slave processes.
2. if you only perform a collection (with "--collect-only")
resource-setup will still be executed.
3. If a pytest_sessionstart is contained in some subdirectory's
conftest.py file, it will not be called. This stems from the
@ -194,17 +194,17 @@ overview of fixture management in your project.
Conclusion and compatibility notes
---------------------------------------------------------
**funcargs** were originally introduced to pytest-2.0. In pytest-2.3
the mechanism was extended and refined and is now described as
fixtures:
* previously funcarg factories were specified with a special
``pytest_funcarg__NAME`` prefix instead of using the
``@pytest.fixture`` decorator.
* Factories received a ``request`` object which managed caching through
``request.cached_setup()`` calls and allowed using other funcargs via
``request.getfuncargvalue()`` calls. These intricate APIs made it hard
to do proper parametrization and implement resource caching. The
new :py:func:`pytest.fixture` decorator allows you to declare the scope
and lets pytest figure things out for you.
@ -212,5 +212,5 @@ fixtures:
* if you used parametrization and funcarg factories which made use of
``request.cached_setup()`` it is recommended to invest a few minutes
and simplify your fixture function code to use the :ref:`@pytest.fixture`
decorator instead. This will also allow you to take advantage of
the automatic per-resource grouping of tests.

View File

@ -1,6 +1,8 @@
from __future__ import print_function
import textwrap
import inspect
class Writer(object):
def __init__(self, clsname):
self.clsname = clsname
@ -11,10 +13,10 @@ class Writer(object):
def __exit__(self, *args):
self.file.close()
print "wrote", self.file.name
print("wrote", self.file.name)
def line(self, line):
self.file.write(line + "\n")
def docmethod(self, method):
doc = " ".join(method.__doc__.split())
@ -30,6 +32,7 @@ class Writer(object):
self.line(w.fill(doc))
self.line("")
def pytest_funcarg__a(request):
with Writer("request") as writer:
writer.docmethod(request.getfixturevalue)
@ -37,5 +40,6 @@ def pytest_funcarg__a(request):
writer.docmethod(request.addfinalizer)
writer.docmethod(request.applymarker)
def test_hello(a):
pass

View File

@ -50,17 +50,17 @@ That's it. You can now execute the test function::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_sample.py F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@ -117,15 +117,15 @@ Once you develop multiple tests, you may want to group them into a class. pytest
.F [100%]
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass object at 0xdeadbeef>
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E AssertionError: assert False
E + where False = hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds
@ -147,14 +147,14 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')
def test_needsfiles(tmpdir):
print (tmpdir)
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0

View File

@ -28,17 +28,17 @@ To execute it::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_sample.py F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
def test_answer():
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = inc(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

View File

@ -35,7 +35,7 @@ patch this function before calling into a function which uses it::
assert x == '/abc/.ssh'
Here our test function monkeypatches ``os.path.expanduser`` and
then calls into a function that calls it. After the test function
finishes, the ``os.path.expanduser`` modification will be undone.
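A self-contained, hedged variant of that pattern (the helper name is invented)::

    import os.path

    def getssh():  # the code under test
        return os.path.join(os.path.expanduser("~admin"), ".ssh")

    def test_getssh(monkeypatch):
        def mockreturn(path):
            return "/abc"

        monkeypatch.setattr(os.path, "expanduser", mockreturn)
        assert getssh() == "/abc/.ssh"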
example: preventing "requests" from remote operations
@ -51,15 +51,15 @@ requests in all your tests, you can do::
monkeypatch.delattr("requests.sessions.Session.request")
This autouse fixture will be executed for each test function and it
will delete the method ``requests.sessions.Session.request``
so that any attempts within tests to create http requests will fail.
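The surrounding fixture definition is elided by the diff; a minimal sketch::

    # content of conftest.py
    import pytest

    @pytest.fixture(autouse=True)
    def no_requests(monkeypatch):
        monkeypatch.delattr("requests.sessions.Session.request")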
.. note::
Be advised that it is not recommended to patch builtin functions such as ``open``,
``compile``, etc., because it might break pytest's internals. If that's
unavoidable, passing ``--tb=native``, ``--assert=plain`` and ``--capture=no`` might
help although there's no guarantee.
.. note::
@ -77,7 +77,7 @@ so that any attempts within tests to create http requests will fail.
assert functools.partial == 3
See issue `#3290 <https://github.com/pytest-dev/pytest/issues/3290>`_ for details.
.. currentmodule:: _pytest.monkeypatch

View File

@ -11,13 +11,13 @@ Parametrizing fixtures and test functions
pytest enables test parametrization at several levels:
* :py:func:`pytest.fixture` allows one to :ref:`parametrize fixture
functions <fixture-parametrize>`.
* `@pytest.mark.parametrize`_ allows one to define multiple sets of
arguments and fixtures at the test function or class.
* `pytest_generate_tests`_ allows one to define custom parametrization
schemes or extensions.
.. _parametrizemark:
@ -57,14 +57,14 @@ them in turn::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F [100%]
================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
test_input = '6*9', expected = 42
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
@ -74,7 +74,7 @@ them in turn::
> assert eval(test_input) == expected
E AssertionError: assert 54 == 42
E + where 54 = eval('6*9')
test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.12 seconds ====================
@ -106,9 +106,9 @@ Let's run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..x [100%]
=================== 2 passed, 1 xfailed in 0.12 seconds ====================
The one parameter set which caused a failure previously now
@ -123,7 +123,7 @@ To get all combinations of multiple parametrized arguments you can stack
def test_foo(x, y):
pass
This will run the test with the arguments set to ``x=0/y=2``, ``x=1/y=2``,
``x=0/y=3``, and ``x=1/y=3`` exhausting parameters in the order of the decorators.
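Spelled out in full, the stacked decorators look like this (a short sketch)::

    import pytest

    @pytest.mark.parametrize("x", [0, 1])
    @pytest.mark.parametrize("y", [2, 3])
    def test_foo(x, y):
        pass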
.. _`pytest_generate_tests`:
@ -174,15 +174,15 @@ Let's also run with a stringinput that will lead to a failing test::
F [100%]
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________
stringinput = '!'
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E AssertionError: assert False
E + where False = <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
test_strings.py:3: AssertionError
1 failed in 0.12 seconds
@ -198,7 +198,7 @@ list::
SKIP [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
1 skipped in 0.12 seconds
Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, parameter names
Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, parameter names
must not be duplicated across those sets, otherwise an error will be raised.
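A sketch of the pieces behind this example (a ``pytest_generate_tests`` hook plus a ``--stringinput`` command line option, both reconstructed from the output above and so assumptions here) could look like::

    # content of conftest.py -- sketch
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.getoption("stringinput"))

and the test itself::

    # content of test_strings.py -- sketch
    def test_valid_string(stringinput):
        assert stringinput.isalpha()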
More examples

View File

@ -334,12 +334,12 @@ Running it with the report-on-xfail option gives this output::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items
xfail_demo.py xxxxxxx [100%]
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
@ -349,7 +349,7 @@ Running it with the report-on-xfail option gives this output::
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7
======================== 7 xfailed in 0.12 seconds =========================
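The ``xfail_demo.py`` module summarized above looks, in excerpt (a sketch of the first three tests, matching the reasons reported), like this::

    import pytest
    xfail = pytest.mark.xfail

    @xfail
    def test_hello():
        assert 0

    @xfail(run=False)
    def test_hello2():
        assert 0

    @xfail("hasattr(os, 'sep')")  # condition strings are evaluated with os/sys available
    def test_hello3():
        assert 0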
.. _`skip/xfail with parametrize`:

View File

@ -15,4 +15,3 @@ pageTracker._trackPageview();
} catch(err) {}</script>
</body>
</html>


View File

@ -6,22 +6,22 @@ Write and report coverage data with the 'coverage' package.
.. contents::
:local:
Note: Original code by Ross Lawley.
Note: Original code by Ross Lawley.
Install
--------------
Use pip to (un)install::
pip install pytest-coverage
pip uninstall pytest-coverage
pip install pytest-coverage
pip uninstall pytest-coverage
or alternatively use easy_install to install::
easy_install pytest-coverage
easy_install pytest-coverage
Usage
Usage
-------------
To get full test coverage reports for a particular package type::

View File

@ -12,7 +12,7 @@ Install
To install the plugin issue::
easy_install pytest-figleaf # or
pip install pytest-figleaf
pip install pytest-figleaf
and if you are using pip you can also uninstall::


View File

@ -32,14 +32,14 @@ Running this would result in a passed test except for the last
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_tmpdir.py F [100%]
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('PYTEST_TMPDIR/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
p.write("content")
@ -47,7 +47,7 @@ Running this would result in a passed test except for the last
assert len(tmpdir.listdir()) == 1
> assert 0
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.12 seconds =========================
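For reference, the complete test module driving this output is only a few lines (a sketch reconstructing the parts elided by the hunks)::

    # content of test_tmpdir.py -- sketch
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0  # deliberate failure so the tmpdir path is shown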

View File

@ -92,18 +92,18 @@ it from a unittest-style test::
def db_class(request):
class DummyDB(object):
pass
# set a class attribute on the invoking test context
# set a class attribute on the invoking test context
request.cls.db = DummyDB()
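Spelled out with the decorator that the hunk elides, the fixture is class-scoped (a sketch)::

    # content of conftest.py -- sketch
    import pytest

    @pytest.fixture(scope="class")
    def db_class(request):
        class DummyDB(object):
            pass
        # set a class attribute on the invoking test context
        request.cls.db = DummyDB()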
This defines a fixture function ``db_class`` which - if used - is
called once for each test class and which sets the class-level
This defines a fixture function ``db_class`` which - if used - is
called once for each test class and which sets the class-level
``db`` attribute to a ``DummyDB`` instance. The fixture function
achieves this by receiving a special ``request`` object which gives
access to :ref:`the requesting test context <request-context>` such
as the ``cls`` attribute, denoting the class from which the fixture
as the ``cls`` attribute, denoting the class from which the fixture
is used. This architecture de-couples fixture writing from actual test
code and allows re-use of the fixture by a minimal reference, the fixture
name. So let's write an actual ``unittest.TestCase`` class using our
name. So let's write an actual ``unittest.TestCase`` class using our
fixture definition::
# content of test_unittest_db.py
@ -120,7 +120,7 @@ fixture definition::
def test_method2(self):
assert 0, self.db # fail for demo purposes
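Assembled from the fragments above, the complete test module would be roughly::

    # content of test_unittest_db.py -- sketch
    import unittest
    import pytest

    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")
            assert 0, self.db  # fail for demo purposes

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes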
The ``@pytest.mark.usefixtures("db_class")`` class-decorator makes sure that
The ``@pytest.mark.usefixtures("db_class")`` class-decorator makes sure that
the pytest fixture function ``db_class`` is called once per class.
Due to the deliberately failing assert statements, we can take a look at
the ``self.db`` values in the traceback::
@ -130,30 +130,30 @@ the ``self.db`` values in the traceback::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_unittest_db.py FF [100%]
================================= FAILURES =================================
___________________________ MyTest.test_method1 ____________________________
self = <test_unittest_db.MyTest testMethod=test_method1>
def test_method1(self):
assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E assert 0
test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
self = <test_unittest_db.MyTest testMethod=test_method2>
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E assert 0
test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.12 seconds =========================
@ -166,10 +166,10 @@ Using autouse fixtures and accessing other fixtures
---------------------------------------------------
Although it's usually better to explicitly declare use of fixtures you need
for a given test, you may sometimes want to have fixtures that are
automatically used in a given context. After all, the traditional
for a given test, you may sometimes want to have fixtures that are
automatically used in a given context. After all, the traditional
style of unittest-setup mandates the use of this implicit fixture writing
and chances are, you are used to it or like it.
and chances are, you are used to it or like it.
You can flag fixture functions with ``@pytest.fixture(autouse=True)``
and define the fixture function in the context where you want it used.
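As a sketch, an autouse fixture defined at module level applies to every test in that module (all names here are illustrative)::

    # content of test_module.py -- sketch with hypothetical names
    import os
    import pytest

    @pytest.fixture(autouse=True)
    def set_env(monkeypatch):
        # runs automatically before each test in this module
        monkeypatch.setenv("APP_ENV", "testing")

    def test_env_is_set():
        assert os.environ["APP_ENV"] == "testing"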

View File

@ -111,9 +111,9 @@ For more information see :ref:`marks <mark>`.
::
pytest --pyargs pkg.testing
This will import ``pkg.testing`` and use its filesystem location to find and run tests from.
Modifying Python traceback printing
----------------------------------------------
@ -195,7 +195,7 @@ in your code and pytest automatically disables its output capture for that test:
Using the builtin breakpoint function
-------------------------------------
Python 3.7 introduces a builtin ``breakpoint()`` function.
Python 3.7 introduces a builtin ``breakpoint()`` function.
Pytest supports the use of ``breakpoint()`` with the following behaviours:
- When ``breakpoint()`` is called and ``PYTHONBREAKPOINT`` is set to the default value, pytest will use the custom internal PDB trace UI instead of the system default ``Pdb``.
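A hypothetical test illustrating this behaviour::

    def test_breakpoint():  # requires Python 3.7+
        x = 3
        breakpoint()  # with PYTHONBREAKPOINT at its default, enters pytest's PDB UI
        assert x == 3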
@ -496,7 +496,7 @@ hook was invoked::
$ python myinvoke.py
. [100%]*** test run reporting finishing
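The ``myinvoke.py`` script producing this output is, in sketch form, a plugin object passed to ``pytest.main``::

    # content of myinvoke.py -- sketch based on the surrounding section
    import pytest

    class MyPlugin(object):
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    pytest.main(["-qq"], plugins=[MyPlugin()])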
.. note::

View File

@ -25,14 +25,14 @@ Running pytest now produces this output::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
test_show_warnings.py . [100%]
============================= warnings summary =============================
test_show_warnings.py::test_one
$REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
warnings.warn(UserWarning("api v1, should use functions from v2"))
-- Docs: http://doc.pytest.org/en/latest/warnings.html
=================== 1 passed, 1 warnings in 0.12 seconds ===================
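The module being run here is, in sketch form (reconstructing the source elided by the hunk)::

    # content of test_show_warnings.py -- sketch
    import warnings

    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1

    def test_one():
        assert api_v1() == 1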
@ -45,17 +45,17 @@ them into errors::
F [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________
def test_one():
> assert api_v1() == 1
test_show_warnings.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_show_warnings.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def api_v1():
> warnings.warn(UserWarning("api v1, should use functions from v2"))
E UserWarning: api v1, should use functions from v2
test_show_warnings.py:4: UserWarning
1 failed in 0.12 seconds
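The failing run above corresponds to an invocation along these lines (the exact flag spelling is an assumption)::

    $ pytest -q test_show_warnings.py -W error::UserWarning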

View File

@ -6,7 +6,7 @@ classic xunit-style setup
========================================
This section describes a classic and popular way to implement
fixtures (setup and teardown test state) on a per-module/class/function basis.
fixtures (setup and teardown test state) on a per-module/class/function basis.
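In sketch form, the classic hooks look like this at module level (the method- and class-level variants follow the same naming pattern)::

    # sketch: xunit-style setup/teardown at module level
    def setup_module(module):
        # called once before the first test in this module
        module.resource = "acquired"

    def teardown_module(module):
        # called once after the last test in this module
        module.resource = None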
.. note::

View File

@ -6,7 +6,7 @@ pytest {version} has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them: