Merge from upstream

This commit is contained in:
Steffen Allner 2016-06-22 17:51:48 +02:00
commit dd97a2e7c8
77 changed files with 987 additions and 458 deletions


@@ -4,5 +4,5 @@ Here's a quick checklist in what to include:
 - [ ] Include a detailed description of the bug or suggestion
 - [ ] `pip list` of the virtual environment you are using
-- [ ] py.test and operating system versions
+- [ ] pytest and operating system versions
 - [ ] Minimal example if possible

AUTHORS

@@ -5,6 +5,7 @@ Contributors include::
 Abdeali JK
 Abhijeet Kasurde
+Alexei Kozlenok
 Anatoly Bubenkoff
 Andreas Zeidler
 Andy Freeland
@@ -12,14 +13,17 @@ Anthon van der Neut
 Armin Rigo
 Aron Curzon
 Aviv Palivoda
+Ben Webb
 Benjamin Peterson
 Bob Ippolito
 Brian Dorsey
 Brian Okken
 Brianna Laugher
 Bruno Oliveira
+Cal Leeming
 Carl Friedrich Bolz
 Charles Cloud
+Charnjit SiNGH (CCSJ)
 Chris Lamb
 Christian Theunert
 Christian Tismer
@@ -29,20 +33,24 @@ Daniel Hahler
 Daniel Nuri
 Danielle Jenkins
 Dave Hunt
+David Díaz-Barquero
 David Mohr
 David Vierra
 Edison Gustavo Muenz
 Eduardo Schettino
-Endre Galaczi
 Elizaveta Shashkova
+Endre Galaczi
+Eric Hunsberger
 Eric Hunsberger
 Eric Siegerman
 Erik M. Bray
+Feng Ma
 Florian Bruhin
 Floris Bruynooghe
 Gabriel Reis
 Georgy Dyuldin
 Graham Horler
+Greg Price
 Grig Gheorghiu
 Guido Wesdorp
 Harald Armin Massa
@@ -66,6 +74,7 @@ Mark Abramowitz
 Markus Unterwaditzer
 Martijn Faassen
 Martin Prusse
+Martin K. Scherer
 Matt Bachmann
 Matt Williams
 Michael Aquilina
@@ -73,6 +82,8 @@ Michael Birtwell
 Michael Droettboom
 Mike Lundy
 Nicolas Delaby
+Oleg Pidsadnyi
+Oliver Bestwalter
 Omar Kohl
 Pieter Mulder
 Piotr Banaszkiewicz
@@ -83,19 +94,18 @@ Raphael Pierzina
 Roman Bolshakov
 Ronny Pfannschmidt
 Ross Lawley
+Russel Winder
 Ryan Wooden
 Samuele Pedroni
 Steffen Allner
+Stephan Obermann
 Tareq Alayan
+Simon Gomizelj
+Stefano Taschini
+Stefan Farmbauer
+Thomas Grainger
 Tom Viner
 Trevor Bekolay
 Vasily Kuznetsov
 Wouter van Ackooy
-David Díaz-Barquero
-Eric Hunsberger
-Simon Gomizelj
-Russel Winder
-Ben Webb
-Alexei Kozlenok
-Cal Leeming
-Feng Ma
+Bernard Pratz


@@ -43,6 +43,10 @@
 Can also show where fixtures are defined if combined with ``-v``.
 Thanks `@hackebrot`_ for the PR.
+* Introduce pytest command as recommended entry point. Closes proposal
+  `#1629`_. Thanks `@obestwalter`_ and `@davehunt`_ for the complete PR
+  (`#1633`_)
 * New cli flags:
   ``--setup-plan`` performs normal collection and reports the potential setup
   and teardown, does not execute any fixtures and tests
@@ -81,8 +85,58 @@
 message to raise when no exception occurred.
 Thanks `@palaviv`_ for the complete PR (`#1616`_).
+* ``conftest.py`` files now benefit from assertion rewriting; previously it
+  was only available for test modules. Thanks `@flub`_, `@sober7`_ and
+  `@nicoddemus`_ for the PR (`#1619`_).
+* Text documents without any doctests no longer appear as "skipped".
+  Thanks `@graingert`_ for reporting and providing a full PR (`#1580`_).
+* Fix internal error issue when ``method`` argument is missing for
+  ``teardown_method()``. Fixes (`#1605`_).
+* Fix exception visualization in case the current working directory (CWD) gets
+  deleted during testing. Fixes (`#1235`_). Thanks `@bukzor`_ for reporting. PR by
+  `@marscher`_. Thanks `@nicoddemus`_ for his help.
+* Ensure that a module within a namespace package can be found when it
+  is specified on the command line together with the ``--pyargs``
+  option. Thanks to `@taschini`_ for the PR (`#1597`_).
+* Raise helpful failure message, when requesting parametrized fixture at runtime,
+  e.g. with ``request.getfuncargvalue``. BACKWARD INCOMPAT: Previously these params
+  were simply never defined. So a fixture decorated like ``@pytest.fixture(params=[0, 1, 2])``
+  only ran once. Now a failure is raised. Fixes (`#460`_). Thanks to
+  `@nikratio`_ for bug report, `@RedBeardCode`_ and `@tomviner`_ for PR.
+* Create correct diff for strings ending with newlines. Fixes (`#1553`_).
+  Thanks `@Vogtinator`_ for reporting. Thanks to `@RedBeardCode`_ and
+  `@tomviner`_ for PR.
+
+.. _#1580: https://github.com/pytest-dev/pytest/pull/1580
+.. _#1605: https://github.com/pytest-dev/pytest/issues/1605
+.. _#1597: https://github.com/pytest-dev/pytest/pull/1597
+.. _#460: https://github.com/pytest-dev/pytest/pull/460
+.. _#1553: https://github.com/pytest-dev/pytest/issues/1553
+.. _@graingert: https://github.com/graingert
+.. _@taschini: https://github.com/taschini
+.. _@nikratio: https://github.com/nikratio
+.. _@RedBeardCode: https://github.com/RedBeardCode
+.. _@Vogtinator: https://github.com/Vogtinator
+
+* Fix `#1421`_: Exit tests if a collection error occurs and add
+  ``--continue-on-collection-errors`` option to restore previous behaviour.
+  Thanks `@olegpidsadnyi`_ and `@omarkohl`_ for the complete PR (`#1628`_).
 .. _@milliams: https://github.com/milliams
 .. _@csaftoiu: https://github.com/csaftoiu
+.. _@flub: https://github.com/flub
 .. _@novas0x2a: https://github.com/novas0x2a
 .. _@kalekundert: https://github.com/kalekundert
 .. _@tareqalayan: https://github.com/tareqalayan
@@ -90,7 +144,12 @@
 .. _@palaviv: https://github.com/palaviv
 .. _@omarkohl: https://github.com/omarkohl
 .. _@mikofski: https://github.com/mikofski
+.. _@sober7: https://github.com/sober7
+.. _@olegpidsadnyi: https://github.com/olegpidsadnyi
+.. _@obestwalter: https://github.com/obestwalter
+.. _@davehunt: https://github.com/davehunt
+.. _#1421: https://github.com/pytest-dev/pytest/issues/1421
 .. _#1426: https://github.com/pytest-dev/pytest/issues/1426
 .. _#1428: https://github.com/pytest-dev/pytest/pull/1428
 .. _#1444: https://github.com/pytest-dev/pytest/pull/1444
@@ -102,9 +161,13 @@
 .. _#1474: https://github.com/pytest-dev/pytest/pull/1474
 .. _#1502: https://github.com/pytest-dev/pytest/pull/1502
 .. _#1520: https://github.com/pytest-dev/pytest/pull/1520
+.. _#1619: https://github.com/pytest-dev/pytest/issues/1619
 .. _#372: https://github.com/pytest-dev/pytest/issues/372
 .. _#1544: https://github.com/pytest-dev/pytest/issues/1544
 .. _#1616: https://github.com/pytest-dev/pytest/pull/1616
+.. _#1628: https://github.com/pytest-dev/pytest/pull/1628
+.. _#1629: https://github.com/pytest-dev/pytest/issues/1629
+.. _#1633: https://github.com/pytest-dev/pytest/pull/1633

 **Bug Fixes**
@@ -271,7 +334,7 @@
 Thanks `@biern`_ for the PR.
 * Fix `traceback style docs`_ to describe all of the available options
-  (auto/long/short/line/native/no), with `auto` being the default since v2.6.
+  (auto/long/short/line/native/no), with ``auto`` being the default since v2.6.
 Thanks `@hackebrot`_ for the PR.
 * Fix (`#1422`_): junit record_xml_property doesn't allow multiple records


@@ -48,7 +48,7 @@ to fix the bug yet.
 Fix bugs
 --------
-Look through the GitHub issues for bugs. Here is sample filter you can use:
+Look through the GitHub issues for bugs. Here is a filter you can use:
 https://github.com/pytest-dev/pytest/labels/bug
 :ref:`Talk <contact>` to developers to find out how you can fix specific bugs.
@@ -60,8 +60,7 @@ Don't forget to check the issue trackers of your favourite plugins, too!
 Implement features
 ------------------
-Look through the GitHub issues for enhancements. Here is sample filter you
-can use:
+Look through the GitHub issues for enhancements. Here is a filter you can use:
 https://github.com/pytest-dev/pytest/labels/enhancement
 :ref:`Talk <contact>` to developers to find out how you can implement specific
@@ -70,16 +69,15 @@ features.
 Write documentation
 -------------------
-pytest could always use more documentation. What exactly is needed?
+Pytest could always use more documentation. What exactly is needed?
 * More complementary documentation. Have you perhaps found something unclear?
 * Documentation translations. We currently have only English.
 * Docstrings. There can never be too many of them.
 * Blog posts, articles and such -- they're all very appreciated.
-You can also edit documentation files directly in the Github web interface
-without needing to make a fork and local copy. This can be convenient for
-small fixes.
+You can also edit documentation files directly in the GitHub web interface,
+without using a local copy. This can be convenient for small fixes.
 .. _submitplugin:
@@ -95,13 +93,14 @@ in repositories living under the ``pytest-dev`` organisations:
 - `pytest-dev on Bitbucket <https://bitbucket.org/pytest-dev>`_
 All pytest-dev Contributors team members have write access to all contained
-repositories. pytest core and plugins are generally developed
+repositories. Pytest core and plugins are generally developed
 using `pull requests`_ to respective repositories.
 The objectives of the ``pytest-dev`` organisation are:
 * Having a central location for popular pytest plugins
-* Sharing some of the maintenance responsibility (in case a maintainer no longer whishes to maintain a plugin)
+* Sharing some of the maintenance responsibility (in case a maintainer no
+  longer wishes to maintain a plugin)
 You can submit your plugin by subscribing to the `pytest-dev mail list
 <https://mail.python.org/mailman/listinfo/pytest-dev>`_ and writing a
@@ -127,27 +126,18 @@ transferred to the ``pytest-dev`` organisation.
 Here's a rundown of how a repository transfer usually proceeds
 (using a repository named ``joedoe/pytest-xyz`` as example):
-* One of the ``pytest-dev`` administrators creates:
-
-  - ``pytest-xyz-admin`` team, with full administration rights to
-    ``pytest-dev/pytest-xyz``.
-  - ``pytest-xyz-developers`` team, with write access to
-    ``pytest-dev/pytest-xyz``.
-
-* ``joedoe`` is invited to the ``pytest-xyz-admin`` team;
-* After accepting the invitation, ``joedoe`` transfers the repository from its
-  original location to ``pytest-dev/pytest-xyz`` (A nice feature is that GitHub handles URL redirection from
-  the old to the new location automatically).
-* ``joedoe`` is free to add any other collaborators to the
-  ``pytest-xyz-admin`` or ``pytest-xyz-developers`` team as desired.
+* ``joedoe`` transfers repository ownership to ``pytest-dev`` administrator ``calvin``.
+* ``calvin`` creates ``pytest-xyz-admin`` and ``pytest-xyz-developers`` teams, inviting
+  ``joedoe`` to both as **maintainer**.
+* ``calvin`` transfers repository to ``pytest-dev`` and configures team access:
+
+  - ``pytest-xyz-admin`` **admin** access;
+  - ``pytest-xyz-developers`` **write** access;
 The ``pytest-dev/Contributors`` team has write access to all projects, and
 every project administrator is in it. We recommend that each plugin has at least three
 people who have the right to release to PyPI.
-Repository owners can be assured that no ``pytest-dev`` administrator will ever make
+Repository owners can rest assured that no ``pytest-dev`` administrator will ever make
 releases of your repository or take ownership in any way, except in rare cases
 where someone becomes unresponsive after months of contact attempts.
 As stated, the objective is to share maintenance and avoid "plugin-abandon".
@@ -159,15 +149,11 @@ As stated, the objective is to share maintenance and avoid "plugin-abandon".
 Preparing Pull Requests on GitHub
 ---------------------------------
-There's an excellent tutorial on how Pull Requests work in the
-`GitHub Help Center <https://help.github.com/articles/using-pull-requests/>`_
 .. note::
   What is a "pull request"? It informs project's core developers about the
   changes you want to review and merge. Pull requests are stored on
   `GitHub servers <https://github.com/pytest-dev/pytest/pulls>`_.
-  Once you send pull request, we can discuss it's potential modifications and
+  Once you send a pull request, we can discuss its potential modifications and
   even add more commits to it later on.
 There's an excellent tutorial on how Pull Requests work in the
@@ -216,19 +202,19 @@ but here is a simple overview:
 This command will run tests via the "tox" tool against Python 2.7 and 3.5
 and also perform "lint" coding-style checks. ``runtox.py`` is
 a thin wrapper around ``tox`` which installs from a development package
-index where newer (not yet released to pypi) versions of dependencies
+index where newer (not yet released to PyPI) versions of dependencies
 (especially ``py``) might be present.
 #. You can now edit your local working copy.
 You can now make the changes you want and run the tests again as necessary.
-To run tests on py27 and pass options to pytest (e.g. enter pdb on failure)
-to pytest you can do::
+To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on
+failure) to pytest you can do::
     $ python3 runtox.py -e py27 -- --pdb
-or to only run tests in a particular test module on py35::
+Or to only run tests in a particular test module on Python 3.5::
     $ python3 runtox.py -e py35 -- testing/test_config.py
@@ -237,9 +223,9 @@ but here is a simple overview:
     $ git commit -a -m "<commit message>"
     $ git push -u
-Make sure you add a CHANGELOG message, and add yourself to AUTHORS. If you
-are unsure about either of these steps, submit your pull request and we'll
-help you fix it up.
+Make sure you add a message to ``CHANGELOG.rst`` and add yourself to
+``AUTHORS``. If you are unsure about either of these steps, submit your
+pull request and we'll help you fix it up.
 #. Finally, submit a pull request through the GitHub website using this data::
@@ -248,6 +234,6 @@ but here is a simple overview:
     base-fork: pytest-dev/pytest
     base: master  # if it's a bugfix
-    base: feature  # if it's a feature
+    base: features  # if it's a feature


@@ -33,7 +33,7 @@ An example of a simple test:
 To execute it::
-    $ py.test
+    $ pytest
     ======= test session starts ========
     platform linux -- Python 3.4.3, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
     collected 1 items
@@ -51,7 +51,7 @@ To execute it::
     test_sample.py:5: AssertionError
     ======= 1 failed in 0.12 seconds ========
-Due to ``py.test``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://pytest.org/latest/getting-started.html#our-first-test-run>`_ for more examples.
+Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://pytest.org/latest/getting-started.html#our-first-test-run>`_ for more examples.
 Features


@@ -1,3 +1,2 @@
 #
 __version__ = '2.10.0.dev1'


@@ -3,7 +3,6 @@ from inspect import CO_VARARGS, CO_VARKEYWORDS
 import re
 import py
 builtin_repr = repr
 reprlib = py.builtin._tryimport('repr', 'reprlib')
@@ -36,12 +35,16 @@ class Code(object):
     def path(self):
         """ return a path object pointing to source code (note that it
        might not point to an actually existing file). """
-        p = py.path.local(self.raw.co_filename)
-        # maybe don't try this checking
-        if not p.check():
+        try:
+            p = py.path.local(self.raw.co_filename)
+            # maybe don't try this checking
+            if not p.check():
+                raise OSError("py.path check failed.")
+        except OSError:
             # XXX maybe try harder like the weird logic
             # in the standard lib [linecache.updatecache] does?
             p = self.raw.co_filename
         return p
     @property


@@ -4,6 +4,8 @@ support for presenting detailed information in failing assertions.
 import py
 import os
 import sys
+
+from _pytest.config import hookimpl
 from _pytest.monkeypatch import monkeypatch
 from _pytest.assertion import util
@@ -42,9 +44,13 @@
         self.trace = config.trace.root.get("assertion")
-def pytest_configure(config):
-    mode = config.getvalue("assertmode")
-    if config.getvalue("noassert") or config.getvalue("nomagic"):
+@hookimpl(tryfirst=True)
+def pytest_load_initial_conftests(early_config, parser, args):
+    ns, ns_unknown_args = parser.parse_known_and_unknown_args(args)
+    mode = ns.assertmode
+    no_assert = ns.noassert
+    no_magic = ns.nomagic
+    if no_assert or no_magic:
         mode = "plain"
     if mode == "rewrite":
         try:
@@ -57,25 +63,29 @@
     if (sys.platform.startswith('java') or
             sys.version_info[:3] == (2, 6, 0)):
         mode = "reinterp"
+
+    early_config._assertstate = AssertionState(early_config, mode)
+    warn_about_missing_assertion(mode, early_config.pluginmanager)
     if mode != "plain":
         _load_modules(mode)
         m = monkeypatch()
-        config._cleanup.append(m.undo)
+        early_config._cleanup.append(m.undo)
         m.setattr(py.builtin.builtins, 'AssertionError',
                   reinterpret.AssertionError)  # noqa
     hook = None
     if mode == "rewrite":
-        hook = rewrite.AssertionRewritingHook()  # noqa
+        hook = rewrite.AssertionRewritingHook(early_config)  # noqa
         sys.meta_path.insert(0, hook)
-    warn_about_missing_assertion(mode)
-    config._assertstate = AssertionState(config, mode)
-    config._assertstate.hook = hook
-    config._assertstate.trace("configured with mode set to %r" % (mode,))
+
+    early_config._assertstate.hook = hook
+    early_config._assertstate.trace("configured with mode set to %r" % (mode,))
     def undo():
-        hook = config._assertstate.hook
+        hook = early_config._assertstate.hook
         if hook is not None and hook in sys.meta_path:
             sys.meta_path.remove(hook)
-    config.add_cleanup(undo)
+    early_config.add_cleanup(undo)
 def pytest_collection(session):
@@ -154,7 +164,7 @@ def _load_modules(mode):
     from _pytest.assertion import rewrite  # noqa
-def warn_about_missing_assertion(mode):
+def warn_about_missing_assertion(mode, pluginmanager):
     try:
         assert False
     except AssertionError:
@@ -166,10 +176,18 @@ def warn_about_missing_assertion(mode):
     else:
         specifically = "failing tests may report as passing"
-    sys.stderr.write("WARNING: " + specifically +
-                     " because assert statements are not executed "
-                     "by the underlying Python interpreter "
-                     "(are you using python -O?)\n")
+    # temporarily disable capture so we can print our warning
+    capman = pluginmanager.getplugin('capturemanager')
+    try:
+        out, err = capman.suspendcapture()
+        sys.stderr.write("WARNING: " + specifically +
+                         " because assert statements are not executed "
+                         "by the underlying Python interpreter "
+                         "(are you using python -O?)\n")
+    finally:
+        capman.resumecapture()
+        sys.stdout.write(out)
+        sys.stderr.write(err)
 # Expose this plugin's implementation for the pytest_assertrepr_compare hook
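The hook move above relies on pytest's internal ``parser.parse_known_and_unknown_args``, which splits argv into recognized options and everything else before plugins are fully loaded. A minimal sketch of the same idea with stdlib ``argparse.parse_known_args`` (option names merely mirror the pytest flags; this is not pytest's actual parser, and ``-x tests/`` here is just a hypothetical leftover):

```python
import argparse

# Sketch of the early option-parsing step: like
# parse_known_and_unknown_args, parse_known_args returns
# (namespace, leftover_args) instead of erroring on unknowns.
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("--assert", dest="assertmode", default="rewrite")
parser.add_argument("--no-assert", dest="noassert", action="store_true")
parser.add_argument("--no-magic", dest="nomagic", action="store_true")

ns, unknown = parser.parse_known_args(["--assert", "plain", "-x", "tests/"])

mode = ns.assertmode
if ns.noassert or ns.nomagic:
    mode = "plain"

print(mode)     # plain
print(unknown)  # ['-x', 'tests/']
```

Parsing this early is what lets the assertion mode be decided before any ``conftest.py`` is imported, so conftest files can themselves be rewritten.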


@@ -44,20 +44,18 @@ else:
 class AssertionRewritingHook(object):
     """PEP302 Import hook which rewrites asserts."""
-    def __init__(self):
+    def __init__(self, config):
+        self.config = config
+        self.fnpats = config.getini("python_files")
         self.session = None
         self.modules = {}
         self._register_with_pkg_resources()
     def set_session(self, session):
-        self.fnpats = session.config.getini("python_files")
         self.session = session
     def find_module(self, name, path=None):
-        if self.session is None:
-            return None
-        sess = self.session
-        state = sess.config._assertstate
+        state = self.config._assertstate
         state.trace("find_module called for: %s" % name)
         names = name.rsplit(".", 1)
         lastname = names[-1]
@@ -86,24 +84,11 @@
             return None
         else:
             fn = os.path.join(pth, name.rpartition(".")[2] + ".py")
         fn_pypath = py.path.local(fn)
-        # Is this a test file?
-        if not sess.isinitpath(fn):
-            # We have to be very careful here because imports in this code can
-            # trigger a cycle.
-            self.session = None
-            try:
-                for pat in self.fnpats:
-                    if fn_pypath.fnmatch(pat):
-                        state.trace("matched test file %r" % (fn,))
-                        break
-                else:
-                    return None
-            finally:
-                self.session = sess
-        else:
-            state.trace("matched test file (was specified on cmdline): %r" %
-                        (fn,))
+        if not self._should_rewrite(fn_pypath, state):
+            return None
         # The requested module looks like a test file, so rewrite it. This is
         # the most magical part of the process: load the source, rewrite the
         # asserts, and load the rewritten source. We also cache the rewritten
@@ -151,6 +136,32 @@
         self.modules[name] = co, pyc
         return self
+    def _should_rewrite(self, fn_pypath, state):
+        # always rewrite conftest files
+        fn = str(fn_pypath)
+        if fn_pypath.basename == 'conftest.py':
+            state.trace("rewriting conftest file: %r" % (fn,))
+            return True
+        elif self.session is not None:
+            if self.session.isinitpath(fn):
+                state.trace("matched test file (was specified on cmdline): %r" %
+                            (fn,))
+                return True
+            else:
+                # modules not passed explicitly on the command line are only
+                # rewritten if they match the naming convention for test files
+                session = self.session  # avoid a cycle here
+                self.session = None
+                try:
+                    for pat in self.fnpats:
+                        if fn_pypath.fnmatch(pat):
+                            state.trace("matched test file %r" % (fn,))
+                            return True
+                finally:
+                    self.session = session
+                    del session
+        return False
     def load_module(self, name):
         # If there is an existing module object named 'fullname' in
         # sys.modules, the loader must use that existing module. (Otherwise,
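The decision the new ``_should_rewrite`` method makes can be sketched with plain stdlib ``fnmatch``. This is a simplification, assuming the default ``python_files`` patterns (``test_*.py`` and ``*_test.py``) and matching on the basename only, whereas the real code uses ``py.path``'s ``fnmatch`` and session state:

```python
import fnmatch
import os

# Default naming-convention patterns (the ``python_files`` ini default).
FNPATS = ["test_*.py", "*_test.py"]

def should_rewrite(path, initpaths=()):
    """Mimic the decision above: conftest.py files and paths given on
    the command line are always rewritten; anything else must match
    the test-file naming convention."""
    base = os.path.basename(path)
    if base == "conftest.py":
        return True
    if path in initpaths:
        return True
    return any(fnmatch.fnmatch(base, pat) for pat in FNPATS)

print(should_rewrite("pkg/conftest.py"))   # True
print(should_rewrite("pkg/test_foo.py"))   # True
print(should_rewrite("pkg/helpers.py"))    # False
print(should_rewrite("pkg/helpers.py", initpaths={"pkg/helpers.py"}))  # True
```

The unconditional ``conftest.py`` branch is what implements the changelog entry "``conftest.py`` files now benefit from assertion rewriting".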


@@ -225,9 +225,10 @@ def _diff_text(left, right, verbose=False):
                             'characters in diff, use -v to show') % i]
         left = left[:-i]
         right = right[:-i]
+    keepends = True
     explanation += [line.strip('\n')
-                    for line in ndiff(left.splitlines(),
-                                      right.splitlines())]
+                    for line in ndiff(left.splitlines(keepends),
+                                      right.splitlines(keepends))]
     return explanation
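Why ``keepends`` fixes the trailing-newline diff (`#1553`_) is visible directly in ``difflib``: without it, two strings that differ only in a trailing newline split into identical line lists, so ``ndiff`` reports no difference at all.

```python
from difflib import ndiff

# Two strings that differ only in the trailing newline.
left, right = "spam\neggs\n", "spam\neggs"

# Without keepends both split to ['spam', 'eggs'], so ndiff
# sees two equal sequences and the difference is lost.
no_keep = list(ndiff(left.splitlines(), right.splitlines()))
print(no_keep)  # ['  spam', '  eggs']

# With keepends the last elements differ ('eggs\n' vs 'eggs'),
# so the diff actually reports a change.
keep = list(ndiff(left.splitlines(True), right.splitlines(True)))
print(any(line.startswith(("- ", "+ ")) for line in keep))  # True
```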


@@ -463,7 +463,7 @@ def _readline_workaround():
     Pdb uses readline support where available--when not running from the Python
     prompt, the readline module is not imported until running the pdb REPL.  If
-    running py.test with the --pdb option this means the readline module is not
+    running pytest with the --pdb option this means the readline module is not
     imported until after I/O capture has been started.
     This is a problem for pyreadline, which is often used to implement readline


@@ -146,23 +146,19 @@ def get_optionflags(parent):
     return flag_acc
-class DoctestTextfile(DoctestItem, pytest.Module):
+class DoctestTextfile(pytest.Module):
+    obj = None
-    def runtest(self):
+    def collect(self):
         import doctest
-        fixture_request = _setup_fixtures(self)
         # inspired by doctest.testfile; ideally we would use it directly,
         # but it doesn't support passing a custom checker
         text = self.fspath.read()
         filename = str(self.fspath)
         name = self.fspath.basename
-        globs = dict(getfixture=fixture_request.getfuncargvalue)
-        if '__name__' not in globs:
-            globs['__name__'] = '__main__'
-        for name, value in fixture_request.getfuncargvalue('doctest_namespace').items():
-            globs[name] = value
+        globs = {'__name__': '__main__'}
         optionflags = get_optionflags(self)
         runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
@@ -170,8 +166,8 @@ class DoctestTextfile(pytest.Module):
         parser = doctest.DocTestParser()
         test = parser.get_doctest(text, globs, name, filename, 0)
-        _check_all_skipped(test)
-        runner.run(test)
+        if test.examples:
+            yield DoctestItem(test.name, self, runner, test)
 def _check_all_skipped(test):
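The new ``collect`` behaviour hinges on ``doctest.DocTestParser.get_doctest`` returning a test whose ``examples`` list is empty for plain prose, so no ``DoctestItem`` is yielded and the file no longer shows up as "skipped". A small stdlib-only illustration (the ``.txt`` filenames are made up for the sketch):

```python
import doctest

parser = doctest.DocTestParser()
globs = {'__name__': '__main__'}

# Plain prose: get_doctest succeeds but finds no examples,
# so the collect() above would yield nothing.
prose = "Just narrative text, no doctests here.\n"
empty = parser.get_doctest(prose, globs, "prose.txt", "prose.txt", 0)
print(len(empty.examples))  # 0

# A real doctest produces one example, hence one DoctestItem.
snippet = ">>> 1 + 1\n2\n"
found = parser.get_doctest(snippet, globs, "snippet.txt", "snippet.txt", 0)
print(len(found.examples))  # 1
```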


@@ -99,7 +99,7 @@ def pytest_namespace():
 def freeze_includes():
     """
-    Returns a list of module names used by py.test that should be
+    Returns a list of module names used by pytest that should be
     included by cx_freeze.
     """
     result = list(_iter_all_modules(py))


@@ -92,8 +92,8 @@ def showhelp(config):
     tw.line()
     tw.line()
-    tw.line("to see available markers type: py.test --markers")
-    tw.line("to see available fixtures type: py.test --fixtures")
+    tw.line("to see available markers type: pytest --markers")
+    tw.line("to see available fixtures type: pytest --fixtures")
     tw.line("(shown according to specified file_or_dir or current dir "
             "if not specified)")


@@ -34,7 +34,7 @@ def pytest_addoption(parser):
     .. note::
         This function should be implemented only in plugins or ``conftest.py``
-        files situated at the tests root directory due to how py.test
+        files situated at the tests root directory due to how pytest
         :ref:`discovers plugins during startup <pluginorder>`.
     :arg parser: To add command line options, call


@@ -1,7 +1,5 @@
 """ core implementation of testing process: init, session, runtest loop. """
-import imp
 import os
-import re
 import sys

 import _pytest
@@ -25,8 +23,6 @@ EXIT_INTERNALERROR = 3
 EXIT_USAGEERROR = 4
 EXIT_NOTESTSCOLLECTED = 5

-name_re = re.compile("^[a-zA-Z_]\w*$")
-
 def pytest_addoption(parser):
     parser.addini("norecursedirs", "directory patterns to avoid for recursion",
         type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg'])
@ -53,6 +49,9 @@ def pytest_addoption(parser):
group.addoption('--setupplan', '--setup-plan', action="store_true", group.addoption('--setupplan', '--setup-plan', action="store_true",
help="show what fixtures and tests would be executed but don't" help="show what fixtures and tests would be executed but don't"
" execute anything.") " execute anything.")
group._addoption("--continue-on-collection-errors", action="store_true",
default=False, dest="continue_on_collection_errors",
help="Force test execution even if collection errors occur.")
group = parser.getgroup("collect", "collection") group = parser.getgroup("collect", "collection")
group.addoption('--collectonly', '--collect-only', action="store_true", group.addoption('--collectonly', '--collect-only', action="store_true",
@@ -138,20 +137,16 @@ def pytest_collection(session):
     return session.perform_collect()

 def pytest_runtestloop(session):
+    if (session.testsfailed and
+            not session.config.option.continue_on_collection_errors):
+        raise session.Interrupted(
+            "%d errors during collection" % session.testsfailed)
+
     if session.config.option.collectonly:
         return True

-    def getnextitem(i):
-        # this is a function to avoid python2
-        # keeping sys.exc_info set when calling into a test
-        # python2 keeps sys.exc_info till the frame is left
-        try:
-            return session.items[i+1]
-        except IndexError:
-            return None
-
     for i, item in enumerate(session.items):
-        nextitem = getnextitem(i)
+        nextitem = session.items[i+1] if i+1 < len(session.items) else None
         item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
         if session.shouldstop:
             raise session.Interrupted(session.shouldstop)
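The hunk above replaces the old `getnextitem` closure with an inline conditional lookahead. As a standalone sketch (operating on a plain list rather than pytest's session object), the pattern is:

```python
def pairs_with_next(items):
    """Yield (item, nextitem) pairs, where nextitem is None for the last
    element -- the same lookahead pattern used in pytest_runtestloop above
    (a sketch, not pytest's actual code)."""
    for i, item in enumerate(items):
        nextitem = items[i + 1] if i + 1 < len(items) else None
        yield item, nextitem

assert list(pairs_with_next(["t1", "t2", "t3"])) == [
    ("t1", "t2"), ("t2", "t3"), ("t3", None)]
assert list(pairs_with_next([])) == []
```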
@@ -397,7 +392,10 @@ class Node(object):
         if self.config.option.fulltrace:
             style="long"
         else:
+            tb = _pytest._code.Traceback([excinfo.traceback[-1]])
             self._prunetraceback(excinfo)
+            if len(excinfo.traceback) == 0:
+                excinfo.traceback = tb
             tbfilter = False  # prunetraceback already does it
         if style == "auto":
             style = "long"
@ -408,7 +406,13 @@ class Node(object):
else: else:
style = "long" style = "long"
return excinfo.getrepr(funcargs=True, try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals, showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter) style=style, tbfilter=tbfilter)
@@ -654,36 +658,32 @@ class Session(FSCollector):
         return True

     def _tryconvertpyarg(self, x):
-        mod = None
-        path = [os.path.abspath('.')] + sys.path
-        for name in x.split('.'):
-            # ignore anything that's not a proper name here
-            # else something like --pyargs will mess up '.'
-            # since imp.find_module will actually sometimes work for it
-            # but it's supposed to be considered a filesystem path
-            # not a package
-            if name_re.match(name) is None:
-                return x
-            try:
-                fd, mod, type_ = imp.find_module(name, path)
-            except ImportError:
-                return x
-            else:
-                if fd is not None:
-                    fd.close()
-            if type_[2] != imp.PKG_DIRECTORY:
-                path = [os.path.dirname(mod)]
-            else:
-                path = [mod]
-        return mod
+        """Convert a dotted module name to path.
+        """
+        import pkgutil
+        try:
+            loader = pkgutil.find_loader(x)
+        except ImportError:
+            return x
+        if loader is None:
+            return x
+        # This method is sometimes invoked when AssertionRewritingHook, which
+        # does not define a get_filename method, is already in place:
+        try:
+            path = loader.get_filename()
+        except AttributeError:
+            # Retrieve path from AssertionRewritingHook:
+            path = loader.modules[x][0].co_filename
+        if loader.is_package(x):
+            path = os.path.dirname(path)
+        return path

     def _parsearg(self, arg):
         """ return (fspath, names) tuple after checking the file exists. """
-        arg = str(arg)
-        if self.config.option.pyargs:
-            arg = self._tryconvertpyarg(arg)
         parts = str(arg).split("::")
+        if self.config.option.pyargs:
+            parts[0] = self._tryconvertpyarg(parts[0])
         relpath = parts[0].replace("/", os.sep)
         path = self.config.invocation_dir.join(relpath, abs=True)
         if not path.check():
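The rewritten `_tryconvertpyarg` resolves a dotted module name to a filesystem path via the import machinery instead of `imp.find_module`. A minimal standalone sketch of the same idea, using `importlib.util.find_spec` (the modern stand-in for the `pkgutil.find_loader` call in the patch; the function name here is illustrative):

```python
import importlib.util
import os

def module_name_to_path(x):
    """Resolve a dotted module name to its source path, or return the
    argument unchanged if it is not importable -- so plain filesystem
    paths pass through, as --pyargs handling requires."""
    try:
        spec = importlib.util.find_spec(x)
    except ImportError:
        # e.g. "tests/x.py" is a path, not a module name
        return x
    if spec is None or spec.origin is None:
        return x
    path = spec.origin
    if spec.submodule_search_locations:
        # packages resolve to their directory, not __init__.py
        path = os.path.dirname(path)
    return path

assert os.path.isdir(module_name_to_path("json"))         # stdlib package
assert module_name_to_path("tests/x.py") == "tests/x.py"  # path passes through
```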

View File

@@ -123,15 +123,18 @@ def getexecutable(name, cache={}):
     except KeyError:
         executable = py.path.local.sysfind(name)
         if executable:
+            import subprocess
+            popen = subprocess.Popen([str(executable), "--version"],
+                universal_newlines=True, stderr=subprocess.PIPE)
+            out, err = popen.communicate()
             if name == "jython":
-                import subprocess
-                popen = subprocess.Popen([str(executable), "--version"],
-                    universal_newlines=True, stderr=subprocess.PIPE)
-                out, err = popen.communicate()
                 if not err or "2.5" not in err:
                     executable = None
                 if "2.5.2" in err:
                     executable = None  # http://bugs.jython.org/issue1790
+            elif popen.returncode != 0:
+                # Handle pyenv's 127.
+                executable = None
         cache[name] = executable
     return executable
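The restructured check runs `--version` for every candidate executable and then treats a nonzero return code (such as the 127 a pyenv shim exits with when the requested version is not installed) as "not usable". A self-contained sketch of that probe, using `sys.executable` in place of the discovered interpreter:

```python
import subprocess
import sys

def probe_executable(exe):
    """Return True if "<exe> --version" runs and exits 0 -- the same
    usability test applied in getexecutable() above (sketch only)."""
    popen = subprocess.Popen([str(exe), "--version"],
                             universal_newlines=True,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
    out, err = popen.communicate()
    # A pyenv shim for a missing version exits 127; any nonzero return
    # code means the executable is not usable.
    return popen.returncode == 0

assert probe_executable(sys.executable)
```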
@@ -374,10 +377,10 @@ class RunResult:

 class Testdir:
-    """Temporary test directory with tools to test/run py.test itself.
+    """Temporary test directory with tools to test/run pytest itself.

     This is based on the ``tmpdir`` fixture but provides a number of
-    methods which aid with testing py.test itself.  Unless
+    methods which aid with testing pytest itself.  Unless
     :py:meth:`chdir` is used all methods will use :py:attr:`tmpdir` as
     current working directory.
@@ -588,7 +591,7 @@ class Testdir:
         """Return the collection node of a file.

         This is like :py:meth:`getnode` but uses
-        :py:meth:`parseconfigure` to create the (configured) py.test
+        :py:meth:`parseconfigure` to create the (configured) pytest
         Config instance.

         :param path: A :py:class:`py.path.local` instance of the file.
@@ -656,7 +659,7 @@ class Testdir:
         :py:class:`HookRecorder` instance.

         This runs the :py:func:`pytest.main` function to run all of
-        py.test inside the test process itself like
+        pytest inside the test process itself like
         :py:meth:`inline_run`.  However the return value is a tuple of
         the collection items and a :py:class:`HookRecorder` instance.
@@ -669,7 +672,7 @@ class Testdir:
         """Run ``pytest.main()`` in-process, returning a HookRecorder.

         This runs the :py:func:`pytest.main` function to run all of
-        py.test inside the test process itself.  This means it can
+        pytest inside the test process itself.  This means it can
         return a :py:class:`HookRecorder` instance which gives more
         detailed results from then run then can be done by matching
         stdout/stderr from :py:meth:`runpytest`.
@@ -755,9 +758,9 @@ class Testdir:
         return args

     def parseconfig(self, *args):
-        """Return a new py.test Config instance from given commandline args.
+        """Return a new pytest Config instance from given commandline args.

-        This invokes the py.test bootstrapping code in _pytest.config
+        This invokes the pytest bootstrapping code in _pytest.config
         to create a new :py:class:`_pytest.core.PluginManager` and
         call the pytest_cmdline_parse hook to create new
         :py:class:`_pytest.config.Config` instance.
@@ -777,7 +780,7 @@ class Testdir:
         return config

     def parseconfigure(self, *args):
-        """Return a new py.test configured Config instance.
+        """Return a new pytest configured Config instance.

         This returns a new :py:class:`_pytest.config.Config` instance
         like :py:meth:`parseconfig`, but also calls the
@@ -792,7 +795,7 @@ class Testdir:
     def getitem(self, source, funcname="test_func"):
         """Return the test item for a test function.

-        This writes the source to a python file and runs py.test's
+        This writes the source to a python file and runs pytest's
         collection on the resulting module, returning the test item
         for the requested function name.
@@ -812,7 +815,7 @@ class Testdir:
     def getitems(self, source):
         """Return all test items collected from the module.

-        This writes the source to a python file and runs py.test's
+        This writes the source to a python file and runs pytest's
         collection on the resulting module, returning all test items
         contained within.
@@ -824,7 +827,7 @@ class Testdir:
         """Return the module collection node for ``source``.

         This writes ``source`` to a file using :py:meth:`makepyfile`
-        and then runs the py.test collection on it, returning the
+        and then runs the pytest collection on it, returning the
         collection node for the test module.

         :param source: The source code of the module to collect.
@@ -924,7 +927,7 @@ class Testdir:
     def _getpytestargs(self):
         # we cannot use "(sys.executable,script)"
-        # because on windows the script is e.g. a py.test.exe
+        # because on windows the script is e.g. a pytest.exe
         return (sys.executable, _pytest_fullpath,)  # noqa

     def runpython(self, script):
@@ -939,7 +942,7 @@ class Testdir:
         return self.run(sys.executable, "-c", command)

     def runpytest_subprocess(self, *args, **kwargs):
-        """Run py.test as a subprocess with given arguments.
+        """Run pytest as a subprocess with given arguments.

         Any plugins added to the :py:attr:`plugins` list will added
         using the ``-p`` command line option.  Addtionally
@@ -967,9 +970,9 @@ class Testdir:
         return self.run(*args)

     def spawn_pytest(self, string, expect_timeout=10.0):
-        """Run py.test using pexpect.
+        """Run pytest using pexpect.

-        This makes sure to use the right py.test and sets up the
+        This makes sure to use the right pytest and sets up the
         temporary directory locations.

         The pexpect child is returned.
The pexpect child is returned. The pexpect child is returned.

View File

@@ -2031,6 +2031,25 @@ class FixtureRequest(FuncargnamesCompatAttr):
         except (AttributeError, ValueError):
             param = NOTSET
             param_index = 0
+            if fixturedef.params is not None:
+                frame = inspect.stack()[3]
+                frameinfo = inspect.getframeinfo(frame[0])
+                source_path = frameinfo.filename
+                source_lineno = frameinfo.lineno
+                source_path = py.path.local(source_path)
+                if source_path.relto(funcitem.config.rootdir):
+                    source_path = source_path.relto(funcitem.config.rootdir)
+                msg = (
+                    "The requested fixture has no parameter defined for the "
+                    "current test.\n\nRequested fixture '{0}' defined in:\n{1}"
+                    "\n\nRequested here:\n{2}:{3}".format(
+                        fixturedef.argname,
+                        getlocation(fixturedef.func, funcitem.config.rootdir),
+                        source_path,
+                        source_lineno,
+                    )
+                )
+                pytest.fail(msg)
         else:
             # indices might not be set if old-style metafunc.addcall() was used
             param_index = funcitem.callspec.indices.get(argname, 0)
@@ -2173,7 +2192,7 @@ class FixtureLookupError(LookupError):
             available.append(name)
         msg = "fixture %r not found" % (self.argname,)
         msg += "\n available fixtures: %s" %(", ".join(available),)
-        msg += "\n use 'py.test --fixtures [testpath]' for help on them."
+        msg += "\n use 'pytest --fixtures [testpath]' for help on them."
         return FixtureLookupErrorRepr(fspath, lineno, tblines, msg, self.argname)
@@ -2369,7 +2388,7 @@ class FixtureManager:
         else:
             if marker.name:
                 name = marker.name
-            assert not name.startswith(self._argprefix)
+            assert not name.startswith(self._argprefix), name
             fixturedef = FixtureDef(self, nodeid, name, obj,
                                     marker.scope, marker.params,
                                     unittest=unittest, ids=marker.ids)

View File

@@ -17,7 +17,7 @@
 #
 # If you're wondering how this is created: you can create it yourself if you
 # have a complete pytest installation by using this command on the command-
-# line: ``py.test --genscript=runtests.py``.
+# line: ``pytest --genscript=runtests.py``.

 sources = """
 @SOURCES@"""

View File

@@ -7,7 +7,7 @@ Release announcements
    sprint2016
+   release-2.9.2
    release-2.9.1
    release-2.9.0
    release-2.8.7

View File

@@ -24,7 +24,7 @@ following::
 to assert that your function returns a certain value. If this assertion fails
 you will see the return value of the function call::

-    $ py.test test_assert1.py
+    $ pytest test_assert1.py
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -168,7 +168,7 @@ when it encounters comparisons. For example::
 if you run this module::

-    $ py.test test_assert2.py
+    $ pytest test_assert2.py
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -237,7 +237,7 @@ now, given this test module::
 you can run the test module and get the custom output defined in
 the conftest file::

-    $ py.test -q test_foocompare.py
+    $ pytest -q test_foocompare.py
     F
     ======= FAILURES ========
     _______ test_compare ________

View File

@@ -18,11 +18,11 @@ For global activation of all argcomplete enabled python applications run::

 For permanent (but not global) ``pytest`` activation, use::

-    register-python-argcomplete py.test >> ~/.bashrc
+    register-python-argcomplete pytest >> ~/.bashrc

 For one-time activation of argcomplete for ``pytest`` only, use::

-    eval "$(register-python-argcomplete py.test)"
+    eval "$(register-python-argcomplete pytest)"

View File

@@ -77,7 +77,7 @@ Builtin fixtures/function arguments
 You can ask for available builtin or project-custom
 :ref:`fixtures <fixtures>` by typing::

-    $ py.test -q --fixtures
+    $ pytest -q --fixtures
     cache
         Return a cache object that can persist state between testing sessions.

View File

@@ -15,7 +15,7 @@ Usage
 ---------

 The plugin provides two command line options to rerun failures from the
-last ``py.test`` invocation:
+last ``pytest`` invocation:

 * ``--lf``, ``--last-failed`` - to only re-run the failures.
 * ``--ff``, ``--failed-first`` - to run the failures first and then the rest of
@@ -25,7 +25,7 @@ For cleanup (usually not needed), a ``--cache-clear`` option allows to remove
 all cross-session cache contents ahead of a test run.

 Other plugins may access the `config.cache`_ object to set/get
-**json encodable** values between ``py.test`` invocations.
+**json encodable** values between ``pytest`` invocations.

 .. note::
@@ -49,7 +49,7 @@ First, let's create 50 test invocation of which only 2 fail::

 If you run this for the first time you will see two failures::

-    $ py.test -q
+    $ pytest -q
     .................F.......F........................
     ======= FAILURES ========
     _______ test_num[17] ________
@@ -78,7 +78,7 @@ If you run this for the first time you will see two failures::

 If you then run it with ``--lf``::

-    $ py.test --lf
+    $ pytest --lf
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     run-last-failure: rerun last 2 failures
@@ -119,7 +119,7 @@ Now, if you run with the ``--ff`` option, all tests will be run but the first
 previous failures will be executed first (as can be seen from the series
 of ``FF`` and dots)::

-    $ py.test --ff
+    $ pytest --ff
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     run-last-failure: rerun last 2 failures first
@@ -163,7 +163,7 @@ The new config.cache object
 Plugins or conftest.py support code can get a cached value using the
 pytest ``config`` object.  Here is a basic example plugin which
 implements a :ref:`fixture` which re-uses previously created state
-across py.test invocations::
+across pytest invocations::

     # content of test_caching.py
     import pytest
@@ -184,7 +184,7 @@ across py.test invocations::
 If you run this command once, it will take a while because
 of the sleep::

-    $ py.test -q
+    $ pytest -q
     F
     ======= FAILURES ========
     _______ test_function ________
@@ -201,7 +201,7 @@ of the sleep::
 If you run it a second time the value will be retrieved from
 the cache and this will be quick::

-    $ py.test -q
+    $ pytest -q
     F
     ======= FAILURES ========
     _______ test_function ________
@@ -222,9 +222,9 @@ Inspecting Cache content
 -------------------------------

 You can always peek at the content of the cache using the
-``--cache-clear`` command line option::
+``--cache-show`` command line option::

-    $ py.test --cache-clear
+    $ py.test --cache-show
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -250,7 +250,7 @@ Clearing Cache content
 You can instruct pytest to clear all cache files and values
 by adding the ``--cache-clear`` option like this::

-    py.test --cache-clear
+    pytest --cache-clear

 This is recommended for invocations from Continous Integration
 servers where isolation and correctness is more important
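The `config.cache` object documented above stores json-encodable values as files under the project directory. A rough standalone sketch of that get/set behavior (the `SimpleCache` class below is hypothetical, not pytest's actual implementation):

```python
import json
import os
import tempfile

class SimpleCache:
    """Json-file-backed key/value store, loosely modeled on config.cache."""
    def __init__(self, directory):
        self._dir = directory

    def _path(self, key):
        # keys like "example/value" map to nested files
        return os.path.join(self._dir, *key.split("/"))

    def get(self, key, default):
        try:
            with open(self._path(key)) as f:
                return json.load(f)
        except (OSError, ValueError):
            return default

    def set(self, key, value):
        path = self._path(key)
        parent = os.path.dirname(path)
        if not os.path.isdir(parent):
            os.makedirs(parent)
        with open(path, "w") as f:
            json.dump(value, f)

cache = SimpleCache(tempfile.mkdtemp())
assert cache.get("example/value", None) is None        # nothing stored yet
cache.set("example/value", [1, 2, 3])
assert cache.get("example/value", None) == [1, 2, 3]   # survives re-read
```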

View File

@@ -36,9 +36,9 @@ There are two ways in which ``pytest`` can perform capturing:

 You can influence output capturing mechanisms from the command line::

-    py.test -s            # disable all capturing
-    py.test --capture=sys # replace sys.stdout/stderr with in-mem files
-    py.test --capture=fd  # also point filedescriptors 1 and 2 to temp file
+    pytest -s             # disable all capturing
+    pytest --capture=sys  # replace sys.stdout/stderr with in-mem files
+    pytest --capture=fd   # also point filedescriptors 1 and 2 to temp file

 .. _printdebugging:
@@ -62,7 +62,7 @@ is that you can use print statements for debugging::
 and running this module will show you precisely the output
 of the failing function and hide the other one::

-    $ py.test
+    $ pytest
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:

View File

@@ -7,7 +7,7 @@ Command line options and configuration file settings
 You can get help on command line options and values in INI-style
 configurations files by using the general help option::

-    py.test -h   # prints options _and_ config file settings
+    pytest -h   # prints options _and_ config file settings

 This will display command line and configuration file settings
 which were registered by installed plugins.
@@ -62,7 +62,7 @@ per-testrun information.

 Example::

-    py.test path/to/testdir path/other/
+    pytest path/to/testdir path/other/

 will determine the common ancestor as ``path`` and then
 check for ini-files as follows::
@@ -126,9 +126,9 @@ Builtin configuration file options
        [pytest]
        addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info

-   issuing ``py.test test_hello.py`` actually means::
+   issuing ``pytest test_hello.py`` actually means::

-        py.test --maxfail=2 -rf test_hello.py
+        pytest --maxfail=2 -rf test_hello.py

    Default is to add no options.
@@ -218,7 +218,7 @@ Builtin configuration file options
 .. confval:: doctest_optionflags

    One or more doctest flag names from the standard ``doctest`` module.
-   :doc:`See how py.test handles doctests <doctest>`.
+   :doc:`See how pytest handles doctests <doctest>`.

 .. confval:: confcutdir

View File

@@ -6,7 +6,7 @@ By default all files matching the ``test*.txt`` pattern will
 be run through the python standard ``doctest`` module.  You
 can change the pattern by issuing::

-    py.test --doctest-glob='*.rst'
+    pytest --doctest-glob='*.rst'

 on the command line. Since version ``2.9``, ``--doctest-glob``
 can be given multiple times in the command-line.
@@ -15,7 +15,7 @@ You can also trigger running of doctests
 from docstrings in all python modules (including regular
 python test modules)::

-    py.test --doctest-modules
+    pytest --doctest-modules

 You can make these changes permanent in your project by
 putting them into a pytest.ini file like this:
@@ -45,9 +45,9 @@ and another like this::
        """
        return 42

-then you can just invoke ``py.test`` without command line options::
+then you can just invoke ``pytest`` without command line options::

-    $ py.test
+    $ pytest
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
@@ -68,7 +68,7 @@ Also, :ref:`usefixtures` and :ref:`autouse` fixtures are supported
 when executing text doctest files.

 The standard ``doctest`` module provides some setting flags to configure the
-strictness of doctest tests. In py.test You can enable those flags those flags
+strictness of doctest tests. In pytest You can enable those flags those flags
 using the configuration file. To make pytest ignore trailing whitespaces and
 ignore lengthy exception stack traces you can just write:
@@ -77,7 +77,7 @@ ignore lengthy exception stack traces you can just write:
    [pytest]
    doctest_optionflags= NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL

-py.test also introduces new options to allow doctests to run in Python 2 and
+pytest also introduces new options to allow doctests to run in Python 2 and
 Python 3 unchanged:

 * ``ALLOW_UNICODE``: when enabled, the ``u`` prefix is stripped from unicode
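The ``doctest_optionflags`` names in the hunk above map directly onto the standard ``doctest`` module's flag constants. A small sketch showing the effect of ``NORMALIZE_WHITESPACE`` with the stdlib API alone:

```python
import doctest

# An example whose expected output differs from the actual output only
# in the amount of whitespace:
source = '>>> print("a", "b")\na    b\n'
test = doctest.DocTestParser().get_doctest(source, {}, "ws_demo", None, 0)

discard = lambda s: None  # suppress failure reports on stdout

strict = doctest.DocTestRunner()
assert strict.run(test, out=discard).failed == 1  # whitespace mismatch

relaxed = doctest.DocTestRunner(optionflags=doctest.NORMALIZE_WHITESPACE)
assert relaxed.run(test, out=discard).failed == 0  # runs of whitespace match
```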

View File

@@ -29,7 +29,7 @@ You can "mark" a test function with custom metadata like this::

 You can then restrict a test run to only run tests marked with ``webtest``::

-    $ py.test -v -m webtest
+    $ pytest -v -m webtest
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -43,7 +43,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::

 Or the inverse, running all tests except the webtest ones::

-    $ py.test -v -m "not webtest"
+    $ pytest -v -m "not webtest"
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -64,7 +64,7 @@ You can provide one or more :ref:`node IDs <node-id>` as positional
 arguments to select only specified tests. This makes it easy to select
 tests based on their module, class, method, or function name::

-    $ py.test -v test_server.py::TestClass::test_method
+    $ pytest -v test_server.py::TestClass::test_method
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -77,7 +77,7 @@ tests based on their module, class, method, or function name::

 You can also select on the class::

-    $ py.test -v test_server.py::TestClass
+    $ pytest -v test_server.py::TestClass
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -90,7 +90,7 @@ You can also select on the class::

 Or select multiple nodes::

-    $ py.test -v test_server.py::TestClass test_server.py::test_send_http
+    $ pytest -v test_server.py::TestClass test_server.py::test_send_http
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -115,8 +115,8 @@ Or select multiple nodes::
    ``module.py::function[param]``.

 Node IDs for failing tests are displayed in the test summary info
-when running py.test with the ``-rf`` option.  You can also
-construct Node IDs from the output of ``py.test --collectonly``.
+when running pytest with the ``-rf`` option.  You can also
+construct Node IDs from the output of ``pytest --collectonly``.

 Using ``-k expr`` to select tests based on their name
 -------------------------------------------------------
@@ -128,7 +128,7 @@ which implements a substring match on the test names instead of the
 exact match on markers that ``-m`` provides.  This makes it easy to
 select tests based on their names::

-    $ py.test -v -k http  # running with the above defined example module
+    $ pytest -v -k http  # running with the above defined example module
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -142,7 +142,7 @@ select tests based on their names::

 And you can also run all tests except the ones that match the keyword::

-    $ py.test -k "not send_http" -v
+    $ pytest -k "not send_http" -v
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
@@ -158,7 +158,7 @@ And you can also run all tests except the ones that match the keyword::
Or to select "http" and "quick" tests:: Or to select "http" and "quick" tests::
$ py.test -k "http or quick" -v $ pytest -k "http or quick" -v
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
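The keyword expressions above ("http or quick", "not send_http") behave like boolean formulas over substring matches against test names. A toy model of that matching (illustrative only; pytest's real ``-k`` implementation also matches markers and parametrize IDs and handles more syntax):

```python
import re

def matches(test_name, expr):
    """Toy model of pytest's -k selection: every word in the expression
    becomes a substring test against the test name; 'and', 'or', 'not'
    keep their usual boolean meaning."""
    def replace_word(m):
        word = m.group(0)
        if word in ("and", "or", "not"):
            return word
        return repr(word in test_name)
    return eval(re.sub(r"\w+", replace_word, expr))

tests = ["test_send_http", "test_something_quick", "test_another", "test_method"]
print([t for t in tests if matches(t, "http or quick")])
# → ['test_send_http', 'test_something_quick']
print([t for t in tests if matches(t, "not send_http")])
# → ['test_something_quick', 'test_another', 'test_method']
```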
@@ -198,7 +198,7 @@ Registering markers for your test suite is simple::
You can ask which markers exist for your test suite - the list includes our just defined ``webtest`` markers::
-$ py.test --markers
+$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@@ -225,7 +225,7 @@ For an example on how to add and work with markers from a plugin, see
* there is one place in your test suite defining your markers
-* asking for existing markers via ``py.test --markers`` gives good output
+* asking for existing markers via ``pytest --markers`` gives good output
* typos in function markers are treated as an error if you use
the ``--strict`` option. Future versions of ``pytest`` are probably
@@ -350,7 +350,7 @@ A test file using this local plugin::
and an example invocation specifying a different environment than what
the test needs::
-$ py.test -E stage2
+$ pytest -E stage2
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -362,7 +362,7 @@ the test needs::
and here is one that specifies exactly the environment needed::
-$ py.test -E stage1
+$ pytest -E stage1
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -374,7 +374,7 @@ and here is one that specifies exactly the environment needed::
The ``--markers`` option always gives you a list of available markers::
-$ py.test --markers
+$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
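The ``-E stage2`` / ``-E stage1`` runs above hinge on one small rule: skip a test when its ``env`` marker names environments that do not include the one given on the command line, and never skip unmarked tests. A minimal sketch of that rule (the helper name is hypothetical, not the conftest.py from the docs):

```python
def should_skip(marker_envs, cli_env):
    """Skip when the test is marked for specific environments and the
    environment chosen via -E is not among them."""
    return bool(marker_envs) and cli_env not in marker_envs

# test marked @pytest.mark.env("stage1"), run with -E stage2 → skipped
print(should_skip(("stage1",), "stage2"))   # → True
# same test, run with -E stage1 → executed
print(should_skip(("stage1",), "stage1"))   # → False
# an unmarked test runs under any environment
print(should_skip((), "stage2"))            # → False
```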
@@ -427,7 +427,7 @@ test function. From a conftest file we can read it like this::
Let's run this without capturing output and see what we get::
-$ py.test -q -s
+$ pytest -q -s
glob args=('function',) kwargs={'x': 3}
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
@@ -483,7 +483,7 @@ Let's do a little test file to show what this looks like::
then you will see two tests skipped and two executed tests as expected::
-$ py.test -rs # this option reports skip reasons
+$ pytest -rs # this option reports skip reasons
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -497,7 +497,7 @@ then you will see two tests skipped and two executed tests as expected::
Note that if you specify a platform via the marker-command line option like this::
-$ py.test -m linux2
+$ pytest -m linux2
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -549,7 +549,7 @@ We want to dynamically define two markers and can do it in a
We can now use the ``-m`` option to select one set::
-$ py.test -m interface --tb=short
+$ pytest -m interface --tb=short
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -571,7 +571,7 @@ We can now use the ``-m`` option to select one set::
or to select both "event" and "interface" tests::
-$ py.test -m "interface or event" --tb=short
+$ pytest -m "interface or event" --tb=short
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
View File
@@ -25,7 +25,7 @@ You can create a simple example file:
and if you installed `PyYAML`_ or a compatible YAML-parser you can
now execute the test specification::
-nonpython $ py.test test_simple.yml
+nonpython $ pytest test_simple.yml
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
@@ -57,7 +57,7 @@ your own domain specific testing language this way.
``reportinfo()`` is used for representing the test location and is also
consulted when reporting in ``verbose`` mode::
-nonpython $ py.test -v
+nonpython $ pytest -v
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
@@ -79,7 +79,7 @@ consulted when reporting in ``verbose`` mode::
While developing your custom test collection and execution it's also
interesting to just look at the collection tree::
-nonpython $ py.test --collect-only
+nonpython $ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
View File
@@ -44,14 +44,14 @@ Now we add a test configuration like this::
This means that we only run 2 tests if we do not pass ``--all``::
-$ py.test -q test_compute.py
+$ pytest -q test_compute.py
..
2 passed in 0.12 seconds
We run only two computations, so we see two dots.
Let's run the full monty::
-$ py.test -q --all
+$ pytest -q --all
....F
======= FAILURES ========
_______ test_compute[4] ________
@@ -128,7 +128,7 @@ label generated by ``idfn``, but because we didn't generate a label for ``timede
objects, they are still using the default pytest representation::
-$ py.test test_time.py --collect-only
+$ pytest test_time.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -179,7 +179,7 @@ only have to work a bit to construct the correct arguments for pytest's
this is a fully self-contained example which you can run with::
-$ py.test test_scenarios.py
+$ pytest test_scenarios.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -192,7 +192,7 @@ this is a fully self-contained example which you can run with::
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
-$ py.test --collect-only test_scenarios.py
+$ pytest --collect-only test_scenarios.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -257,7 +257,7 @@ creates a database object for the actual test invocations::
Let's first see what it looks like at collection time::
-$ py.test test_backends.py --collect-only
+$ pytest test_backends.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -270,7 +270,7 @@ Let's first see what it looks like at collection time::
And then when we run the test::
-$ py.test -q test_backends.py
+$ pytest -q test_backends.py
.F
======= FAILURES ========
_______ test_db_initialized[d2] ________
@@ -318,7 +318,7 @@ will be passed to respective fixture function::
The result of this test will be successful::
-$ py.test test_indirect_list.py --collect-only
+$ pytest test_indirect_list.py --collect-only
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -366,7 +366,7 @@ parametrizer`_ but in a lot less code::
Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function. Let's run it::
-$ py.test -q
+$ pytest -q
F..
======= FAILURES ========
_______ TestClass.test_equals[1-2] ________
@@ -396,7 +396,7 @@ is to be run with different sets of arguments for its three arguments:
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
-. $ py.test -rs -q multipython.py
+. $ pytest -rs -q multipython.py
...........................
27 passed in 0.12 seconds
@@ -443,7 +443,7 @@ And finally a little test module::
If you run this with reporting for skips enabled::
-$ py.test -rs test_module.py
+$ pytest -rs test_module.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
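The ``idfn`` mechanism mentioned above can be mimicked outside pytest: each argument value is passed to the ID function, and a ``None`` return falls back to a default label, with per-argument labels joined by ``-``. A sketch under assumptions (the ``a%d`` fallback is a stand-in; pytest's actual default renders e.g. ``timedelta`` values differently):

```python
from datetime import datetime, timedelta

def idfn(val):
    # user-supplied ID function: label datetimes, let pytest handle the rest
    if isinstance(val, datetime):
        return val.strftime("%Y%m%d")
    return None  # None → fall back to the default label

def make_id(argvalues, idmaker=idfn, fallback=lambda i, v: "a%d" % i):
    # join per-argument labels with "-", as pytest does for multi-arg sets
    parts = []
    for i, val in enumerate(argvalues):
        label = idmaker(val)
        parts.append(label if label is not None else fallback(i, val))
    return "-".join(parts)

print(make_id((datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1))))
# → 20011212-20011211-a2
```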
View File
@@ -1,5 +1,5 @@
-# run this with $ py.test --collect-only test_collectonly.py
+# run this with $ pytest --collect-only test_collectonly.py
#
def test_function():
    pass
View File
@@ -80,7 +80,7 @@ that match ``*_check``. For example, if we have::
then the test collection looks like this::
-$ py.test --collect-only
+$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: setup.cfg
@@ -107,7 +107,7 @@ interpreting arguments as python package names, deriving
their file system path and then running the test. For
example if you have unittest2 installed you can type::
-py.test --pyargs unittest2.test.test_skipping -q
+pytest --pyargs unittest2.test.test_skipping -q
which would run the respective test module. Like with
other options, through an ini-file and the :confval:`addopts` option you
@@ -117,7 +117,7 @@ can make this change more permanently::
[pytest]
addopts = --pyargs
-Now a simple invocation of ``py.test NAME`` will check
+Now a simple invocation of ``pytest NAME`` will check
if NAME exists as an importable package/module and otherwise
treat it as a filesystem path.
@@ -126,7 +126,7 @@ Finding out what is collected
You can always peek at the collection tree without running tests like this::
-. $ py.test --collect-only pythoncollection.py
+. $ pytest --collect-only pythoncollection.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
@@ -180,7 +180,7 @@ and a setup.py dummy file like this::
then a pytest run on Python2 will find the one test and will leave out the
setup.py file::
-#$ py.test --collect-only
+#$ pytest --collect-only
====== test session starts ======
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
@@ -193,7 +193,7 @@ setup.py file::
If you run with a Python3 interpreter both the one test and the setup.py file
will be left out::
-$ py.test --collect-only
+$ pytest --collect-only
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
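The ``--pyargs`` behavior described above (try the argument as an importable package/module name first, otherwise treat it as a filesystem path) can be approximated with the standard library. This is a rough model under stated assumptions, not pytest's actual resolution code:

```python
import importlib.util

def resolve_pyargs(name):
    """Sketch of --pyargs: derive a filesystem path from an importable
    name; fall back to interpreting the argument as a path."""
    try:
        spec = importlib.util.find_spec(name)
    except (ImportError, ValueError):
        spec = None  # e.g. "tests/foo.py" is not a valid module name
    if spec is not None and spec.origin is not None:
        return spec.origin
    return name  # not importable: treat as a filesystem path

print(resolve_pyargs("json"))          # stdlib package → .../json/__init__.py
print(resolve_pyargs("tests/foo.py"))  # → tests/foo.py
```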
View File
@@ -11,7 +11,7 @@ get on the terminal - we are working on that):
.. code-block:: python
-assertion $ py.test failure_demo.py
+assertion $ pytest failure_demo.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
View File
@@ -37,7 +37,7 @@ provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`::
Let's run this without supplying our new option::
-$ py.test -q test_sample.py
+$ pytest -q test_sample.py
F
======= FAILURES ========
_______ test_answer ________
@@ -59,7 +59,7 @@ Let's run this without supplying our new option::
And now with supplying a command line option::
-$ py.test -q --cmdopt=type2
+$ pytest -q --cmdopt=type2
F
======= FAILURES ========
_______ test_answer ________
@@ -106,7 +106,7 @@ you will now always perform test runs using a number
of subprocesses close to your CPU. Running in an empty
directory with the above conftest.py::
-$ py.test
+$ pytest
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -154,7 +154,7 @@ We can now write a test module like this::
and when running it will see a skipped "slow" test::
-$ py.test -rs # "-rs" means report details on the little 's'
+$ pytest -rs # "-rs" means report details on the little 's'
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -168,7 +168,7 @@ and when running it will see a skipped "slow" test::
Or run it including the ``slow`` marked test::
-$ py.test --runslow
+$ pytest --runslow
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -204,7 +204,7 @@ of tracebacks: the ``checkconfig`` function will not be shown
unless the ``--full-trace`` command line option is specified.
Let's run our little function::
-$ py.test -q test_checkconfig.py
+$ pytest -q test_checkconfig.py
F
======= FAILURES ========
_______ test_something ________
@@ -282,7 +282,7 @@ It's easy to present extra information in a ``pytest`` run::
which will add the string to the test header accordingly::
-$ py.test
+$ pytest
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
project deps: mylib-1.1
@@ -295,18 +295,18 @@ which will add the string to the test header accordingly::
You can also return a list of strings which will be considered as several
lines of information. You can of course also make the amount of reporting
-information depend on e.g. the value of ``config.option.verbose`` so that
+information depend on e.g. the value of ``config.getoption('verbose')`` so that
you present more information appropriately::
# content of conftest.py
def pytest_report_header(config):
-    if config.option.verbose > 0:
+    if config.getoption('verbose') > 0:
        return ["info1: did you know that ...", "did you?"]
which will add info only when run with "-v"::
-$ py.test -v
+$ pytest -v
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache
@@ -319,7 +319,7 @@ which will add info only when run with "-v"::
and nothing when run plainly::
-$ py.test
+$ pytest
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
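The verbose-only header behavior above boils down to a branch on the verbosity level. A standalone sketch of that logic (a plain function with an illustrative name, not wired into pytest's hook machinery):

```python
def report_header_sketch(verbose):
    """Mimics the conftest.py hook above: extra header lines are
    returned only when verbosity is raised (as with pytest -v)."""
    if verbose > 0:
        return ["info1: did you know that ...", "did you?"]
    return []

print(report_header_sketch(verbose=1))  # lines shown with -v
print(report_header_sketch(verbose=0))  # → []
```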
@@ -352,7 +352,7 @@ out which tests are the slowest. Let's make an artificial test suite::
Now we can profile which test functions execute the slowest::
-$ py.test --durations=3
+$ pytest --durations=3
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -414,7 +414,7 @@ tests in a class. Here is a test module example::
If we run this::
-$ py.test -rx
+$ pytest -rx
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -485,7 +485,7 @@ the ``db`` fixture::
We can run this::
-$ py.test
+$ pytest
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -502,7 +502,7 @@ We can run this::
def test_root(db): # no db here, will error out
fixture 'db' not found
available fixtures: tmpdir_factory, cache, tmpdir, pytestconfig, recwarn, monkeypatch, capfd, record_xml_property, capsys
-use 'py.test --fixtures [testpath]' for help on them.
+use 'pytest --fixtures [testpath]' for help on them.
$REGENDOC_TMPDIR/b/test_error.py:1
======= FAILURES ========
@@ -589,7 +589,7 @@ if you then have failing tests::
and run them::
-$ py.test test_module.py
+$ pytest test_module.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -679,7 +679,7 @@ if you then have failing tests::
and run it::
-$ py.test -s test_module.py
+$ pytest -s test_module.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -767,6 +767,6 @@ over to ``pytest`` instead. For example::
...
This makes it convenient to execute your tests from within your frozen
-application, using standard ``py.test`` command-line options::
+application, using standard ``pytest`` command-line options::
./app_main --pytest --verbose --tb=long --junitxml=results.xml test-suite/
View File
@@ -59,7 +59,7 @@ will be called ahead of running any tests::
If you run this without output capturing::
-$ py.test -q -s test_module.py
+$ pytest -q -s test_module.py
callattr_ahead_of_alltests called
callme called!
callme other called
View File
@@ -81,18 +81,17 @@ You can also turn off all assertion interaction using the
.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py
-Why a ``py.test`` instead of a ``pytest`` command?
-++++++++++++++++++++++++++++++++++++++++++++++++++
+Why can I use both ``pytest`` and ``py.test`` commands?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-Some of the reasons are historic, others are practical. ``pytest``
-used to be part of the ``py`` package which provided several developer
-utilities, all starting with ``py.<TAB>``, thus providing nice
-TAB-completion. If
-you install ``pip install pycmd`` you get these tools from a separate
-package. These days the command line tool could be called ``pytest``
-but since many people have gotten used to the old name and there
-is another tool named "pytest" we just decided to stick with
-``py.test`` for now.
+pytest used to be part of the py package, which provided several developer
+utilities, all starting with ``py.<TAB>``, thus providing nice TAB-completion.
+If you install ``pip install pycmd`` you get these tools from a separate
+package. Once ``pytest`` became a separate package, the ``py.test`` name was
+retained to avoid a naming conflict with another tool. This conflict was
+eventually resolved, and the ``pytest`` command was therefore introduced. In
+future versions of pytest, we may deprecate and later remove the ``py.test``
+command to avoid perpetuating the confusion.
pytest fixtures, parametrized tests
-------------------------------------------------------
View File
@@ -68,7 +68,7 @@ Here, the ``test_ehlo`` needs the ``smtp`` fixture value. pytest
will discover and call the :py:func:`@pytest.fixture <_pytest.python.fixture>`
marked ``smtp`` fixture function. Running the test looks like this::
-$ py.test test_smtpsimple.py
+$ pytest test_smtpsimple.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -113,7 +113,7 @@ with a list of available function arguments.
You can always issue::
-py.test --fixtures test_simplefactory.py
+pytest --fixtures test_simplefactory.py
to see available fixtures.
@@ -186,7 +186,7 @@ function (in or below the directory where ``conftest.py`` is located)::
We deliberately insert failing ``assert 0`` statements in order to
inspect what is going on and can now run the tests::
-$ py.test test_module.py
+$ pytest test_module.py
======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile:
@@ -262,7 +262,7 @@ the fixture in the module has finished execution, regardless of the exception st
Let's execute it::
-$ py.test -s -q --tb=no
+$ pytest -s -q --tb=no
FFteardown smtp
2 failed in 0.12 seconds
@ -350,7 +350,7 @@ We use the ``request.module`` attribute to optionally obtain an
``smtpserver`` attribute from the test module. If we just execute ``smtpserver`` attribute from the test module. If we just execute
again, nothing much has changed:: again, nothing much has changed::
$ py.test -s -q --tb=no $ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com) FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
2 failed in 0.12 seconds 2 failed in 0.12 seconds
@ -367,7 +367,7 @@ server URL in its module namespace::
Running it:: Running it::
$ py.test -qq --tb=short test_anothersmtp.py $ pytest -qq --tb=short test_anothersmtp.py
F F
======= FAILURES ======== ======= FAILURES ========
_______ test_showhelo ________ _______ test_showhelo ________
@ -414,7 +414,7 @@ for each of which the fixture function will execute and can access
a value via ``request.param``. No test function code needs to change. a value via ``request.param``. No test function code needs to change.
So let's just do another run:: So let's just do another run::
$ py.test -q test_module.py $ pytest -q test_module.py
FFFF FFFF
======= FAILURES ======== ======= FAILURES ========
_______ test_ehlo[smtp.gmail.com] ________ _______ test_ehlo[smtp.gmail.com] ________
@ -514,7 +514,7 @@ return ``None`` then pytest's auto-generated ID will be used.
Running the above tests results in the following test IDs being used:: Running the above tests results in the following test IDs being used::
$ py.test --collect-only $ pytest --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
@ -565,7 +565,7 @@ and instantiate an object ``app`` where we stick the already defined
Here we declare an ``app`` fixture which receives the previously defined Here we declare an ``app`` fixture which receives the previously defined
``smtp`` fixture and instantiates an ``App`` object with it. Let's run it:: ``smtp`` fixture and instantiates an ``App`` object with it. Let's run it::
$ py.test -v test_appsetup.py $ pytest -v test_appsetup.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
@ -634,7 +634,7 @@ to show the setup/teardown flow::
Let's run the tests in verbose mode and with looking at the print-output:: Let's run the tests in verbose mode and with looking at the print-output::
$ py.test -v -s test_module.py $ pytest -v -s test_module.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
@ -736,7 +736,7 @@ will be required for the execution of each test method, just as if
you specified a "cleandir" function argument to each of them. Let's run it you specified a "cleandir" function argument to each of them. Let's run it
to verify our fixture is activated and the tests pass:: to verify our fixture is activated and the tests pass::
$ py.test -q $ pytest -q
.. ..
2 passed in 0.12 seconds 2 passed in 0.12 seconds
@ -817,7 +817,7 @@ class-level ``usefixtures`` decorator.
If we run it, we get two passing tests:: If we run it, we get two passing tests::
$ py.test -q $ pytest -q
.. ..
2 passed in 0.12 seconds 2 passed in 0.12 seconds
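The fixture flow walked through above can be sketched as a tiny self-contained run. This is a hedged sketch, not the docs' actual smtp example: the ``conn`` fixture name and the dummy dict resource are illustrative stand-ins for ``smtplib.SMTP``, and the module is executed in-process via ``pytest.main`` (equivalent to running ``pytest -q test_conn.py`` from the shell).

```python
import os
import tempfile
import textwrap

import pytest

# Illustrative stand-in for the docs' smtp example: a module-scoped
# fixture whose teardown runs after the last test in the module.
TEST_MODULE = textwrap.dedent("""
    import pytest

    @pytest.fixture(scope="module")
    def conn():
        resource = {"open": True}   # stand-in for smtplib.SMTP(...)
        yield resource              # provide the fixture value to tests
        resource["open"] = False    # teardown, runs once per module

    def test_reads_conn(conn):
        assert conn["open"]
""")

def run_module():
    """Write the module to a temp dir and run it as 'pytest -q <file>'."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "test_conn.py")
        with open(path, "w") as fp:
            fp.write(TEST_MODULE)
        # pytest.main takes the same arguments as the command line and
        # returns the exit status (0 means all tests passed).
        return pytest.main(["-q", "-p", "no:cacheprovider", path])
```

Driving the run through ``pytest.main`` keeps the sketch verifiable without shelling out to the command-line script.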

View File

@@ -180,9 +180,9 @@ and obsolete many prior uses of pytest hooks.
 funcargs/fixture discovery now happens at collection time
 ---------------------------------------------------------------------

-pytest-2.3 takes care to discover fixture/funcarg factories
+Since pytest-2.3, discovery of fixture/funcarg factories is taken care of
 at collection time. This is more efficient especially for large test suites.
-Moreover, a call to "py.test --collect-only" should be able to in the future
+Moreover, a call to "pytest --collect-only" should be able to in the future
 show a lot of setup-information and thus presents a nice method to get an
 overview of fixture management in your project.

View File

@@ -26,7 +26,7 @@ Installation options::
 To check your installation has installed the correct version::

-    $ py.test --version
+    $ pytest --version
     This is pytest version 2.9.2, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py

 If you get an error check out :ref:`installation issues`.
@@ -47,7 +47,7 @@ Let's create a first test file with a simple test function::
 That's it. You can execute the test function now::

-    $ py.test
+    $ pytest
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -102,7 +102,7 @@ use the ``raises`` helper::
 Running it, this time in "quiet" reporting mode::

-    $ py.test -q test_sysexit.py
+    $ pytest -q test_sysexit.py
     .
     1 passed in 0.12 seconds
@@ -127,7 +127,7 @@ The two tests are found because of the standard :ref:`test discovery`.
 There is no need to subclass anything. We can simply
 run the module by passing its filename::

-    $ py.test -q test_class.py
+    $ pytest -q test_class.py
     .F
     ======= FAILURES ========
     _______ TestClass.test_two ________
@@ -163,7 +163,7 @@ We list the name ``tmpdir`` in the test function signature and
 ``pytest`` will look up and call a fixture factory to create the resource
 before performing the test function call. Let's just run it::

-    $ py.test -q test_tmpdir.py
+    $ pytest -q test_tmpdir.py
     F
     ======= FAILURES ========
     _______ test_needsfiles ________
@@ -185,7 +185,7 @@ was created. More info at :ref:`tmpdir handling`.
 You can find out what kind of builtin :ref:`fixtures` exist by typing::

-    py.test --fixtures   # shows builtin and custom fixtures
+    pytest --fixtures   # shows builtin and custom fixtures

 Where to go next
 -------------------------------------
@@ -213,12 +213,12 @@ easy_install or pip not found?
 Install `setuptools`_ to get ``easy_install`` which allows you to install
 ``.egg`` binary format packages in addition to source-based ones.

-py.test not found on Windows despite installation?
+pytest not found on Windows despite installation?
 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

 .. _`Python for Windows`: http://www.imladris.com/Scripts/PythonForWindows.html

-- **Windows**: If "easy_install" or "py.test" are not found
+- **Windows**: If "easy_install" or "pytest" are not found
   you need to add the Python script path to your ``PATH``, see here:
   `Python for Windows`_. You may alternatively use an `ActivePython install`_
   which does this for you automatically.
@@ -228,8 +228,8 @@ pytest not found on Windows despite installation?
 .. _`Jython does not create command line launchers`: http://bugs.jython.org/issue1491

 - **Jython2.5.1 on Windows XP**: `Jython does not create command line launchers`_
-  so ``py.test`` will not work correctly. You may install py.test on
-  CPython and type ``py.test --genscript=mytest`` and then use
+  so ``pytest`` will not work correctly. You may install pytest on
+  CPython and type ``pytest --genscript=mytest`` and then use
   ``jython mytest`` to run your tests with Jython using ``pytest``.

 :ref:`examples` for more complex examples
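The ``raises`` helper mentioned in this getting-started section can be exercised directly; this mirrors the ``test_sysexit.py`` example the quiet-mode run above refers to:

```python
import pytest

def f():
    # the code under test exits the interpreter
    raise SystemExit(1)

def test_mytest():
    # passes because f() raises SystemExit, exactly what pytest.raises expects
    with pytest.raises(SystemExit):
        f()
```

Calling ``test_mytest()`` completes without error; if ``f()`` stopped raising ``SystemExit``, the ``pytest.raises`` block would fail the test instead.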

View File

@@ -72,17 +72,17 @@ Important notes relating to both schemes:
 - With inlined tests you might put ``__init__.py`` into test
   directories and make them installable as part of your application.
-  Using the ``py.test --pyargs mypkg`` invocation pytest will
+  Using the ``pytest --pyargs mypkg`` invocation pytest will
   discover where mypkg is installed and collect tests from there.
   With the "external" test you can still distribute tests but they
   will not be installed or become importable.

 Typically you can run tests by pointing to test directories or modules::

-    py.test tests/test_app.py       # for external test dirs
-    py.test mypkg/test/test_app.py  # for inlined test dirs
-    py.test mypkg                   # run tests in all below test directories
-    py.test                         # run all tests below current dir
+    pytest tests/test_app.py       # for external test dirs
+    pytest mypkg/test/test_app.py  # for inlined test dirs
+    pytest mypkg                   # run tests in all below test directories
+    pytest                         # run all tests below current dir
     ...

 Because of the above ``editable install`` mode you can change your
@@ -193,7 +193,7 @@ If you now type::
 this will execute your tests using ``pytest-runner``. As this is a
 standalone version of ``pytest`` no prior installation whatsoever is
 required for calling the test command. You can also pass additional
-arguments to py.test such as your test directory or other
+arguments to pytest such as your test directory or other
 options using ``--addopts``.
@@ -211,7 +211,7 @@ your own setuptools Test command for invoking pytest.

     class PyTest(TestCommand):
-        user_options = [('pytest-args=', 'a', "Arguments to pass to py.test")]
+        user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")]

         def initialize_options(self):
             TestCommand.initialize_options(self)
@@ -240,7 +240,7 @@ using the ``--pytest-args`` or ``-a`` command-line option. For example::

     python setup.py test -a "--durations=5"

-is equivalent to running ``py.test --durations=5``.
+is equivalent to running ``pytest --durations=5``.

 .. _standalone:
@@ -268,7 +268,7 @@ If you are a maintainer or application developer and want people
 who don't deal with python much to easily run tests you may generate
 a standalone ``pytest`` script::

-    py.test --genscript=runtests.py
+    pytest --genscript=runtests.py

 This generates a ``runtests.py`` script which is a fully functional basic
 ``pytest`` script, running unchanged under Python2 and Python3.

View File

@@ -13,7 +13,7 @@ Usage
 After :ref:`installation` type::

     python setup.py develop  # make sure tests can import our package
-    py.test  # instead of 'nosetests'
+    pytest   # instead of 'nosetests'

 and you should be able to run your nose style tests and
 make use of pytest's capabilities.

View File

@@ -53,7 +53,7 @@ Here, the ``@parametrize`` decorator defines three different ``(test_input,expected)``
 tuples so that the ``test_eval`` function will run three times using
 them in turn::

-    $ py.test
+    $ pytest
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -101,7 +101,7 @@ for example with the builtin ``mark.xfail``::
 Let's run this::

-    $ py.test
+    $ pytest
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -171,13 +171,13 @@ command line option and the parametrization of our test function::
 If we now pass two stringinput values, our test will run twice::

-    $ py.test -q --stringinput="hello" --stringinput="world" test_strings.py
+    $ pytest -q --stringinput="hello" --stringinput="world" test_strings.py
     ..
     2 passed in 0.12 seconds

 Let's also run with a stringinput that will lead to a failing test::

-    $ py.test -q --stringinput="!" test_strings.py
+    $ pytest -q --stringinput="!" test_strings.py
     F
     ======= FAILURES ========
     _______ test_valid_string[!] ________
@@ -198,7 +198,7 @@ If you don't specify a stringinput it will be skipped because
 ``metafunc.parametrize()`` will be called with an empty parameter
 list::

-    $ py.test -q -rs test_strings.py
+    $ pytest -q -rs test_strings.py
     s
     ======= short test summary info ========
     SKIP [1] test_strings.py:1: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
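The decorator this section discusses can be sketched as follows. The three ``(test_input, expected)`` tuples are illustrative: the docs' own third case (``("6*9", 42)``) is the deliberately failing/xfail one, so a passing value is used here instead.

```python
import pytest

# Each tuple becomes one generated test item, e.g. test_eval[3+5-8];
# the same function body runs once per parameter set.
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 54),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```

Because ``parametrize`` only attaches a mark (it does not wrap the function), the function can also be called directly with explicit arguments, which is handy for quick checks.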

View File

@@ -59,7 +59,7 @@ Here is a little annotated list for some popular plugins:
   a plugin to run javascript unittests in live browsers.

 To see a complete list of all plugins with their latest testing
-status against different py.test and Python versions, please visit
+status against different pytest and Python versions, please visit
 `plugincompat <http://plugincompat.herokuapp.com/>`_.

 You may also discover more plugins through a `pytest- pypi.python.org search`_.
@@ -90,7 +90,7 @@ Finding out which plugins are active
 If you want to find out which plugins are active in your
 environment you can type::

-    py.test --trace-config
+    pytest --trace-config

 and will get an extended test header which shows activated plugins
 and their names. It will also print local plugins aka
@@ -103,7 +103,7 @@ Deactivating / unregistering a plugin by name
 You can prevent plugins from loading or unregister them::

-    py.test -p no:NAME
+    pytest -p no:NAME

 This means that any subsequent try to activate/load the named
 plugin will not work.

View File

@@ -19,7 +19,7 @@ information about skipped/xfailed tests is not shown by default to avoid
 cluttering the output. You can use the ``-r`` option to see details
 corresponding to the "short" letters shown in the test progress::

-    py.test -rxs  # show extra info on skips and xfails
+    pytest -rxs  # show extra info on skips and xfails

 (See :ref:`how to change command line options defaults`)
@@ -222,7 +222,7 @@ Here is a simple test file with the several usages:

 Running it with the report-on-xfail option gives this output::

-    example $ py.test -rx xfail_demo.py
+    example $ pytest -rx xfail_demo.py
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR/example, inifile:
@@ -368,6 +368,6 @@ The equivalent with "boolean conditions" is::
 .. note::

     You cannot use ``pytest.config.getvalue()`` in code
-    imported before py.test's argument parsing takes place. For example,
+    imported before pytest's argument parsing takes place. For example,
     ``conftest.py`` files are imported before command line parsing and thus
     ``config.getvalue()`` will not execute correctly.
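A minimal sketch of the ``xfail`` usage the report-on-xfail run above exercises; the reason string is illustrative. Note the mark only changes how the pytest runner reports the test (as ``x``, an expected failure), so the function still raises normally when called directly:

```python
import pytest

@pytest.mark.xfail(reason="illustrative known-failing case")
def test_known_failure():
    # reported as 'x' (expected failure) by the pytest runner
    assert 0
```

Under ``pytest -rx`` such a test shows up in the short summary with its reason string rather than failing the run.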

View File

@@ -21,7 +21,7 @@ but note that project specific settings will be considered
 first. There is a flag that helps you debug your
 conftest.py configurations::

-    py.test --trace-config
+    pytest --trace-config

 customizing the collecting and running process

View File

@@ -5,7 +5,7 @@ Mission
 ``pytest`` strives to make testing a fun and no-boilerplate effort.

 The tool is distributed as a `pytest` package. Its project independent
-``py.test`` command line tool helps you to:
+``pytest`` command line tool helps you to:

 * rapidly collect and run tests
 * run unit- or doctests, functional or integration tests

View File

@@ -53,7 +53,7 @@ subprocesses.

 Running centralised testing::

-    py.test --cov myproj tests/
+    pytest --cov myproj tests/

 Shows a terminal report::
@@ -76,7 +76,7 @@ file system. Each slave will have its subprocesses measured.

 Running distributed testing with dist mode set to load::

-    py.test --cov myproj -n 2 tests/
+    pytest --cov myproj -n 2 tests/

 Shows a terminal report::
@@ -92,7 +92,7 @@ Shows a terminal report::

 Again but spread over different hosts and different directories::

-    py.test --cov myproj --dist load
+    pytest --cov myproj --dist load
         --tx ssh=memedough@host1//chdir=testenv1
         --tx ssh=memedough@host2//chdir=/tmp/testenv2//python=/tmp/env1/bin/python
         --rsyncdir myproj --rsyncdir tests --rsync examples
@@ -119,7 +119,7 @@ environments.

 Running distributed testing with dist mode set to each::

-    py.test --cov myproj --dist each
+    pytest --cov myproj --dist each
         --tx popen//chdir=/tmp/testenv3//python=/usr/local/python27/bin/python
         --tx ssh=memedough@host2//chdir=/tmp/testenv4//python=/tmp/env2/bin/python
         --rsyncdir myproj --rsyncdir tests --rsync examples
@@ -149,7 +149,7 @@ annotated source code.

 The terminal report without line numbers (default)::

-    py.test --cov-report term --cov myproj tests/
+    pytest --cov-report term --cov myproj tests/

     -------------------- coverage: platform linux2, python 2.6.4-final-0 ---------------------
     Name                 Stmts   Miss  Cover
@@ -163,7 +163,7 @@ The terminal report without line numbers (default)::

 The terminal report with line numbers::

-    py.test --cov-report term-missing --cov myproj tests/
+    pytest --cov-report term-missing --cov myproj tests/

     -------------------- coverage: platform linux2, python 2.6.4-final-0 ---------------------
     Name                 Stmts   Miss  Cover   Missing
@@ -178,7 +178,7 @@ The terminal report with line numbers::

 The remaining three reports output to files without showing anything on the terminal (useful for
 when the output is going to a continuous integration server)::

-    py.test --cov-report html --cov-report xml --cov-report annotate --cov myproj tests/
+    pytest --cov-report html --cov-report xml --cov-report annotate --cov myproj tests/

 Coverage Data File

View File

@@ -26,7 +26,7 @@ Usage

 To get full test coverage reports for a particular package type::

-    py.test --cover-report=report
+    pytest --cover-report=report

 command line options
 --------------------

View File

@@ -24,7 +24,7 @@ Usage

 After installation you can simply type::

-    py.test --figleaf [...]
+    pytest --figleaf [...]

 to enable figleaf coverage in your test run. A default ".figleaf" data file
 and "html" directory will be created. You can use command line options

View File

@@ -14,7 +14,7 @@ Usage

 type::

-    py.test  # instead of 'nosetests'
+    pytest  # instead of 'nosetests'

 and you should be able to run nose style tests and at the same
 time can make full use of pytest's capabilities.
@@ -38,7 +38,7 @@ Unsupported idioms / issues

 If you find other issues or have suggestions please run::

-    py.test --pastebin=all
+    pytest --pastebin=all

 and send the resulting URL to a ``pytest`` contact channel,
 at best to the mailing list.

View File

@@ -36,7 +36,7 @@ Speed up test runs by sending tests to multiple CPUs

 To send tests to multiple CPUs, type::

-    py.test -n NUM
+    pytest -n NUM

 Especially for longer running tests or tests requiring
 a lot of IO this can lead to considerable speed ups.
@@ -47,7 +47,7 @@ Running tests in a Python subprocess

 To instantiate a python2.4 sub process and send tests to it, you may type::

-    py.test -d --tx popen//python=python2.4
+    pytest -d --tx popen//python=python2.4

 This will start a subprocess which is run with the "python2.4"
 Python interpreter, found in your system binary lookup path.
@@ -68,7 +68,7 @@ tests that you can successfully run locally. And you
 have an ssh-reachable machine ``myhost``. Then
 you can ad-hoc distribute your tests by typing::

-    py.test -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg
+    pytest -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg

 This will synchronize your ``mypkg`` package directory
 to a remote ssh account and then locally collect tests
@@ -97,7 +97,7 @@ It will tell you that it starts listening on the default
 port. You can now on your home machine specify this
 new socket host with something like this::

-    py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
+    pytest -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg

 .. _`atonce`:
@@ -107,7 +107,7 @@ Running tests on many platforms at once

 The basic command to run tests on multiple platforms is::

-    py.test --dist=each --tx=spec1 --tx=spec2
+    pytest --dist=each --tx=spec1 --tx=spec2

 If you specify a windows host, an OSX host and a Linux
 environment this command will send each test to all

View File

@@ -27,7 +27,7 @@ and more. Here is an example test usage::
 Running this would result in a passed test except for the last
 ``assert 0`` line which we use to look at values::

-    $ py.test test_tmpdir.py
+    $ pytest test_tmpdir.py
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -100,7 +100,7 @@ than 3 temporary directories will be removed.

 You can override the default temporary directory setting like this::

-    py.test --basetemp=mydir
+    pytest --basetemp=mydir

 When distributing tests on the local machine, ``pytest`` takes care to
 configure a basetemp directory for the sub processes such that all temporary

View File

@@ -21,7 +21,7 @@ Usage

 After :ref:`installation` type::

-    py.test
+    pytest

 and you should be able to run your unittest-style tests if they
 are contained in ``test_*`` modules. If that works for you then
@@ -86,7 +86,7 @@ the pytest fixture function ``db_class`` is called once per class.
 Due to the deliberately failing assert statements, we can take a look at
 the ``self.db`` values in the traceback::

-    $ py.test test_unittest_db.py
+    $ pytest test_unittest_db.py
     ======= test session starts ========
     platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
@@ -161,7 +161,7 @@ on the class like in the previous example.

 Running this test module ...::

-    $ py.test -q test_unittest_cleandir.py
+    $ pytest -q test_unittest_cleandir.py
     .
     1 passed in 0.12 seconds

View File

@@ -16,7 +16,7 @@ You can invoke testing through the Python interpreter from the command line::

     python -m pytest [...]

-This is equivalent to invoking the command line script ``py.test [...]``
+This is equivalent to invoking the command line script ``pytest [...]``
 directly.

 Getting help on version, option names, environment variables
@@ -24,9 +24,9 @@ Getting help on version, option names, environment variables

 ::

-    py.test --version   # shows where pytest was imported from
-    py.test --fixtures  # show available builtin function arguments
-    py.test -h | --help # show help on command line and config file options
+    pytest --version   # shows where pytest was imported from
+    pytest --fixtures  # show available builtin function arguments
+    pytest -h | --help # show help on command line and config file options

 Stopping after the first (or N) failures
@@ -34,45 +34,45 @@ Stopping after the first (or N) failures

 To stop the testing process after the first (N) failures::

-    py.test -x           # stop after first failure
-    py.test --maxfail=2  # stop after two failures
+    pytest -x           # stop after first failure
+    pytest --maxfail=2  # stop after two failures

 Specifying tests / selecting tests
 ---------------------------------------------------

 Several test run options::

-    py.test test_mod.py   # run tests in module
-    py.test somepath      # run all tests below somepath
-    py.test -k stringexpr # only run tests with names that match the
+    pytest test_mod.py   # run tests in module
+    pytest somepath      # run all tests below somepath
+    pytest -k stringexpr # only run tests with names that match the
                           # "string expression", e.g. "MyClass and not method"
                           # will select TestMyClass.test_something
                           # but not TestMyClass.test_method_simple
-    py.test test_mod.py::test_func  # only run tests that match the "node ID",
+    pytest test_mod.py::test_func  # only run tests that match the "node ID",
                           # e.g "test_mod.py::test_func" will select
                           # only test_func in test_mod.py
-    py.test test_mod.py::TestClass::test_method  # run a single method in
+    pytest test_mod.py::TestClass::test_method  # run a single method in
                           # a single class

 Import 'pkg' and use its filesystem location to find and run tests::

-    py.test --pyargs pkg # run all tests found below directory of pkg
+    pytest --pyargs pkg # run all tests found below directory of pkg

 Modifying Python traceback printing
 ----------------------------------------------

 Examples for modifying traceback printing::

-    py.test --showlocals # show local variables in tracebacks
-    py.test -l           # show local variables (shortcut)
-    py.test --tb=auto    # (default) 'long' tracebacks for the first and last
-                         # entry, but 'short' style for the other entries
-    py.test --tb=long    # exhaustive, informative traceback formatting
-    py.test --tb=short   # shorter traceback format
-    py.test --tb=line    # only one line per failure
+    pytest --showlocals # show local variables in tracebacks
+    pytest -l           # show local variables (shortcut)
+    pytest --tb=auto    # (default) 'long' tracebacks for the first and last
+                        # entry, but 'short' style for the other entries
+    pytest --tb=long    # exhaustive, informative traceback formatting
+    pytest --tb=short   # shorter traceback format
+    pytest --tb=line    # only one line per failure
py.test --tb=native # Python standard library formatting pytest --tb=native # Python standard library formatting
py.test --tb=no # no traceback at all pytest --tb=no # no traceback at all
The ``--full-trace`` causes very long traces to be printed on error (longer The ``--full-trace`` causes very long traces to be printed on error (longer
than ``--tb=long``). It also ensures that a stack trace is printed on than ``--tb=long``). It also ensures that a stack trace is printed on
@ -90,14 +90,14 @@ Dropping to PDB_ (Python Debugger) on failures
Python comes with a builtin Python debugger called PDB_. ``pytest`` Python comes with a builtin Python debugger called PDB_. ``pytest``
allows one to drop into the PDB_ prompt via a command line option:: allows one to drop into the PDB_ prompt via a command line option::
py.test --pdb pytest --pdb
This will invoke the Python debugger on every failure. Often you might This will invoke the Python debugger on every failure. Often you might
only want to do this for the first failing test to understand a certain only want to do this for the first failing test to understand a certain
failure situation:: failure situation::
py.test -x --pdb # drop to PDB on first failure, then end test session pytest -x --pdb # drop to PDB on first failure, then end test session
py.test --pdb --maxfail=3 # drop to PDB for first three failures pytest --pdb --maxfail=3 # drop to PDB for first three failures
Note that on any failure the exception information is stored on Note that on any failure the exception information is stored on
``sys.last_value``, ``sys.last_type`` and ``sys.last_traceback``. In ``sys.last_value``, ``sys.last_type`` and ``sys.last_traceback``. In
@ -125,7 +125,7 @@ can use a helper::
.. versionadded: 2.0.0 .. versionadded: 2.0.0
Prior to pytest version 2.0.0 you could only enter PDB_ tracing if you disabled Prior to pytest version 2.0.0 you could only enter PDB_ tracing if you disabled
capturing on the command line via ``py.test -s``. In later versions, pytest capturing on the command line via ``pytest -s``. In later versions, pytest
automatically disables its output capture when you enter PDB_ tracing: automatically disables its output capture when you enter PDB_ tracing:
* Output capture in other tests is not affected. * Output capture in other tests is not affected.
@ -141,7 +141,7 @@ automatically disables its output capture when you enter PDB_ tracing:
Since pytest version 2.4.0 you can also use the native Python Since pytest version 2.4.0 you can also use the native Python
``import pdb;pdb.set_trace()`` call to enter PDB_ tracing without having to use ``import pdb;pdb.set_trace()`` call to enter PDB_ tracing without having to use
the ``pytest.set_trace()`` wrapper or explicitly disable pytest's output the ``pytest.set_trace()`` wrapper or explicitly disable pytest's output
capturing via ``py.test -s``. capturing via ``pytest -s``.
.. _durations: .. _durations:
@ -152,7 +152,7 @@ Profiling test execution duration
To get a list of the slowest 10 test durations:: To get a list of the slowest 10 test durations::
py.test --durations=10 pytest --durations=10
Creating JUnitXML format files Creating JUnitXML format files
@ -161,7 +161,7 @@ Creating JUnitXML format files
To create result files which can be read by Jenkins_ or other Continuous To create result files which can be read by Jenkins_ or other Continuous
integration servers, use this invocation:: integration servers, use this invocation::
py.test --junitxml=path pytest --junitxml=path
to create an XML file at ``path``. to create an XML file at ``path``.
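CI servers consume that XML report; as a rough sketch of what a consumer does (the ``sample`` string below is a made-up JUnit-style fragment, not real pytest output), the file can be inspected with the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit-style fragment standing in for a real `pytest --junitxml=path` report.
sample = '<testsuite name="pytest" tests="3" failures="1" errors="0" skips="0"></testsuite>'
suite = ET.fromstring(sample)
failures = int(suite.get("failures"))
assert failures == 1
```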
@ -253,7 +253,7 @@ Creating resultlog format files
To create plain-text machine-readable result files you can issue:: To create plain-text machine-readable result files you can issue::
py.test --resultlog=path pytest --resultlog=path
and look at the content at the ``path`` location. Such files are used e.g. and look at the content at the ``path`` location. Such files are used e.g.
by the `PyPy-test`_ web page to show test results over several revisions. by the `PyPy-test`_ web page to show test results over several revisions.
@ -266,7 +266,7 @@ Sending test report to online pastebin service
**Creating a URL for each test failure**:: **Creating a URL for each test failure**::
py.test --pastebin=failed pytest --pastebin=failed
This will submit test run information to a remote Paste service and This will submit test run information to a remote Paste service and
provide a URL for each failure. You may select tests as usual or add provide a URL for each failure. You may select tests as usual or add
@ -274,7 +274,7 @@ for example ``-x`` if you only want to send one particular failure.
**Creating a URL for a whole test session log**:: **Creating a URL for a whole test session log**::
py.test --pastebin=all pytest --pastebin=all
Currently only pasting to the http://bpaste.net service is implemented. Currently only pasting to the http://bpaste.net service is implemented.
@ -285,9 +285,9 @@ To disable loading specific plugins at invocation time, use the ``-p`` option
together with the prefix ``no:``. together with the prefix ``no:``.
Example: to disable loading the plugin ``doctest``, which is responsible for Example: to disable loading the plugin ``doctest``, which is responsible for
executing doctest tests from text files, invoke py.test like this:: executing doctest tests from text files, invoke pytest like this::
py.test -p no:doctest pytest -p no:doctest
.. _`pytest.main-usage`: .. _`pytest.main-usage`:
@ -300,7 +300,7 @@ You can invoke ``pytest`` from Python code directly::
pytest.main() pytest.main()
this acts as if you would call "py.test" from the command line. this acts as if you would call "pytest" from the command line.
It will not raise ``SystemExit`` but return the exitcode instead. It will not raise ``SystemExit`` but return the exitcode instead.
You can pass in options and arguments:: You can pass in options and arguments::
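A minimal in-process invocation might look like this (the throwaway test file and temporary directory are illustrative, not part of the documented API):

```python
import pathlib
import tempfile
import textwrap

import pytest

# Write a throwaway passing test and run it in-process; pytest.main() returns
# the exit code (0 == all tests passed) instead of raising SystemExit.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "test_demo.py").write_text(textwrap.dedent("""\
    def test_ok():
        assert 1 + 1 == 2
"""))
exitcode = pytest.main(["-q", str(tmp)])
assert exitcode == 0
```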


@ -87,8 +87,8 @@ sub directory but not for other directories::
Here is how you might run it:: Here is how you might run it::
py.test test_flat.py # will not show "setting up" pytest test_flat.py # will not show "setting up"
py.test a/test_sub.py # will show "setting up" pytest a/test_sub.py # will show "setting up"
.. Note:: .. Note::
If you have ``conftest.py`` files which do not reside in a If you have ``conftest.py`` files which do not reside in a


@ -52,7 +52,7 @@ Speed up test runs by sending tests to multiple CPUs
To send tests to multiple CPUs, type:: To send tests to multiple CPUs, type::
py.test -n NUM pytest -n NUM
Especially for longer running tests or tests requiring Especially for longer running tests or tests requiring
a lot of I/O this can lead to considerable speed ups. a lot of I/O this can lead to considerable speed ups.
@ -63,14 +63,14 @@ Running tests in a Python subprocess
To instantiate a Python-2.7 subprocess and send tests to it, you may type:: To instantiate a Python-2.7 subprocess and send tests to it, you may type::
py.test -d --tx popen//python=python2.7 pytest -d --tx popen//python=python2.7
This will start a subprocess which is run with the "python2.7" This will start a subprocess which is run with the "python2.7"
Python interpreter, found in your system binary lookup path. Python interpreter, found in your system binary lookup path.
If you prefix the --tx option value like this:: If you prefix the --tx option value like this::
py.test -d --tx 3*popen//python=python2.7 pytest -d --tx 3*popen//python=python2.7
then three subprocesses would be created and the tests then three subprocesses would be created and the tests
will be distributed to three subprocesses and run simultaneously. will be distributed to three subprocesses and run simultaneously.
@ -84,7 +84,7 @@ Running tests in looponfailing mode
For refactoring a project with a medium or large test suite For refactoring a project with a medium or large test suite
you can use the looponfailing mode. Simply add the ``-f`` option:: you can use the looponfailing mode. Simply add the ``-f`` option::
py.test -f pytest -f
and ``pytest`` will run your tests. Assuming you have failures it will then and ``pytest`` will run your tests. Assuming you have failures it will then
wait for file changes and re-run the failing test set. File changes are detected by looking at ``looponfailingroots`` root directories and all of their contents (recursively). If the default for this value does not work for you you wait for file changes and re-run the failing test set. File changes are detected by looking at ``looponfailingroots`` root directories and all of their contents (recursively). If the default for this value does not work for you you
@ -104,7 +104,7 @@ tests that you can successfully run locally. And you also
have a ssh-reachable machine ``myhost``. Then have a ssh-reachable machine ``myhost``. Then
you can ad-hoc distribute your tests by typing:: you can ad-hoc distribute your tests by typing::
py.test -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg pytest -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg
This will synchronize your ``mypkg`` package directory This will synchronize your ``mypkg`` package directory
with a remote ssh account and then collect and run your with a remote ssh account and then collect and run your
@ -135,7 +135,7 @@ It will tell you that it starts listening on the default
port. You can now on your home machine specify this port. You can now on your home machine specify this
new socket host with something like this:: new socket host with something like this::
py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg pytest -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
.. _`atonce`: .. _`atonce`:
@ -145,7 +145,7 @@ Running tests on many platforms at once
The basic command to run tests on multiple platforms is:: The basic command to run tests on multiple platforms is::
py.test --dist=each --tx=spec1 --tx=spec2 pytest --dist=each --tx=spec1 --tx=spec2
If you specify a Windows host, an OS X host and a Linux If you specify a Windows host, an OS X host and a Linux
environment this command will send each test to all environment this command will send each test to all
@ -174,7 +174,7 @@ You can also add default environments like this::
and then just type:: and then just type::
py.test --dist=each pytest --dist=each
to run tests in each of the environments. to run tests in each of the environments.


@ -14,4 +14,3 @@
Marking functions as ``yield_fixture`` is still supported, but deprecated and should not Marking functions as ``yield_fixture`` is still supported, but deprecated and should not
be used in new code. be used in new code.


@ -4,17 +4,17 @@
set -e set -e
cd ../pytest-pep8 cd ../pytest-pep8
py.test pytest
cd ../pytest-instafail cd ../pytest-instafail
py.test pytest
cd ../pytest-cache cd ../pytest-cache
py.test pytest
cd ../pytest-xprocess cd ../pytest-xprocess
py.test pytest
#cd ../pytest-cov #cd ../pytest-cov
#py.test #pytest
cd ../pytest-capturelog cd ../pytest-capturelog
py.test pytest
cd ../pytest-xdist cd ../pytest-xdist
py.test pytest


@ -91,6 +91,7 @@ def cmdline_entrypoints(versioninfo, platform, basename):
else: # cpython else: # cpython
points = {'py.test-%s.%s' % versioninfo[:2] : target} points = {'py.test-%s.%s' % versioninfo[:2] : target}
points['py.test'] = target points['py.test'] = target
points['pytest'] = target
return points return points


@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
import os
import sys import sys
import _pytest._code import _pytest._code
@ -120,7 +122,7 @@ class TestGeneralUsage:
"ImportError while importing test module*", "ImportError while importing test module*",
"'No module named *does_not_work*", "'No module named *does_not_work*",
]) ])
assert result.ret == 1 assert result.ret == 2
def test_not_collectable_arguments(self, testdir): def test_not_collectable_arguments(self, testdir):
p1 = testdir.makepyfile("") p1 = testdir.makepyfile("")
@ -513,12 +515,11 @@ class TestInvocationVariants:
path = testdir.mkpydir("tpkg") path = testdir.mkpydir("tpkg")
path.join("test_hello.py").write('raise ImportError') path.join("test_hello.py").write('raise ImportError')
result = testdir.runpytest("--pyargs", "tpkg.test_hello") result = testdir.runpytest_subprocess("--pyargs", "tpkg.test_hello")
assert result.ret != 0 assert result.ret != 0
# FIXME: It would be more natural to match NOT
# "ERROR*file*or*package*not*found*".
result.stdout.fnmatch_lines([ result.stdout.fnmatch_lines([
"*collected 0 items*" "collected*0*items*/*1*errors"
]) ])
def test_cmdline_python_package(self, testdir, monkeypatch): def test_cmdline_python_package(self, testdir, monkeypatch):
@ -540,7 +541,7 @@ class TestInvocationVariants:
def join_pythonpath(what): def join_pythonpath(what):
cur = py.std.os.environ.get('PYTHONPATH') cur = py.std.os.environ.get('PYTHONPATH')
if cur: if cur:
return str(what) + ':' + cur return str(what) + os.pathsep + cur
return what return what
empty_package = testdir.mkpydir("empty_package") empty_package = testdir.mkpydir("empty_package")
monkeypatch.setenv('PYTHONPATH', join_pythonpath(empty_package)) monkeypatch.setenv('PYTHONPATH', join_pythonpath(empty_package))
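The switch to ``os.pathsep`` above is what makes the PYTHONPATH join portable; a standalone sketch of the same idea (the helper name mirrors the test's, but this version is hypothetical):

```python
import os

def join_pythonpath(*paths):
    # os.pathsep is ':' on POSIX and ';' on Windows, so the joined value is
    # valid on either platform, unlike the hard-coded ':' being replaced.
    parts = [str(p) for p in paths]
    cur = os.environ.get("PYTHONPATH")
    if cur:
        parts.append(cur)
    return os.pathsep.join(parts)

assert join_pythonpath("a", "b").startswith("a" + os.pathsep + "b")
```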
@ -551,11 +552,72 @@ class TestInvocationVariants:
]) ])
monkeypatch.setenv('PYTHONPATH', join_pythonpath(testdir)) monkeypatch.setenv('PYTHONPATH', join_pythonpath(testdir))
path.join('test_hello.py').remove() result = testdir.runpytest("--pyargs", "tpkg.test_missing")
result = testdir.runpytest("--pyargs", "tpkg.test_hello")
assert result.ret != 0 assert result.ret != 0
result.stderr.fnmatch_lines([ result.stderr.fnmatch_lines([
"*not*found*test_hello*", "*not*found*test_missing*",
])
def test_cmdline_python_namespace_package(self, testdir, monkeypatch):
"""
test --pyargs option with namespace packages (#1567)
"""
monkeypatch.delenv('PYTHONDONTWRITEBYTECODE', raising=False)
search_path = []
for dirname in "hello", "world":
d = testdir.mkdir(dirname)
search_path.append(d)
ns = d.mkdir("ns_pkg")
ns.join("__init__.py").write(
"__import__('pkg_resources').declare_namespace(__name__)")
lib = ns.mkdir(dirname)
lib.ensure("__init__.py")
lib.join("test_{0}.py".format(dirname)). \
write("def test_{0}(): pass\n"
"def test_other():pass".format(dirname))
# The structure of the test directory is now:
# .
# ├── hello
# │   └── ns_pkg
# │   ├── __init__.py
# │   └── hello
# │   ├── __init__.py
# │   └── test_hello.py
# └── world
# └── ns_pkg
# ├── __init__.py
# └── world
# ├── __init__.py
# └── test_world.py
def join_pythonpath(*dirs):
cur = py.std.os.environ.get('PYTHONPATH')
if cur:
dirs += (cur,)
return os.pathsep.join(str(p) for p in dirs)
monkeypatch.setenv('PYTHONPATH', join_pythonpath(*search_path))
for p in search_path:
monkeypatch.syspath_prepend(p)
# mixed module and filenames:
result = testdir.runpytest("--pyargs", "-v", "ns_pkg.hello", "world/ns_pkg")
assert result.ret == 0
result.stdout.fnmatch_lines([
"*test_hello.py::test_hello*PASSED",
"*test_hello.py::test_other*PASSED",
"*test_world.py::test_world*PASSED",
"*test_world.py::test_other*PASSED",
"*4 passed*"
])
# specify tests within a module
result = testdir.runpytest("--pyargs", "-v", "ns_pkg.world.test_world::test_other")
assert result.ret == 0
result.stdout.fnmatch_lines([
"*test_world.py::test_other*PASSED",
"*1 passed*"
]) ])
def test_cmdline_python_package_not_exists(self, testdir): def test_cmdline_python_package_not_exists(self, testdir):
@ -665,11 +727,13 @@ class TestDurations:
testdir.makepyfile(self.source) testdir.makepyfile(self.source)
testdir.makepyfile(test_collecterror="""xyz""") testdir.makepyfile(test_collecterror="""xyz""")
result = testdir.runpytest("--durations=2", "-k test_1") result = testdir.runpytest("--durations=2", "-k test_1")
assert result.ret != 0 assert result.ret == 2
result.stdout.fnmatch_lines([ result.stdout.fnmatch_lines([
"*durations*", "*Interrupted: 1 errors during collection*",
"*call*test_1*",
]) ])
# Collection errors abort test execution, therefore no duration is
# output
assert "duration" not in result.stdout.str()
def test_with_not(self, testdir): def test_with_not(self, testdir):
testdir.makepyfile(self.source) testdir.makepyfile(self.source)
@ -698,4 +762,3 @@ class TestDurationWithFixture:
* setup *test_1* * setup *test_1*
* call *test_1* * call *test_1*
""") """)


@ -1066,3 +1066,15 @@ def test_repr_traceback_with_unicode(style, encoding):
formatter = FormattedExcinfo(style=style) formatter = FormattedExcinfo(style=style)
repr_traceback = formatter.repr_traceback(e_info) repr_traceback = formatter.repr_traceback(e_info)
assert repr_traceback is not None assert repr_traceback is not None
def test_cwd_deleted(testdir):
testdir.makepyfile("""
def test(tmpdir):
tmpdir.chdir()
tmpdir.remove()
assert False
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines(['* 1 failed in *'])
assert 'INTERNALERROR' not in result.stdout.str() + result.stderr.str()


@ -1,6 +1,6 @@
""" """
This is the script that is actually frozen into an executable: simply executes This is the script that is actually frozen into an executable: simply executes
py.test main(). pytest main().
""" """
if __name__ == '__main__': if __name__ == '__main__':


@ -8,7 +8,7 @@ if __name__ == '__main__':
setup( setup(
name="runtests", name="runtests",
version="0.1", version="0.1",
description="exemple of how embedding py.test into an executable using cx_freeze", description="exemple of how embedding pytest into an executable using cx_freeze",
executables=[Executable("runtests_script.py")], executables=[Executable("runtests_script.py")],
options={"build_exe": {'includes': pytest.freeze_includes()}}, options={"build_exe": {'includes': pytest.freeze_includes()}},
) )


@ -490,6 +490,20 @@ class TestRequestBasic:
print(ss.stack) print(ss.stack)
assert teardownlist == [1] assert teardownlist == [1]
def test_mark_as_fixture_with_prefix_and_decorator_fails(self, testdir):
testdir.makeconftest("""
import pytest
@pytest.fixture
def pytest_funcarg__marked_with_prefix_and_decorator():
pass
""")
result = testdir.runpytest_subprocess()
assert result.ret != 0
result.stdout.fnmatch_lines([
"*AssertionError:*pytest_funcarg__marked_with_prefix_and_decorator*"
])
def test_request_addfinalizer_failing_setup(self, testdir): def test_request_addfinalizer_failing_setup(self, testdir):
testdir.makepyfile(""" testdir.makepyfile("""
import pytest import pytest
@ -2704,3 +2718,108 @@ class TestContextManagerFixtureFuncs:
""".format(flavor=flavor)) """.format(flavor=flavor))
result = testdir.runpytest("-s") result = testdir.runpytest("-s")
result.stdout.fnmatch_lines("*mew*") result.stdout.fnmatch_lines("*mew*")
class TestParameterizedSubRequest:
def test_call_from_fixture(self, testdir):
testfile = testdir.makepyfile("""
import pytest
@pytest.fixture(params=[0, 1, 2])
def fix_with_param(request):
return request.param
@pytest.fixture
def get_named_fixture(request):
return request.getfuncargvalue('fix_with_param')
def test_foo(request, get_named_fixture):
pass
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines("""
E*Failed: The requested fixture has no parameter defined for the current test.
E*
E*Requested fixture 'fix_with_param' defined in:
E*{0}:4
E*Requested here:
E*{1}:9
*1 error*
""".format(testfile.basename, testfile.basename))
def test_call_from_test(self, testdir):
testfile = testdir.makepyfile("""
import pytest
@pytest.fixture(params=[0, 1, 2])
def fix_with_param(request):
return request.param
def test_foo(request):
request.getfuncargvalue('fix_with_param')
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines("""
E*Failed: The requested fixture has no parameter defined for the current test.
E*
E*Requested fixture 'fix_with_param' defined in:
E*{0}:4
E*Requested here:
E*{1}:8
*1 failed*
""".format(testfile.basename, testfile.basename))
def test_external_fixture(self, testdir):
conffile = testdir.makeconftest("""
import pytest
@pytest.fixture(params=[0, 1, 2])
def fix_with_param(request):
return request.param
""")
testfile = testdir.makepyfile("""
def test_foo(request):
request.getfuncargvalue('fix_with_param')
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines("""
E*Failed: The requested fixture has no parameter defined for the current test.
E*
E*Requested fixture 'fix_with_param' defined in:
E*{0}:4
E*Requested here:
E*{1}:2
*1 failed*
""".format(conffile.basename, testfile.basename))
def test_non_relative_path(self, testdir):
tests_dir = testdir.mkdir('tests')
fixdir = testdir.mkdir('fixtures')
fixfile = fixdir.join("fix.py")
fixfile.write(_pytest._code.Source("""
import pytest
@pytest.fixture(params=[0, 1, 2])
def fix_with_param(request):
return request.param
"""))
testfile = tests_dir.join("test_foos.py")
testfile.write(_pytest._code.Source("""
from fix import fix_with_param
def test_foo(request):
request.getfuncargvalue('fix_with_param')
"""))
tests_dir.chdir()
testdir.syspathinsert(fixdir)
result = testdir.runpytest()
result.stdout.fnmatch_lines("""
E*Failed: The requested fixture has no parameter defined for the current test.
E*
E*Requested fixture 'fix_with_param' defined in:
E*{0}:5
E*Requested here:
E*{1}:5
*1 failed*
""".format(fixfile.strpath, testfile.basename))


@ -428,7 +428,7 @@ def test_assert_compare_truncate_longmessage(monkeypatch, testdir):
"*- 3", "*- 3",
"*- 5", "*- 5",
"*- 7", "*- 7",
"*truncated (191 more lines)*use*-vv*", "*truncated (193 more lines)*use*-vv*",
]) ])
@ -626,3 +626,17 @@ def test_set_with_unsortable_elements():
+ repr(3) + repr(3)
""").strip() """).strip()
assert '\n'.join(expl) == dedent assert '\n'.join(expl) == dedent
def test_diff_newline_at_end(monkeypatch, testdir):
testdir.makepyfile(r"""
def test_diff():
assert 'asdf' == 'asdf\n'
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines(r"""
*assert 'asdf' == 'asdf\n'
* - asdf
* + asdf
* ? +
""")


@ -694,6 +694,40 @@ class TestAssertionRewriteHookDetails(object):
result = testdir.runpytest() result = testdir.runpytest()
result.stdout.fnmatch_lines('*1 passed*') result.stdout.fnmatch_lines('*1 passed*')
@pytest.mark.parametrize('initial_conftest', [True, False])
@pytest.mark.parametrize('mode', ['plain', 'rewrite', 'reinterp'])
def test_conftest_assertion_rewrite(self, testdir, initial_conftest, mode):
"""Test that conftest files are using assertion rewrite on import.
(#1619)
"""
testdir.tmpdir.join('foo/tests').ensure(dir=1)
conftest_path = 'conftest.py' if initial_conftest else 'foo/conftest.py'
contents = {
conftest_path: """
import pytest
@pytest.fixture
def check_first():
def check(values, value):
assert values.pop(0) == value
return check
""",
'foo/tests/test_foo.py': """
def test(check_first):
check_first([10, 30], 30)
"""
}
testdir.makepyfile(**contents)
result = testdir.runpytest_subprocess('--assert=%s' % mode)
if mode == 'plain':
expected = 'E AssertionError'
elif mode == 'rewrite':
expected = '*assert 10 == 30*'
elif mode == 'reinterp':
expected = '*AssertionError:*was re-run*'
else:
assert 0
result.stdout.fnmatch_lines([expected])
def test_issue731(testdir): def test_issue731(testdir):
testdir.makepyfile(""" testdir.makepyfile("""


@ -152,7 +152,9 @@ class TestCollectPluginHookRelay:
wascalled = [] wascalled = []
class Plugin: class Plugin:
def pytest_collect_file(self, path, parent): def pytest_collect_file(self, path, parent):
wascalled.append(path) if not path.basename.startswith("."):
# Ignore hidden files, e.g. .testmondata.
wascalled.append(path)
testdir.makefile(".abc", "xyz") testdir.makefile(".abc", "xyz")
pytest.main([testdir.tmpdir], plugins=[Plugin()]) pytest.main([testdir.tmpdir], plugins=[Plugin()])
assert len(wascalled) == 1 assert len(wascalled) == 1
@ -642,3 +644,114 @@ class TestNodekeywords:
""") """)
reprec = testdir.inline_run("-k repr") reprec = testdir.inline_run("-k repr")
reprec.assertoutcome(passed=1, failed=0) reprec.assertoutcome(passed=1, failed=0)
COLLECTION_ERROR_PY_FILES = dict(
test_01_failure="""
def test_1():
assert False
""",
test_02_import_error="""
import asdfasdfasdf
def test_2():
assert True
""",
test_03_import_error="""
import asdfasdfasdf
def test_3():
assert True
""",
test_04_success="""
def test_4():
assert True
""",
)
def test_exit_on_collection_error(testdir):
"""Verify that all collection errors are collected and no tests executed"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest()
assert res.ret == 2
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
"*ERROR collecting test_03_import_error.py*",
"*No module named *asdfa*",
])
def test_exit_on_collection_with_maxfail_smaller_than_n_errors(testdir):
"""
Verify collection is aborted once maxfail errors are encountered ignoring
further modules which would cause more collection errors.
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--maxfail=1")
assert res.ret == 2
res.stdout.fnmatch_lines([
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
"*Interrupted: stopping after 1 failures*",
])
assert 'test_03' not in res.stdout.str()
def test_exit_on_collection_with_maxfail_bigger_than_n_errors(testdir):
"""
Verify the test run aborts due to collection errors even if maxfail count of
errors was not reached.
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--maxfail=4")
assert res.ret == 2
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
"*ERROR collecting test_03_import_error.py*",
"*No module named *asdfa*",
])
def test_continue_on_collection_errors(testdir):
"""
Verify tests are executed even when collection errors occur when the
--continue-on-collection-errors flag is set
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--continue-on-collection-errors")
assert res.ret == 1
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*1 failed, 1 passed, 2 error*",
])
def test_continue_on_collection_errors_maxfail(testdir):
"""
Verify tests are executed even when collection errors occur and that maxfail
is honoured (including the collection error count).
4 tests: 2 collection errors + 1 failure + 1 success
test_4 is never executed because the test run is with --maxfail=3 which
means it is interrupted after the 2 collection errors + 1 failure.
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--continue-on-collection-errors", "--maxfail=3")
assert res.ret == 2
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*Interrupted: stopping after 3 failures*",
"*1 failed, 2 error*",
])


@ -18,7 +18,7 @@ class TestParseIni:
assert config.inicfg['name'] == 'value' assert config.inicfg['name'] == 'value'
def test_getcfg_empty_path(self, tmpdir): def test_getcfg_empty_path(self, tmpdir):
getcfg([''], ['setup.cfg']) #happens on py.test "" getcfg([''], ['setup.cfg']) #happens on pytest ""
def test_append_parse_args(self, testdir, tmpdir, monkeypatch): def test_append_parse_args(self, testdir, tmpdir, monkeypatch):
monkeypatch.setenv('PYTEST_ADDOPTS', '--color no -rs --tb="short"') monkeypatch.setenv('PYTEST_ADDOPTS', '--color no -rs --tb="short"')
@ -485,9 +485,14 @@ def test_load_initial_conftest_last_ordering(testdir):
pm.register(m) pm.register(m)
hc = pm.hook.pytest_load_initial_conftests hc = pm.hook.pytest_load_initial_conftests
l = hc._nonwrappers + hc._wrappers l = hc._nonwrappers + hc._wrappers
assert l[-1].function.__module__ == "_pytest.capture" expected = [
assert l[-2].function == m.pytest_load_initial_conftests "_pytest.config",
assert l[-3].function.__module__ == "_pytest.config" 'test_config',
'_pytest.assertion',
'_pytest.capture',
]
assert [x.function.__module__ for x in l] == expected
class TestWarning: class TestWarning:
def test_warn_config(self, testdir): def test_warn_config(self, testdir):


@ -14,13 +14,16 @@ class TestDoctests:
>>> i-1 >>> i-1
4 4
""") """)
for x in (testdir.tmpdir, checkfile): for x in (testdir.tmpdir, checkfile):
#print "checking that %s returns custom items" % (x,) #print "checking that %s returns custom items" % (x,)
items, reprec = testdir.inline_genitems(x) items, reprec = testdir.inline_genitems(x)
assert len(items) == 1 assert len(items) == 1
assert isinstance(items[0], DoctestTextfile) assert isinstance(items[0], DoctestItem)
assert isinstance(items[0].parent, DoctestTextfile)
# Empty file has no items.
items, reprec = testdir.inline_genitems(w) items, reprec = testdir.inline_genitems(w)
assert len(items) == 1 assert len(items) == 0
def test_collect_module_empty(self, testdir): def test_collect_module_empty(self, testdir):
path = testdir.makepyfile(whatever="#") path = testdir.makepyfile(whatever="#")
@ -199,8 +202,20 @@ class TestDoctests:
"*1 failed*", "*1 failed*",
]) ])
def test_doctest_unex_importerror_only_txt(self, testdir):
testdir.maketxtfile("""
>>> import asdalsdkjaslkdjasd
>>>
""")
result = testdir.runpytest()
# doctest is never executed because of error during hello.py collection
result.stdout.fnmatch_lines([
"*>>> import asdals*",
"*UNEXPECTED*ImportError*",
"ImportError: No module named *asdal*",
])
def test_doctest_unex_importerror(self, testdir): def test_doctest_unex_importerror_with_module(self, testdir):
testdir.tmpdir.join("hello.py").write(_pytest._code.Source(""" testdir.tmpdir.join("hello.py").write(_pytest._code.Source("""
import asdalsdkjaslkdjasd import asdalsdkjaslkdjasd
""")) """))
@@ -209,10 +224,11 @@ class TestDoctests:
             >>>
         """)
         result = testdir.runpytest("--doctest-modules")
-        # doctest is never executed because of error during hello.py collection
         result.stdout.fnmatch_lines([
-            "*>>> import hello",
-            "*UNEXPECTED*ImportError*",
-            "*import asdals*",
+            "*ERROR collecting hello.py*",
+            "*ImportError: No module named *asdals*",
+            "*Interrupted: 1 errors during collection*",
         ])

     def test_doctestmodule(self, testdir):
@@ -595,6 +611,11 @@ class TestDoctestSkips:
         reprec = testdir.inline_run("--doctest-modules")
         reprec.assertoutcome(skipped=1)

+    def test_vacuous_all_skipped(self, testdir, makedoctest):
+        makedoctest('')
+        reprec = testdir.inline_run("--doctest-modules")
+        reprec.assertoutcome(passed=0, skipped=0)
+

 class TestDoctestAutoUseFixtures:
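The collection changes above hinge on how interactive examples are parsed out of a text file, which is why an empty file now yields zero items. A minimal sketch with the stdlib `doctest` module (not pytest's wrapper around it) illustrates the same behaviour:

```python
import doctest

# The stdlib parser underlying doctest collection: an empty source
# contains no ">>>" examples, so there is nothing to collect.
parser = doctest.DocTestParser()
assert parser.get_examples("") == []

# A text with one interactive example yields exactly one parsed example.
examples = parser.get_examples(">>> 5 - 1\n4\n")
print(len(examples))  # -> 1
```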


@@ -0,0 +1,13 @@
+import pkg_resources
+
+import pytest
+
+
+@pytest.mark.parametrize("entrypoint", ['py.test', 'pytest'])
+def test_entry_point_exist(entrypoint):
+    assert entrypoint in pkg_resources.get_entry_map('pytest')['console_scripts']
+
+
+def test_pytest_entry_points_are_identical():
+    entryMap = pkg_resources.get_entry_map('pytest')['console_scripts']
+    assert entryMap['pytest'].module_name == entryMap['py.test'].module_name
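The new test checks that the `py.test` and `pytest` console scripts resolve to the same module. The same equivalence can be sketched with the stdlib `importlib.metadata` instead of `pkg_resources` (an assumption on my part — the test itself predates that API); the `pytest:main` specs below mirror what such entry points look like:

```python
from importlib.metadata import EntryPoint

# Illustrative stand-ins for the two console_scripts specs shipped by pytest;
# an entry point is a "name = module:attr" record.
ep_new = EntryPoint(name='pytest', value='pytest:main', group='console_scripts')
ep_old = EntryPoint(name='py.test', value='pytest:main', group='console_scripts')

# Identical module -> both scripts launch the same code.
print(ep_new.module, ep_old.module)  # -> pytest pytest
```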


@@ -21,8 +21,8 @@ def test_help(testdir):
        *-v*verbose*
        *setup.cfg*
        *minversion*
-        *to see*markers*py.test --markers*
-        *to see*fixtures*py.test --fixtures*
+        *to see*markers*pytest --markers*
+        *to see*fixtures*pytest --fixtures*
     """)

 def test_hookvalidation_unknown(testdir):


@@ -362,7 +362,7 @@ class TestPython:
             file="test_collect_skipped.py",
             name="test_collect_skipped")

-        # py.test doesn't give us a line here.
+        # pytest doesn't give us a line here.
         assert tnode["line"] is None
         fnode = tnode.find_first_by_tag("skipped")


@@ -246,8 +246,8 @@ def test_argcomplete(testdir, monkeypatch):
         pytest.skip("bash not available")
     script = str(testdir.tmpdir.join("test_argcomplete"))
     pytest_bin = sys.argv[0]
-    if "py.test" not in os.path.basename(pytest_bin):
-        pytest.skip("need to be run with py.test executable, not %s" %(pytest_bin,))
+    if "pytest" not in os.path.basename(pytest_bin):
+        pytest.skip("need to be run with pytest executable, not %s" %(pytest_bin,))

     with open(str(script), 'w') as fp:
         # redirect output from argcomplete to stdin and stderr is not trivial
@@ -262,8 +262,8 @@ def test_argcomplete(testdir, monkeypatch):
     monkeypatch.setenv('COMP_WORDBREAKS', ' \\t\\n"\\\'><=;|&(:')

     arg = '--fu'
-    monkeypatch.setenv('COMP_LINE', "py.test " + arg)
-    monkeypatch.setenv('COMP_POINT', str(len("py.test " + arg)))
+    monkeypatch.setenv('COMP_LINE', "pytest " + arg)
+    monkeypatch.setenv('COMP_POINT', str(len("pytest " + arg)))
     result = testdir.run('bash', str(script), arg)
     if result.ret == 255:
         # argcomplete not found
@@ -280,8 +280,7 @@ def test_argcomplete(testdir, monkeypatch):
         return
     os.mkdir('test_argcomplete.d')
     arg = 'test_argc'
-    monkeypatch.setenv('COMP_LINE', "py.test " + arg)
-    monkeypatch.setenv('COMP_POINT', str(len('py.test ' + arg)))
+    monkeypatch.setenv('COMP_LINE', "pytest " + arg)
+    monkeypatch.setenv('COMP_POINT', str(len('pytest ' + arg)))
     result = testdir.run('bash', str(script), arg)
     result.stdout.fnmatch_lines(["test_argcomplete", "test_argcomplete.d/"])
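The renames above only touch the command-line string handed to bash completion; the protocol itself is that the completer reads the full line from `COMP_LINE` and the cursor offset from `COMP_POINT`. A standalone sketch of extracting the word under completion (the helper name is made up for illustration):

```python
import os

# Bash exports these two variables to the completer; the test above sets
# them by hand via monkeypatch.
os.environ['COMP_LINE'] = "pytest --fu"
os.environ['COMP_POINT'] = str(len("pytest --fu"))

def word_to_complete():
    # Illustrative helper: take the text up to the cursor and grab the
    # last (possibly partial) word on it.
    line = os.environ['COMP_LINE'][:int(os.environ['COMP_POINT'])]
    return '' if line.endswith(' ') else line.split()[-1]

print(word_to_complete())  # -> --fu
```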


@@ -231,6 +231,6 @@ def test_failure_issue380(testdir):
             pass
     """)
     result = testdir.runpytest("--resultlog=log")
-    assert result.ret == 1
+    assert result.ret == 2
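The expected return code moves from 1 to 2 because a collection error now interrupts the run instead of counting as an ordinary test failure. For reference, the session exit codes as defined in `_pytest.main` around this point in the codebase (listed from memory, so treat as a sketch):

```python
# pytest session exit codes (mirroring the _pytest.main constants).
EXIT_OK = 0
EXIT_TESTSFAILED = 1
EXIT_INTERRUPTED = 2       # e.g. Ctrl-C, or errors during collection
EXIT_INTERNALERROR = 3
EXIT_USAGEERROR = 4
EXIT_NOTESTSCOLLECTED = 5
```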


@@ -228,6 +228,39 @@ class BaseFunctionalTests:
             assert reps[5].nodeid.endswith("test_func")
             assert reps[5].failed

+    def test_exact_teardown_issue1206(self, testdir):
+        rec = testdir.inline_runsource("""
+            import pytest
+
+            class TestClass:
+                def teardown_method(self):
+                    pass
+
+                def test_method(self):
+                    assert True
+        """)
+        reps = rec.getreports("pytest_runtest_logreport")
+        print (reps)
+        assert len(reps) == 3
+        #
+        assert reps[0].nodeid.endswith("test_method")
+        assert reps[0].passed
+        assert reps[0].when == 'setup'
+        #
+        assert reps[1].nodeid.endswith("test_method")
+        assert reps[1].passed
+        assert reps[1].when == 'call'
+        #
+        assert reps[2].nodeid.endswith("test_method")
+        assert reps[2].failed
+        assert reps[2].when == "teardown"
+        assert reps[2].longrepr.reprcrash.message in (
+            # python3 error
+            'TypeError: teardown_method() takes 1 positional argument but 2 were given',
+            # python2 error
+            'TypeError: teardown_method() takes exactly 1 argument (2 given)'
+        )
+
     def test_failure_in_setup_function_ignores_custom_repr(self, testdir):
         testdir.makepyfile(conftest="""
             import pytest
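The new test pins down the exact failure mode behind issue 1206: pytest calls `teardown_method` with the just-run test method as an argument, so a signature that only accepts `self` raises a `TypeError` at teardown time. A plain-Python reproduction of that signature mismatch:

```python
class TestClass:
    def test_method(self):
        assert True

    def teardown_method(self):  # should be teardown_method(self, method)
        pass

# pytest's xunit-style teardown protocol passes the test method object along,
# which is one argument more than this signature accepts.
instance = TestClass()
try:
    instance.teardown_method(instance.test_method)
    message = None
except TypeError as exc:
    message = str(exc)
print(message)  # exact wording varies across Python versions
```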


@@ -8,16 +8,11 @@ import _pytest._pluggy as pluggy
 import _pytest._code
 import py
 import pytest
-from _pytest import runner
 from _pytest.main import EXIT_NOTESTSCOLLECTED
 from _pytest.terminal import TerminalReporter, repr_pythonversion, getreportopt
 from _pytest.terminal import build_summary_stats_line, _plugin_nameversions


-def basic_run_report(item):
-    runner.call_and_report(item, "setup", log=False)
-    return runner.call_and_report(item, "call", log=False)
-

 DistInfo = collections.namedtuple('DistInfo', ['project_name', 'version'])
@@ -273,7 +268,7 @@ class TestCollectonly:
     def test_collectonly_error(self, testdir):
         p = testdir.makepyfile("import Errlkjqweqwe")
         result = testdir.runpytest("--collect-only", p)
-        assert result.ret == 1
+        assert result.ret == 2
         result.stdout.fnmatch_lines(_pytest._code.Source("""
             *ERROR*
             *ImportError*

tox.ini

@@ -7,7 +7,7 @@ envlist=
     py27-nobyte,doctesting,py27-cxfreeze

 [testenv]
-commands= py.test --lsof -rfsxX {posargs:testing}
+commands= pytest --lsof -rfsxX {posargs:testing}
 passenv = USER USERNAME
 deps=
     hypothesis
@@ -16,7 +16,7 @@ deps=
     requests

 [testenv:py26]
-commands= py.test --lsof -rfsxX {posargs:testing}
+commands= pytest --lsof -rfsxX {posargs:testing}
 # pinning mock to last supported version for python 2.6
 deps=
     hypothesis<3.0
@@ -30,10 +30,10 @@ deps=pytest-xdist>=1.13
     mock
     nose
 commands=
-    py.test -n3 -rfsxX --runpytest=subprocess {posargs:testing}
+    pytest -n3 -rfsxX --runpytest=subprocess {posargs:testing}

 [testenv:genscript]
-commands= py.test --genscript=pytest1
+commands= pytest --genscript=pytest1

 [testenv:linting]
 basepython = python2.7
@@ -48,26 +48,26 @@ deps=pytest-xdist>=1.13
     nose
     hypothesis
 commands=
-    py.test -n1 -rfsxX {posargs:testing}
+    pytest -n1 -rfsxX {posargs:testing}

 [testenv:py35-xdist]
 deps={[testenv:py27-xdist]deps}
 commands=
-    py.test -n3 -rfsxX {posargs:testing}
+    pytest -n3 -rfsxX {posargs:testing}

 [testenv:py27-pexpect]
 changedir=testing
 platform=linux|darwin
 deps=pexpect
 commands=
-    py.test -rfsxX test_pdb.py test_terminal.py test_unittest.py
+    pytest -rfsxX test_pdb.py test_terminal.py test_unittest.py

 [testenv:py35-pexpect]
 changedir=testing
 platform=linux|darwin
 deps={[testenv:py27-pexpect]deps}
 commands=
-    py.test -rfsxX test_pdb.py test_terminal.py test_unittest.py
+    pytest -rfsxX test_pdb.py test_terminal.py test_unittest.py

 [testenv:py27-nobyte]
 deps=pytest-xdist>=1.13
@@ -76,21 +76,21 @@ distribute=true
 setenv=
     PYTHONDONTWRITEBYTECODE=1
 commands=
-    py.test -n3 -rfsxX {posargs:testing}
+    pytest -n3 -rfsxX {posargs:testing}

 [testenv:py27-trial]
 deps=twisted
 commands=
-    py.test -rsxf {posargs:testing/test_unittest.py}
+    pytest -rsxf {posargs:testing/test_unittest.py}

 [testenv:py35-trial]
 platform=linux|darwin
 deps={[testenv:py27-trial]deps}
 commands=
-    py.test -rsxf {posargs:testing/test_unittest.py}
+    pytest -rsxf {posargs:testing/test_unittest.py}

 [testenv:doctest]
-commands=py.test --doctest-modules _pytest
+commands=pytest --doctest-modules _pytest
 deps=

 [testenv:doc]
@@ -107,7 +107,7 @@ commands=
 basepython = python
 changedir=doc/en
 deps=PyYAML
-commands= py.test -rfsxX {posargs}
+commands= pytest -rfsxX {posargs}

 [testenv:regen]
 changedir=doc/en