Merge branch 'master' into merge-master-features

This commit is contained in:
Bruno Oliveira 2016-11-11 18:56:53 -02:00
commit efc54b2e56
47 changed files with 469 additions and 177 deletions

View File

@ -36,6 +36,7 @@ Christopher Gilling
Daniel Grana Daniel Grana
Daniel Hahler Daniel Hahler
Daniel Nuri Daniel Nuri
Daniel Wandschneider
Danielle Jenkins Danielle Jenkins
Dave Hunt Dave Hunt
David Díaz-Barquero David Díaz-Barquero
@ -90,6 +91,7 @@ Markus Unterwaditzer
Martijn Faassen Martijn Faassen
Martin K. Scherer Martin K. Scherer
Martin Prusse Martin Prusse
Mathieu Clabaut
Matt Bachmann Matt Bachmann
Matt Duck Matt Duck
Matt Williams Matt Williams
@ -98,6 +100,7 @@ mbyt
Michael Aquilina Michael Aquilina
Michael Birtwell Michael Birtwell
Michael Droettboom Michael Droettboom
Michael Seifert
Mike Lundy Mike Lundy
Nicolas Delaby Nicolas Delaby
Oleg Pidsadnyi Oleg Pidsadnyi

View File

@ -41,31 +41,74 @@ Changes
.. _#2013: https://github.com/pytest-dev/pytest/issues/2013 .. _#2013: https://github.com/pytest-dev/pytest/issues/2013
3.0.4.dev 3.0.5.dev0
========= ==========
*
*
*
*
*
3.0.4
=====
* Import errors when collecting test modules now display the full traceback (`#1976`_). * Import errors when collecting test modules now display the full traceback (`#1976`_).
Thanks `@cwitty`_ for the report and `@nicoddemus`_ for the PR. Thanks `@cwitty`_ for the report and `@nicoddemus`_ for the PR.
* Fix confusing command-line help message for custom options with two or more `metavar` properties (`#2004`_). * Fix confusing command-line help message for custom options with two or more ``metavar`` properties (`#2004`_).
Thanks `@okulynyak`_ and `@davehunt`_ for the report and `@nicoddemus`_ for the PR. Thanks `@okulynyak`_ and `@davehunt`_ for the report and `@nicoddemus`_ for the PR.
* When loading plugins, import errors which contain non-ascii messages are now properly handled in Python 2 (`#1998`_). * When loading plugins, import errors which contain non-ascii messages are now properly handled in Python 2 (`#1998`_).
Thanks `@nicoddemus`_ for the PR. Thanks `@nicoddemus`_ for the PR.
* Fixed cyclic reference when ``pytest.raises`` is used in context-manager form (`#1965`_). Also as a
result of this fix, ``sys.exc_info()`` is left empty in both context-manager and function call usages.
Previously, ``sys.exc_info`` would contain the exception caught by the context manager,
even when the expected exception occurred.
Thanks `@MSeifert04`_ for the report and the PR.
* Fixed false-positive warnings from the assertion rewrite hook for modules that were rewritten but
were later marked explicitly by ``pytest.register_assert_rewrite``
or implicitly as a plugin (`#2005`_).
Thanks `@RonnyPfannschmidt`_ for the report and `@nicoddemus`_ for the PR.
* Report teardown output on test failure (`#442`_); a minimal usage sketch follows this list.
Thanks `@matclab`_ for the PR.
* Fix teardown error message in generated xUnit XML.
Thanks `@gdyuldin`_ for the PR.
* Properly handle exceptions in ``multiprocessing`` tasks (`#1984`_).
Thanks `@adborden`_ for the report and `@nicoddemus`_ for the PR.
* Clean up unittest TestCase objects after tests are complete (`#1649`_).
Thanks `@d_b_w`_ for the report and PR.
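A minimal usage sketch for the `#442`_ entry above (hypothetical fixture and test
names): output printed while a fixture is torn down is captured and, when the test
fails, reported in a "Captured stdout teardown" section::

    import pytest

    @pytest.fixture
    def resource():
        yield "resource"
        # Printed during teardown; with this release it appears in the
        # failure report of tests that used the fixture.
        print("closing resource")

    def test_uses_resource(resource):
        assert resource == "something else"  # fails, so teardown output is shown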
.. _@adborden: https://github.com/adborden
.. _@cwitty: https://github.com/cwitty .. _@cwitty: https://github.com/cwitty
.. _@d_b_w: https://github.com/d_b_w
.. _@gdyuldin: https://github.com/gdyuldin
.. _@matclab: https://github.com/matclab
.. _@MSeifert04: https://github.com/MSeifert04
.. _@okulynyak: https://github.com/okulynyak .. _@okulynyak: https://github.com/okulynyak
.. _#442: https://github.com/pytest-dev/pytest/issues/442
.. _#1965: https://github.com/pytest-dev/pytest/issues/1965
.. _#1976: https://github.com/pytest-dev/pytest/issues/1976 .. _#1976: https://github.com/pytest-dev/pytest/issues/1976
.. _#1984: https://github.com/pytest-dev/pytest/issues/1984
.. _#1998: https://github.com/pytest-dev/pytest/issues/1998 .. _#1998: https://github.com/pytest-dev/pytest/issues/1998
.. _#2004: https://github.com/pytest-dev/pytest/issues/2004 .. _#2004: https://github.com/pytest-dev/pytest/issues/2004
.. _#2005: https://github.com/pytest-dev/pytest/issues/2005
.. _#1649: https://github.com/pytest-dev/pytest/issues/1649
3.0.3 3.0.3

View File

@ -199,13 +199,10 @@ but here is a simple overview:
You need to have Python 2.7 and 3.5 available in your system. Now You need to have Python 2.7 and 3.5 available in your system. Now
running tests is as simple as issuing this command:: running tests is as simple as issuing this command::
$ python3 runtox.py -e linting,py27,py35 $ tox -e linting,py27,py35
This command will run tests via the "tox" tool against Python 2.7 and 3.5 This command will run tests via the "tox" tool against Python 2.7 and 3.5
and also perform "lint" coding-style checks. ``runtox.py`` is and also perform "lint" coding-style checks.
a thin wrapper around ``tox`` which installs from a development package
index where newer (not yet released to PyPI) versions of dependencies
(especially ``py``) might be present.
#. You can now edit your local working copy. #. You can now edit your local working copy.
@ -214,11 +211,11 @@ but here is a simple overview:
To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on
failure) to pytest you can do:: failure) to pytest you can do::
$ python3 runtox.py -e py27 -- --pdb $ tox -e py27 -- --pdb
Or to only run tests in a particular test module on Python 3.5:: Or to only run tests in a particular test module on Python 3.5::
$ python3 runtox.py -e py35 -- testing/test_config.py $ tox -e py35 -- testing/test_config.py
#. Commit and push once your tests pass and you are happy with your change(s):: #. Commit and push once your tests pass and you are happy with your change(s)::

View File

@ -11,22 +11,23 @@ include setup.py
include .coveragerc include .coveragerc
include plugin-test.sh
include requirements-docs.txt
include runtox.py
recursive-include bench *.py recursive-include bench *.py
recursive-include extra *.py recursive-include extra *.py
graft testing graft testing
graft doc graft doc
prune doc/en/_build
exclude _pytest/impl exclude _pytest/impl
graft _pytest/vendored_packages graft _pytest/vendored_packages
recursive-exclude * *.pyc *.pyo recursive-exclude * *.pyc *.pyo
recursive-exclude testing/.hypothesis *
recursive-exclude testing/freeze/~ *
recursive-exclude testing/freeze/build *
recursive-exclude testing/freeze/dist *
exclude appveyor/install.ps1
exclude appveyor.yml exclude appveyor.yml
exclude appveyor exclude .travis.yml
prune .github

View File

@ -89,7 +89,7 @@ Please use the `GitHub issue tracker <https://github.com/pytest-dev/pytest/issue
Changelog Changelog
--------- ---------
Consult the `Changelog <http://docs.pytest.org/en/latest/changelog.html>`_ page for fixes and enhancements of each version. Consult the `Changelog <http://docs.pytest.org/en/latest/changelog.html>`__ page for fixes and enhancements of each version.
License License

View File

@ -1,6 +1,7 @@
import sys import sys
from inspect import CO_VARARGS, CO_VARKEYWORDS from inspect import CO_VARARGS, CO_VARKEYWORDS
import re import re
from weakref import ref
import py import py
builtin_repr = repr builtin_repr = repr
@ -230,7 +231,7 @@ class TracebackEntry(object):
return False return False
if py.builtin.callable(tbh): if py.builtin.callable(tbh):
return tbh(self._excinfo) return tbh(None if self._excinfo is None else self._excinfo())
else: else:
return tbh return tbh
@ -370,7 +371,7 @@ class ExceptionInfo(object):
#: the exception type name #: the exception type name
self.typename = self.type.__name__ self.typename = self.type.__name__
#: the exception traceback (_pytest._code.Traceback instance) #: the exception traceback (_pytest._code.Traceback instance)
self.traceback = _pytest._code.Traceback(self.tb, excinfo=self) self.traceback = _pytest._code.Traceback(self.tb, excinfo=ref(self))
def __repr__(self): def __repr__(self):
return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback)) return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback))
@ -623,16 +624,23 @@ class FormattedExcinfo(object):
e = excinfo.value e = excinfo.value
descr = None descr = None
while e is not None: while e is not None:
if excinfo:
reprtraceback = self.repr_traceback(excinfo) reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash() reprcrash = excinfo._getreprcrash()
else:
# fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None))
reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)] repr_chain += [(reprtraceback, reprcrash, descr)]
if e.__cause__ is not None: if e.__cause__ is not None:
e = e.__cause__ e = e.__cause__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'The above exception was the direct cause of the following exception:' descr = 'The above exception was the direct cause of the following exception:'
elif e.__context__ is not None: elif e.__context__ is not None:
e = e.__context__ e = e.__context__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'During handling of the above exception, another exception occurred:' descr = 'During handling of the above exception, another exception occurred:'
else: else:
e = None e = None
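The change above stores a ``weakref.ref`` to the ``ExceptionInfo`` instead of the
object itself, breaking the reference cycle between ``ExceptionInfo`` and its
``Traceback``. A minimal sketch of that pattern with hypothetical class names,
not the real implementation::

    from weakref import ref

    class Owner(object):
        pass

    class Holder(object):
        def __init__(self, owner=None):
            # Keep a weak reference (or None) so Holder never keeps Owner alive.
            self._owner_ref = ref(owner) if owner is not None else None

        def owner(self):
            # Dereference on demand: None if never set or already collected.
            return None if self._owner_ref is None else self._owner_ref()

    owner = Owner()
    holder = Holder(owner)
    assert holder.owner() is owner
    del owner  # once the last strong reference goes away, owner() may return None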

View File

@ -51,6 +51,7 @@ class AssertionRewritingHook(object):
self.fnpats = config.getini("python_files") self.fnpats = config.getini("python_files")
self.session = None self.session = None
self.modules = {} self.modules = {}
self._rewritten_names = set()
self._register_with_pkg_resources() self._register_with_pkg_resources()
self._must_rewrite = set() self._must_rewrite = set()
@ -92,6 +93,8 @@ class AssertionRewritingHook(object):
if not self._should_rewrite(name, fn_pypath, state): if not self._should_rewrite(name, fn_pypath, state):
return None return None
self._rewritten_names.add(name)
# The requested module looks like a test file, so rewrite it. This is # The requested module looks like a test file, so rewrite it. This is
# the most magical part of the process: load the source, rewrite the # the most magical part of the process: load the source, rewrite the
# asserts, and load the rewritten source. We also cache the rewritten # asserts, and load the rewritten source. We also cache the rewritten
@ -178,6 +181,8 @@ class AssertionRewritingHook(object):
""" """
already_imported = set(names).intersection(set(sys.modules)) already_imported = set(names).intersection(set(sys.modules))
if already_imported: if already_imported:
for name in names:
if name not in self._rewritten_names:
self._warn_already_imported(already_imported) self._warn_already_imported(already_imported)
self._must_rewrite.update(names) self._must_rewrite.update(names)
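A rough sketch of the bookkeeping introduced above, using a hypothetical
stand-in class rather than the real hook: names that were actually rewritten
are remembered so a later ``mark_rewrite`` for them does not trigger an
"already imported" warning::

    import sys

    class RewriteTracker(object):
        """Hypothetical stand-in for the hook's bookkeeping."""

        def __init__(self, warn):
            self._warn = warn
            self._rewritten_names = set()
            self._must_rewrite = set()

        def rewrote(self, name):
            # Called whenever a module is rewritten at import time.
            self._rewritten_names.add(name)

        def mark_rewrite(self, *names):
            # Warn only for modules already imported *and* not rewritten.
            already_imported = set(names).intersection(sys.modules)
            not_rewritten = already_imported - self._rewritten_names
            if not_rewritten:
                self._warn(sorted(not_rewritten))
            self._must_rewrite.update(names)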

View File

@ -158,8 +158,12 @@ class _NodeReporter(object):
Junit.skipped, "collection skipped", report.longrepr) Junit.skipped, "collection skipped", report.longrepr)
def append_error(self, report): def append_error(self, report):
if getattr(report, 'when', None) == 'teardown':
msg = "test teardown failure"
else:
msg = "test setup failure"
self._add_simple( self._add_simple(
Junit.error, "test setup failure", report.longrepr) Junit.error, msg, report.longrepr)
self._write_captured_output(report) self._write_captured_output(report)
def append_skipped(self, report): def append_skipped(self, report):
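A small sketch of the message selection added above, assuming only the
report's ``when`` attribute is needed to tell the two phases apart::

    def error_message(report):
        # Teardown errors used to be mislabelled as setup failures in the XML.
        if getattr(report, 'when', None) == 'teardown':
            return "test teardown failure"
        return "test setup failure"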

View File

@ -1008,8 +1008,6 @@ class Testdir:
pexpect = pytest.importorskip("pexpect", "3.0") pexpect = pytest.importorskip("pexpect", "3.0")
if hasattr(sys, 'pypy_version_info') and '64' in platform.machine(): if hasattr(sys, 'pypy_version_info') and '64' in platform.machine():
pytest.skip("pypy-64 bit not supported") pytest.skip("pypy-64 bit not supported")
if sys.platform == "darwin":
pytest.xfail("pexpect does not work reliably on darwin?!")
if sys.platform.startswith("freebsd"): if sys.platform.startswith("freebsd"):
pytest.xfail("pexpect does not work reliably on freebsd") pytest.xfail("pexpect does not work reliably on freebsd")
logfile = self.tmpdir.join("spawn.out").open("wb") logfile = self.tmpdir.join("spawn.out").open("wb")

View File

@ -1106,7 +1106,9 @@ def raises(expected_exception, *args, **kwargs):
>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"): >>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
... pass ... pass
... Failed: Expecting ZeroDivisionError Traceback (most recent call last):
...
Failed: Expecting ZeroDivisionError
.. note:: .. note::
@ -1117,19 +1119,21 @@ def raises(expected_exception, *args, **kwargs):
Lines of code after that, within the scope of the context manager will Lines of code after that, within the scope of the context manager will
not be executed. For example:: not be executed. For example::
>>> with raises(OSError) as exc_info: >>> value = 15
assert 1 == 1 # this will execute as expected >>> with raises(ValueError) as exc_info:
raise OSError(errno.EEXISTS, 'directory exists') ... if value > 10:
assert exc_info.value.errno == errno.EEXISTS # this will not execute ... raise ValueError("value must be <= 10")
... assert str(exc_info.value) == "value must be <= 10" # this will not execute
Instead, the following approach must be taken (note the difference in Instead, the following approach must be taken (note the difference in
scope):: scope)::
>>> with raises(OSError) as exc_info: >>> with raises(ValueError) as exc_info:
assert 1 == 1 # this will execute as expected ... if value > 10:
raise OSError(errno.EEXISTS, 'directory exists') ... raise ValueError("value must be <= 10")
...
>>> assert str(exc_info.value) == "value must be <= 10"
assert exc_info.value.errno == errno.EEXISTS # this will now execute
Or you can specify a callable by passing a to-be-called lambda:: Or you can specify a callable by passing a to-be-called lambda::
@ -1228,7 +1232,11 @@ class RaisesContext(object):
exc_type, value, traceback = tp exc_type, value, traceback = tp
tp = exc_type, exc_type(value), traceback tp = exc_type, exc_type(value), traceback
self.excinfo.__init__(tp) self.excinfo.__init__(tp)
return issubclass(self.excinfo.type, self.expected_exception) suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
if sys.version_info[0] == 2 and suppress_exception:
sys.exc_clear()
return suppress_exception
# builtin pytest.approx helper # builtin pytest.approx helper
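A minimal usage sketch of what the ``RaisesContext`` change above enables:
after the ``with`` block the caught exception is no longer reachable through
``sys.exc_info()`` (on Python 2 this relies on the ``sys.exc_clear()`` call
shown in the diff)::

    import sys
    import pytest

    def test_exc_info_clean_after_raises():
        with pytest.raises(ValueError):
            raise ValueError("boom")
        # Neither the context-manager nor the call form leaves the exception behind.
        assert sys.exc_info() == (None, None, None)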

View File

@ -36,8 +36,13 @@ def deprecated_call(func=None, *args, **kwargs):
This function can be used as a context manager:: This function can be used as a context manager::
>>> import warnings
>>> def api_call_v2():
... warnings.warn('use v3 of this api', DeprecationWarning)
... return 200
>>> with deprecated_call(): >>> with deprecated_call():
... myobject.deprecated_method() ... assert api_call_v2() == 200
Note: we cannot use WarningsRecorder here because it is still subject Note: we cannot use WarningsRecorder here because it is still subject
to the mechanism that prevents warnings of the same type from being to the mechanism that prevents warnings of the same type from being

View File

@ -458,6 +458,15 @@ class TerminalReporter:
self.write_sep("_", msg) self.write_sep("_", msg)
self._outrep_summary(rep) self._outrep_summary(rep)
def print_teardown_sections(self, rep):
for secname, content in rep.sections:
if 'teardown' in secname:
self._tw.sep('-', secname)
if content[-1:] == "\n":
content = content[:-1]
self._tw.line(content)
def summary_failures(self): def summary_failures(self):
if self.config.option.tbstyle != "no": if self.config.option.tbstyle != "no":
reports = self.getreports('failed') reports = self.getreports('failed')
@ -473,6 +482,9 @@ class TerminalReporter:
markup = {'red': True, 'bold': True} markup = {'red': True, 'bold': True}
self.write_sep("_", msg, **markup) self.write_sep("_", msg, **markup)
self._outrep_summary(rep) self._outrep_summary(rep)
for report in self.getreports(''):
if report.nodeid == rep.nodeid and report.when == 'teardown':
self.print_teardown_sections(report)
def summary_errors(self): def summary_errors(self):
if self.config.option.tbstyle != "no": if self.config.option.tbstyle != "no":
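A rough sketch of how the failure summary above locates teardown output,
assuming reports expose ``nodeid``, ``when`` and ``sections`` as in the diff
(hypothetical helper, not the reporter's actual method)::

    def teardown_sections_for(reports, failed_nodeid):
        """Yield (section name, content) pairs captured while tearing down a failed test."""
        for report in reports:
            if report.nodeid == failed_nodeid and report.when == 'teardown':
                for secname, content in report.sections:
                    if 'teardown' in secname:
                        yield secname, content.rstrip('\n')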

View File

@ -94,6 +94,9 @@ class TestCaseFunction(pytest.Function):
def teardown(self): def teardown(self):
if hasattr(self._testcase, 'teardown_method'): if hasattr(self._testcase, 'teardown_method'):
self._testcase.teardown_method(self._obj) self._testcase.teardown_method(self._obj)
# Allow garbage collection on TestCase instance attributes.
self._testcase = None
self._obj = None
def startTest(self, testcase): def startTest(self, testcase):
pass pass
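The two assignments above drop the last strong references to the ``TestCase``
instance once teardown finishes, so attributes created in ``setUp`` can be
garbage collected instead of surviving for the whole session. A hypothetical
case that benefits from the cleanup::

    import unittest

    class HeavyCase(unittest.TestCase):
        def setUp(self):
            # Without the cleanup, every HeavyCase instance (and this list)
            # would stay alive until the end of the test session.
            self.payload = [0] * 10**6

        def test_payload_size(self):
            self.assertEqual(len(self.payload), 10**6)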

View File

@ -20,10 +20,11 @@ install:
# install pypy using choco (redirect to a file and write to console in case # install pypy using choco (redirect to a file and write to console in case
# choco install returns non-zero, because choco install python.pypy is too # choco install returns non-zero, because choco install python.pypy is too
# noisy) # noisy)
- choco install python.pypy > pypy-inst.log 2>&1 || (type pypy-inst.log & exit /b 1) # pypy is disabled until #1963 gets fixed
- set PATH=C:\tools\pypy\pypy;%PATH% # so tox can find pypy #- choco install python.pypy > pypy-inst.log 2>&1 || (type pypy-inst.log & exit /b 1)
- echo PyPy installed #- set PATH=C:\tools\pypy\pypy;%PATH% # so tox can find pypy
- pypy --version #- echo PyPy installed
#- pypy --version
- C:\Python35\python -m pip install tox - C:\Python35\python -m pip install tox

View File

@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2 :maxdepth: 2
release-3.0.4
release-3.0.3 release-3.0.3
release-3.0.2 release-3.0.2
release-3.0.1 release-3.0.1

View File

@ -0,0 +1,29 @@
pytest-3.0.4
============
pytest 3.0.4 has just been released to PyPI.
This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Bruno Oliveira
* Dan Wandschneider
* Florian Bruhin
* Georgy Dyuldin
* Grigorii Eremeev
* Jason R. Coombs
* Manuel Jacob
* Mathieu Clabaut
* Michael Seifert
* Nikolaus Rath
* Ronny Pfannschmidt
* Tom V
Happy testing,
The pytest Development Team

View File

@ -26,7 +26,7 @@ you will see the return value of the function call::
$ pytest test_assert1.py $ pytest test_assert1.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
@ -170,7 +170,7 @@ if you run this module::
$ pytest test_assert2.py $ pytest test_assert2.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items

View File

@ -80,7 +80,7 @@ If you then run it with ``--lf``::
$ pytest --lf $ pytest --lf
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
run-last-failure: rerun last 2 failures run-last-failure: rerun last 2 failures
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items collected 50 items
@ -122,7 +122,7 @@ of ``FF`` and dots)::
$ pytest --ff $ pytest --ff
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
run-last-failure: rerun last 2 failures first run-last-failure: rerun last 2 failures first
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items collected 50 items
@ -227,7 +227,7 @@ You can always peek at the content of the cache using the
$ py.test --cache-show $ py.test --cache-show
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
cachedir: $REGENDOC_TMPDIR/.cache cachedir: $REGENDOC_TMPDIR/.cache
------------------------------- cache values ------------------------------- ------------------------------- cache values -------------------------------

View File

@ -64,7 +64,7 @@ of the failing function and hide the other one::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items

View File

@ -49,7 +49,7 @@ then you can just invoke ``pytest`` without command line options::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 items collected 1 items

View File

@ -31,7 +31,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ pytest -v -m webtest $ pytest -v -m webtest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items collecting ... collected 4 items
@ -45,7 +45,7 @@ Or the inverse, running all tests except the webtest ones::
$ pytest -v -m "not webtest" $ pytest -v -m "not webtest"
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items collecting ... collected 4 items
@ -66,7 +66,7 @@ tests based on their module, class, method, or function name::
$ pytest -v test_server.py::TestClass::test_method $ pytest -v test_server.py::TestClass::test_method
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 5 items collecting ... collected 5 items
@ -79,7 +79,7 @@ You can also select on the class::
$ pytest -v test_server.py::TestClass $ pytest -v test_server.py::TestClass
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items collecting ... collected 4 items
@ -92,7 +92,7 @@ Or select multiple nodes::
$ pytest -v test_server.py::TestClass test_server.py::test_send_http $ pytest -v test_server.py::TestClass test_server.py::test_send_http
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items collecting ... collected 8 items
@ -130,7 +130,7 @@ select tests based on their names::
$ pytest -v -k http # running with the above defined example module $ pytest -v -k http # running with the above defined example module
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items collecting ... collected 4 items
@ -144,7 +144,7 @@ And you can also run all tests except the ones that match the keyword::
$ pytest -k "not send_http" -v $ pytest -k "not send_http" -v
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items collecting ... collected 4 items
@ -160,7 +160,7 @@ Or to select "http" and "quick" tests::
$ pytest -k "http or quick" -v $ pytest -k "http or quick" -v
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items collecting ... collected 4 items
@ -352,7 +352,7 @@ the test needs::
$ pytest -E stage2 $ pytest -E stage2
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
@ -364,7 +364,7 @@ and here is one that specifies exactly the environment needed::
$ pytest -E stage1 $ pytest -E stage1
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
@ -485,7 +485,7 @@ then you will see two test skipped and two executed tests as expected::
$ pytest -rs # this option reports skip reasons $ pytest -rs # this option reports skip reasons
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
@ -499,7 +499,7 @@ Note that if you specify a platform via the marker-command line option like this
$ pytest -m linux2 $ pytest -m linux2
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
@ -551,7 +551,7 @@ We can now use the ``-m option`` to select one set::
$ pytest -m interface --tb=short $ pytest -m interface --tb=short
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
@ -573,7 +573,7 @@ or to select both "event" and "interface" tests::
$ pytest -m "interface or event" --tb=short $ pytest -m "interface or event" --tb=short
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items

View File

@ -27,7 +27,7 @@ now execute the test specification::
nonpython $ pytest test_simple.yml nonpython $ pytest test_simple.yml
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/nonpython, inifile: rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items collected 2 items
@ -59,7 +59,7 @@ consulted when reporting in ``verbose`` mode::
nonpython $ pytest -v nonpython $ pytest -v
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile: rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items collecting ... collected 2 items
@ -81,7 +81,7 @@ interesting to just look at the collection tree::
nonpython $ pytest --collect-only nonpython $ pytest --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/nonpython, inifile: rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items collected 2 items
<YamlFile 'test_simple.yml'> <YamlFile 'test_simple.yml'>

View File

@ -130,7 +130,7 @@ objects, they are still using the default pytest representation::
$ pytest test_time.py --collect-only $ pytest test_time.py --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items collected 6 items
<Module 'test_time.py'> <Module 'test_time.py'>
@ -181,7 +181,7 @@ this is a fully self-contained example which you can run with::
$ pytest test_scenarios.py $ pytest test_scenarios.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
@ -194,7 +194,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
$ pytest --collect-only test_scenarios.py $ pytest --collect-only test_scenarios.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
<Module 'test_scenarios.py'> <Module 'test_scenarios.py'>
@ -259,7 +259,7 @@ Let's first see how it looks like at collection time::
$ pytest test_backends.py --collect-only $ pytest test_backends.py --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
<Module 'test_backends.py'> <Module 'test_backends.py'>
@ -320,7 +320,7 @@ The result of this test will be successful::
$ pytest test_indirect_list.py --collect-only $ pytest test_indirect_list.py --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
<Module 'test_indirect_list.py'> <Module 'test_indirect_list.py'>
@ -447,7 +447,7 @@ If you run this with reporting for skips enabled::
$ pytest -rs test_module.py $ pytest -rs test_module.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items

View File

@ -117,7 +117,7 @@ then the test collection looks like this::
$ pytest --collect-only $ pytest --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 2 items collected 2 items
<Module 'check_myapp.py'> <Module 'check_myapp.py'>
@ -163,7 +163,7 @@ You can always peek at the collection tree without running tests like this::
. $ pytest --collect-only pythoncollection.py . $ pytest --collect-only pythoncollection.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 3 items collected 3 items
<Module 'CWD/pythoncollection.py'> <Module 'CWD/pythoncollection.py'>
@ -230,7 +230,7 @@ will be left out::
$ pytest --collect-only $ pytest --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items collected 0 items

View File

@ -11,7 +11,7 @@ get on the terminal - we are working on that)::
assertion $ pytest failure_demo.py assertion $ pytest failure_demo.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/assertion, inifile: rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 42 items collected 42 items
@ -359,7 +359,7 @@ get on the terminal - we are working on that)::
> int(s) > int(s)
E ValueError: invalid literal for int() with base 10: 'qwe' E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1190>:1: ValueError <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1204>:1: ValueError
_______ TestRaises.test_raises_doesnt ________ _______ TestRaises.test_raises_doesnt ________
self = <failure_demo.TestRaises object at 0xdeadbeef> self = <failure_demo.TestRaises object at 0xdeadbeef>

View File

@ -113,7 +113,7 @@ directory with the above conftest.py::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items collected 0 items
@ -164,7 +164,7 @@ and when running it will see a skipped "slow" test::
$ pytest -rs # "-rs" means report details on the little 's' $ pytest -rs # "-rs" means report details on the little 's'
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
@ -178,7 +178,7 @@ Or run it including the ``slow`` marked test::
$ pytest --runslow $ pytest --runslow
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
@ -302,7 +302,7 @@ which will add the string to the test header accordingly::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
project deps: mylib-1.1 project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items collected 0 items
@ -327,7 +327,7 @@ which will add info only when run with "--v"::
$ pytest -v $ pytest -v
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
info1: did you know that ... info1: did you know that ...
did you? did you?
@ -340,7 +340,7 @@ and nothing when run plainly::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items collected 0 items
@ -374,7 +374,7 @@ Now we can profile which test functions execute the slowest::
$ pytest --durations=3 $ pytest --durations=3
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items
@ -440,7 +440,7 @@ If we run this::
$ pytest -rx $ pytest -rx
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items collected 4 items
@ -519,7 +519,7 @@ We can run this::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items collected 7 items
@ -627,7 +627,7 @@ and run them::
$ pytest test_module.py $ pytest test_module.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
@ -721,7 +721,7 @@ and run it::
$ pytest -s test_module.py $ pytest -s test_module.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items

View File

@ -70,7 +70,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
$ pytest test_smtpsimple.py $ pytest test_smtpsimple.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items
@ -188,7 +188,7 @@ inspect what is going on and can now run the tests::
$ pytest test_module.py $ pytest test_module.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items
@ -375,6 +375,8 @@ Running it::
assert 0, smtp.helo() assert 0, smtp.helo()
E AssertionError: (250, b'mail.python.org') E AssertionError: (250, b'mail.python.org')
E assert 0 E assert 0
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org)
voila! The ``smtp`` fixture function picked up our mail server name voila! The ``smtp`` fixture function picked up our mail server name
from the module namespace. from the module namespace.
@ -464,6 +466,8 @@ So let's just do another run::
E assert 0 E assert 0
test_module.py:11: AssertionError test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
4 failed in 0.12 seconds 4 failed in 0.12 seconds
We see that our two test functions each ran twice, against the different We see that our two test functions each ran twice, against the different
@ -516,7 +520,7 @@ Running the above tests results in the following test IDs being used::
$ pytest --collect-only $ pytest --collect-only
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 11 items collected 11 items
<Module 'test_anothersmtp.py'> <Module 'test_anothersmtp.py'>
@ -569,7 +573,7 @@ Here we declare an ``app`` fixture which receives the previously defined
$ pytest -v test_appsetup.py $ pytest -v test_appsetup.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items collecting ... collected 2 items
@ -638,7 +642,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ pytest -v -s test_module.py $ pytest -v -s test_module.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5
cachedir: .cache cachedir: .cache
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items collecting ... collected 8 items

View File

@ -26,7 +26,7 @@ Installation::
To check your installation has installed the correct version:: To check your installation has installed the correct version::
$ pytest --version $ pytest --version
This is pytest version 3.0.3, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py This is pytest version 3.0.4, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py
.. _`simpletest`: .. _`simpletest`:
@ -46,7 +46,7 @@ That's it. You can execute the test function now::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items

View File

@ -25,7 +25,7 @@ To execute it::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items

View File

@ -55,7 +55,7 @@ them in turn::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items
@ -103,7 +103,7 @@ Let's run this::
$ pytest $ pytest
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items collected 3 items

View File

@ -224,7 +224,7 @@ Running it with the report-on-xfail option gives this output::
example $ pytest -rx xfail_demo.py example $ pytest -rx xfail_demo.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR/example, inifile: rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items collected 7 items

View File

@ -4,7 +4,7 @@ Talks and Tutorials
.. sidebar:: Next Open Trainings .. sidebar:: Next Open Trainings
`professional testing with pytest and tox <http://www.python-academy.com/courses/specialtopics/python_course_testing.html>`_, 27-29th June 2016, Freiburg, Germany `pytest workshop <http://www.meetup.com/Python-Django-User-Group-Bern/events/235151115/>`_, 8th December 2016, Bern, Switzerland
.. _`funcargs`: funcargs.html .. _`funcargs`: funcargs.html

View File

@ -29,7 +29,7 @@ Running this would result in a passed test except for the last
$ pytest test_tmpdir.py $ pytest test_tmpdir.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items collected 1 items

View File

@ -100,7 +100,7 @@ the ``self.db`` values in the traceback::
$ pytest test_unittest_db.py $ pytest test_unittest_db.py
======= test session starts ======== ======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.3, py-1.4.31, pluggy-0.4.0 platform linux -- Python 3.5.2, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items collected 2 items

View File

@ -1,20 +0,0 @@
#!/bin/bash
# this assumes plugins are installed as sister directories
set -e
cd ../pytest-pep8
pytest
cd ../pytest-instafail
pytest
cd ../pytest-cache
pytest
cd ../pytest-xprocess
pytest
#cd ../pytest-cov
#pytest
cd ../pytest-capturelog
pytest
cd ../pytest-xdist
pytest

View File

@ -1,3 +0,0 @@
sphinx==1.2.3
regendoc
pyyaml

View File

@ -1,8 +0,0 @@
#!/usr/bin/env python
if __name__ == "__main__":
import subprocess
import sys
subprocess.call([sys.executable, "-m", "tox",
"-i", "ALL=https://devpi.net/hpk/dev/",
"--develop"] + sys.argv[1:])

View File

@ -72,6 +72,7 @@ def main():
entry_points={'console_scripts': entry_points={'console_scripts':
['pytest=pytest:main', 'py.test=pytest:main']}, ['pytest=pytest:main', 'py.test=pytest:main']},
classifiers=classifiers, classifiers=classifiers,
keywords="test unittest",
cmdclass={'test': PyTest}, cmdclass={'test': PyTest},
# the following should be enabled for release # the following should be enabled for release
install_requires=install_requires, install_requires=install_requires,

View File

@ -1050,6 +1050,50 @@ raise ValueError()
assert line.endswith('mod.py') assert line.endswith('mod.py')
assert tw.lines[47] == ":15: AttributeError" assert tw.lines[47] == ":15: AttributeError"
@pytest.mark.skipif("sys.version_info[0] < 3")
@pytest.mark.parametrize('reason, description', [
('cause', 'The above exception was the direct cause of the following exception:'),
('context', 'During handling of the above exception, another exception occurred:'),
])
def test_exc_chain_repr_without_traceback(self, importasmod, reason, description):
"""
Handle representation of exception chains where one of the exceptions doesn't have a
real traceback, such as those raised in a subprocess submitted by the multiprocessing
module (#1984).
"""
from _pytest.pytester import LineMatcher
exc_handling_code = ' from e' if reason == 'cause' else ''
mod = importasmod("""
def f():
try:
g()
except Exception as e:
raise RuntimeError('runtime problem'){exc_handling_code}
def g():
raise ValueError('invalid value')
""".format(exc_handling_code=exc_handling_code))
with pytest.raises(RuntimeError) as excinfo:
mod.f()
# emulate the issue described in #1984
attr = '__%s__' % reason
getattr(excinfo.value, attr).__traceback__ = None
r = excinfo.getrepr()
tw = py.io.TerminalWriter(stringio=True)
tw.hasmarkup = False
r.toterminal(tw)
matcher = LineMatcher(tw.stringio.getvalue().splitlines())
matcher.fnmatch_lines([
"ValueError: invalid value",
description,
"* except Exception as e:",
"> * raise RuntimeError('runtime problem')" + exc_handling_code,
"E *RuntimeError: runtime problem",
])
@pytest.mark.parametrize("style", ["short", "long"]) @pytest.mark.parametrize("style", ["short", "long"])
@pytest.mark.parametrize("encoding", [None, "utf8", "utf16"]) @pytest.mark.parametrize("encoding", [None, "utf8", "utf16"])

View File

@ -1,4 +1,6 @@
import pytest import pytest
import sys
class TestRaises: class TestRaises:
def test_raises(self): def test_raises(self):
@ -88,3 +90,31 @@ class TestRaises:
assert e.msg == message assert e.msg == message
else: else:
assert False, "Expected pytest.raises.Exception" assert False, "Expected pytest.raises.Exception"
@pytest.mark.parametrize('method', ['function', 'with'])
def test_raises_cyclic_reference(self, method):
"""
Ensure pytest.raises does not leave a reference cycle (#1965).
"""
import gc
class T(object):
def __call__(self):
raise ValueError
t = T()
if method == 'function':
pytest.raises(ValueError, t)
else:
with pytest.raises(ValueError):
t()
# ensure both forms of pytest.raises don't leave exceptions in sys.exc_info()
assert sys.exc_info() == (None, None, None)
del t
# ensure the t instance is not stuck in a cyclic reference
for o in gc.get_objects():
assert type(o) is not T

View File

@ -757,6 +757,37 @@ def test_traceback_failure(testdir):
"*test_traceback_failure.py:4: AssertionError" "*test_traceback_failure.py:4: AssertionError"
]) ])
@pytest.mark.skipif(sys.version_info[:2] <= (3, 3), reason='Python 3.4+ shows chained exceptions on multiprocess')
def test_exception_handling_no_traceback(testdir):
"""
Handle chained exceptions in tasks submitted by the multiprocessing module (#1984).
"""
p1 = testdir.makepyfile("""
from multiprocessing import Pool
def process_task(n):
assert n == 10
def multitask_job():
tasks = [1]
with Pool(processes=1) as pool:
pool.map(process_task, tasks)
def test_multitask_job():
multitask_job()
""")
result = testdir.runpytest(p1, "--tb=long")
result.stdout.fnmatch_lines([
"====* FAILURES *====",
"*multiprocessing.pool.RemoteTraceback:*",
"Traceback (most recent call last):",
"*assert n == 10",
"The above exception was the direct cause of the following exception:",
"> * multitask_job()",
])
@pytest.mark.skipif("'__pypy__' in sys.builtin_module_names or sys.platform.startswith('java')" ) @pytest.mark.skipif("'__pypy__' in sys.builtin_module_names or sys.platform.startswith('java')" )
def test_warn_missing(testdir): def test_warn_missing(testdir):
testdir.makepyfile("") testdir.makepyfile("")

View File

@ -543,6 +543,22 @@ def test_rewritten():
''') ''')
assert testdir.runpytest_subprocess().ret == 0 assert testdir.runpytest_subprocess().ret == 0
def test_remember_rewritten_modules(self, pytestconfig, testdir, monkeypatch):
"""
AssertionRewritingHook should remember rewritten modules so it
doesn't give false positives (#2005).
"""
monkeypatch.syspath_prepend(testdir.tmpdir)
testdir.makepyfile(test_remember_rewritten_modules='')
warnings = []
hook = AssertionRewritingHook(pytestconfig)
monkeypatch.setattr(hook.config, 'warn', lambda code, msg: warnings.append(msg))
hook.find_module('test_remember_rewritten_modules')
hook.load_module('test_remember_rewritten_modules')
hook.mark_rewrite('test_remember_rewritten_modules')
hook.mark_rewrite('test_remember_rewritten_modules')
assert warnings == []
class TestAssertionRewriteHookDetails(object): class TestAssertionRewriteHookDetails(object):
def test_loader_is_package_false_for_module(self, testdir): def test_loader_is_package_false_for_module(self, testdir):

View File

@ -165,6 +165,30 @@ class TestPython:
fnode.assert_attr(message="test setup failure") fnode.assert_attr(message="test setup failure")
assert "ValueError" in fnode.toxml() assert "ValueError" in fnode.toxml()
def test_teardown_error(self, testdir):
testdir.makepyfile("""
import pytest
@pytest.fixture
def arg():
yield
raise ValueError()
def test_function(arg):
pass
""")
result, dom = runandparse(testdir)
assert result.ret
node = dom.find_first_by_tag("testsuite")
tnode = node.find_first_by_tag("testcase")
tnode.assert_attr(
file="test_teardown_error.py",
line="6",
classname="test_teardown_error",
name="test_function")
fnode = tnode.find_first_by_tag("error")
fnode.assert_attr(message="test teardown failure")
assert "ValueError" in fnode.toxml()
def test_skip_contains_name_reason(self, testdir): def test_skip_contains_name_reason(self, testdir):
testdir.makepyfile(""" testdir.makepyfile("""
import pytest import pytest

View File

@ -1,4 +1,5 @@
import sys import sys
import platform
import _pytest._code import _pytest._code
import pytest import pytest
@ -96,6 +97,12 @@ class TestPDB:
rest = child.read().decode("utf8") rest = child.read().decode("utf8")
assert "1 failed" in rest assert "1 failed" in rest
assert "def test_1" not in rest assert "def test_1" not in rest
self.flush(child)
@staticmethod
def flush(child):
if platform.system() == 'Darwin':
return
if child.isalive(): if child.isalive():
child.wait() child.wait()
@ -115,8 +122,7 @@ class TestPDB:
child.sendeof() child.sendeof()
rest = child.read().decode("utf8") rest = child.read().decode("utf8")
assert 'debug.me' in rest assert 'debug.me' in rest
if child.isalive(): self.flush(child)
child.wait()
def test_pdb_interaction_capture(self, testdir):
p1 = testdir.makepyfile("""
@@ -131,8 +137,7 @@ class TestPDB:
rest = child.read().decode("utf8")
assert "1 failed" in rest
assert "getrekt" not in rest
self.flush(child)
def test_pdb_interaction_exception(self, testdir):
p1 = testdir.makepyfile("""
@@ -150,8 +155,7 @@ class TestPDB:
child.expect(".*function")
child.sendeof()
child.expect("1 failed")
self.flush(child)
def test_pdb_interaction_on_collection_issue181(self, testdir):
p1 = testdir.makepyfile("""
@@ -163,8 +167,7 @@ class TestPDB:
child.expect("(Pdb)")
child.sendeof()
child.expect("1 error")
self.flush(child)
def test_pdb_interaction_on_internal_error(self, testdir):
testdir.makeconftest("""
@@ -176,8 +179,7 @@ class TestPDB:
#child.expect(".*import pytest.*")
child.expect("(Pdb)")
child.sendeof()
self.flush(child)
def test_pdb_interaction_capturing_simple(self, testdir):
p1 = testdir.makepyfile("""
@@ -197,8 +199,7 @@ class TestPDB:
assert "1 failed" in rest
assert "def test_1" in rest
assert "hello17" in rest # out is captured
self.flush(child)
def test_pdb_set_trace_interception(self, testdir):
p1 = testdir.makepyfile("""
@@ -213,8 +214,7 @@ class TestPDB:
rest = child.read().decode("utf8")
assert "1 failed" in rest
assert "reading from stdin while output" not in rest
self.flush(child)
def test_pdb_and_capsys(self, testdir):
p1 = testdir.makepyfile("""
@@ -229,8 +229,7 @@ class TestPDB:
child.expect("hello1")
child.sendeof()
child.read()
self.flush(child)
def test_set_trace_capturing_afterwards(self, testdir):
p1 = testdir.makepyfile("""
@@ -249,8 +248,7 @@ class TestPDB:
child.expect("hello")
child.sendeof()
child.read()
self.flush(child)
def test_pdb_interaction_doctest(self, testdir):
p1 = testdir.makepyfile("""
@@ -269,8 +267,7 @@ class TestPDB:
child.sendeof()
rest = child.read().decode("utf8")
assert "1 failed" in rest
self.flush(child)
def test_pdb_interaction_capturing_twice(self, testdir):
p1 = testdir.makepyfile("""
@@ -296,8 +293,7 @@ class TestPDB:
assert "def test_1" in rest
assert "hello17" in rest # out is captured
assert "hello18" in rest # out is captured
self.flush(child)
def test_pdb_used_outside_test(self, testdir):
p1 = testdir.makepyfile("""
@@ -308,7 +304,7 @@ class TestPDB:
child = testdir.spawn("%s %s" %(sys.executable, p1))
child.expect("x = 5")
child.sendeof()
self.flush(child)
def test_pdb_used_in_generate_tests(self, testdir):
p1 = testdir.makepyfile("""
@@ -322,7 +318,7 @@ class TestPDB:
child = testdir.spawn_pytest(str(p1))
child.expect("x = 5")
child.sendeof()
self.flush(child)
def test_pdb_collection_failure_is_shown(self, testdir):
p1 = testdir.makepyfile("""xxx """)
@@ -351,8 +347,7 @@ class TestPDB:
child.expect("enter_pdb_hook")
child.send('c\n')
child.sendeof()
self.flush(child)
def test_pdb_custom_cls(self, testdir, custom_pdb_calls):
p1 = testdir.makepyfile("""xxx """)
@@ -370,6 +370,31 @@ class TestFixtureReporting:
"*1 failed*1 error*",
])
def test_setup_teardown_output_and_test_failure(self, testdir):
""" Test for issue #442 """
testdir.makepyfile("""
def setup_function(function):
print ("setup func")
def test_fail():
assert 0, "failingfunc"
def teardown_function(function):
print ("teardown func")
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
"*test_fail*",
"*def test_fail():",
"*failingfunc*",
"*Captured stdout setup*",
"*setup func*",
"*Captured stdout teardown*",
"*teardown func*",
"*1 failed*",
])
class TestTerminalFunctional:
def test_deselected(self, testdir):
testpath = testdir.makepyfile("""
@@ -1,5 +1,6 @@
from _pytest.main import EXIT_NOTESTSCOLLECTED
import pytest
import gc
def test_simple_unittest(testdir):
testpath = testdir.makepyfile("""
@@ -134,6 +135,28 @@ def test_teardown(testdir):
assert passed == 2
assert passed + skipped + failed == 2
def test_teardown_issue1649(testdir):
"""
Are TestCase objects cleaned up? Often unittest TestCase objects set
attributes that are large and expensive during setUp.
The TestCase will not be cleaned up if the test fails, because it
would then exist in the stackframe.
"""
testpath = testdir.makepyfile("""
import unittest
class TestCaseObjectsShouldBeCleanedUp(unittest.TestCase):
def setUp(self):
self.an_expensive_object = 1
def test_demo(self):
pass
""")
testdir.inline_run("-s", testpath)
gc.collect()
for obj in gc.get_objects():
assert type(obj).__name__ != 'TestCaseObjectsShouldBeCleanedUp'
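
For background on what the ``gc`` scan above verifies: once nothing references the ``TestCase`` instance any more, whatever ``setUp`` attached to it can be reclaimed. A standalone sketch of that idea with plain ``unittest`` (illustrative only, not pytest's implementation):

    # Illustrative only: dropping the last reference to a TestCase lets the
    # attributes created in setUp be garbage collected.
    import gc
    import unittest

    class Expensive(unittest.TestCase):
        def setUp(self):
            self.payload = [0] * 100000  # stand-in for an expensive attribute

        def test_demo(self):
            pass

    case = Expensive("test_demo")
    case.run(unittest.TestResult())
    case = None  # analogous to the cleanup described in the changelog entry
    gc.collect()
    assert all(type(obj).__name__ != "Expensive" for obj in gc.get_objects())
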
@pytest.mark.skipif("sys.version_info < (2,7)") @pytest.mark.skipif("sys.version_info < (2,7)")
def test_unittest_skip_issue148(testdir): def test_unittest_skip_issue148(testdir):
testpath = testdir.makepyfile(""" testpath = testdir.makepyfile("""

42
tox.ini
@@ -3,9 +3,18 @@ minversion=2.0
distshare={homedir}/.tox/distshare
# make sure to update enviroment list on appveyor.yml
envlist=
linting
py26
py27
py33
py34
py35
pypy
{py27,py35}-{pexpect,xdist,trial}
py27-nobyte
doctesting
freeze
docs
[testenv]
commands= pytest --lsof -rfsxX {posargs:testing}
@@ -33,15 +42,19 @@ deps=pytest-xdist>=1.13
commands=
pytest -n3 -rfsxX --runpytest=subprocess {posargs:testing}
[testenv:genscript]
commands= pytest --genscript=pytest1
[testenv:linting]
basepython = python2.7
deps =
flake8
# pygments required by rst-lint
pygments
restructuredtext_lint
commands =
check-manifest
flake8 pytest.py _pytest testing
rst-lint CHANGELOG.rst HOWTORELEASE.rst README.rst
[testenv:py27-xdist]
deps=pytest-xdist>=1.13
@@ -90,10 +103,6 @@ deps={[testenv:py27-trial]deps}
commands=
pytest -ra {posargs:testing/test_unittest.py}
[testenv:doctest]
commands=pytest --doctest-modules _pytest
deps=
[testenv:docs]
basepython=python
changedir=doc/en
@@ -106,9 +115,12 @@ commands=
[testenv:doctesting]
basepython = python
usedevelop=True
skipsdist=True
deps=PyYAML
commands=
pytest -rfsxX doc/en
pytest --doctest-modules {toxinidir}/_pytest
[testenv:regen]
changedir=doc/en
@@ -139,7 +151,7 @@ commands=
[testenv:coveralls]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH COVERALLS_REPO_TOKEN
usedevelop=True
basepython=python3.5
changedir=.
deps =
{[testenv]deps}