* rename "rep" to "report" in reporting hooks

* refine docs
* bump version data
* improve announcement

--HG--
branch : 1.0.x
holger krekel 2009-08-04 12:00:04 +02:00
parent 67c4503d1b
commit 8c8617c354
27 changed files with 181 additions and 603 deletions
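The rename is mechanical but affects every hook implementation: because hooks are called with keyword arguments, the parameter name is part of the hook contract, so plugins that accepted the old `rep` keyword must switch to `report` together with all callers. A minimal sketch of an affected hook after the rename (hypothetical plugin, using the outcome attributes seen throughout this commit):

```python
# Hypothetical plugin sketching the renamed hook argument: the hook
# previously took "rep"; after this commit it takes "report".

class OutcomeCounter:
    """Tally test outcomes as pytest_runtest_logreport calls arrive."""

    def __init__(self):
        self.counts = {"passed": 0, "failed": 0, "skipped": 0}

    def pytest_runtest_logreport(self, report):  # formerly: (self, rep)
        for outcome in self.counts:
            if getattr(report, outcome, False):
                self.counts[outcome] += 1
```

This is why every call site in the diff below changes `rep=...` to `report=...` in lockstep with the hook signatures.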

View File

@@ -1,3 +1,9 @@
Changes between 1.0.0b9 and 1.0.0
=====================================
* more terse reporting: try to show filesystem paths relative to the current dir
* improve xfail output a bit
Changes between 1.0.0b8 and 1.0.0b9
=====================================

View File

@@ -60,7 +60,9 @@ example/execnet/svn-sync-repo.py
example/execnet/sysinfo.py
example/funcarg/conftest.py
example/funcarg/costlysetup/conftest.py
example/funcarg/costlysetup/sub1/__init__.py
example/funcarg/costlysetup/sub1/test_quick.py
example/funcarg/costlysetup/sub2/__init__.py
example/funcarg/costlysetup/sub2/test_two.py
example/funcarg/mysetup/__init__.py
example/funcarg/mysetup/conftest.py

View File

@@ -1,54 +1,63 @@
py.test / py lib 1.0.0: new test plugins, funcargs and cleanups
============================================================================
Welcome to the 1.0 release bringing new flexibility and
power to testing with Python. Main news:
pylib 1.0.0 released: testing-with-python innovations continue
--------------------------------------------------------------------
* funcargs - new flexibility and zero-boilerplate fixtures for Python testing:
Took a few betas but finally I uploaded a `1.0.0 py lib release`_,
featuring the mature and powerful py.test tool and "execnet-style"
*elastic* distributed programming. With the new release, there are
many new advanced automated testing features - here is a quick summary:
- separate test code, configuration and setup
* funcargs_ - pythonic zero-boilerplate fixtures for Python test functions:
- totally separates test code, test configuration and test setup
- ideal for integration and functional tests
- more powerful dynamic generation of tests
- allows for flexible and natural test parametrization schemes
* new plugin architecture, allowing project-specific and
cross-project single-file plugins. Many useful examples
shipped by default:
* new `plugin architecture`_, allowing easy-to-write project-specific and cross-project single-file plugins. The most notable new external plugin is `oejskit`_ which naturally enables **running and reporting of javascript-unittests in real-life browsers**.
* pytest_unittest.py: run and integrate traditional unittest.py tests
* pytest_xfail.py: mark tests as "expected to fail" and report separately.
* pytest_pocoo.py: automatically send tracebacks to pocoo paste service
* pytest_monkeypatch.py: safely monkeypatch from tests
* pytest_figleaf.py: generate html coverage reports
* pytest_resultlog.py: generate buildbot-friendly reporting output
* many new features done in easy-to-improve `default plugins`_, highlights:
and many more!
* xfail: mark tests as "expected to fail" and report separately.
* pastebin: automatically send tracebacks to pocoo paste service
* capture: flexibly capture stdout/stderr of subprocesses, per-test ...
* monkeypatch: safely monkeypatch modules/classes from within tests
* unittest: run and integrate traditional unittest.py tests
* figleaf: generate html coverage reports with the figleaf module
* resultlog: generate buildbot-friendly reporting output
* ...
* distributed testing and distributed execution (py.execnet):
* `distributed testing`_ and `elastic distributed execution`_:
- new unified "TX" URL scheme for specifying remote resources
- new sync/async ways to handle multiple remote processes
- new unified "TX" URL scheme for specifying remote processes
- new distribution modes "--dist=each" and "--dist=load"
- new sync/async ways to handle 1:N communication
- improved documentation
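Most of the plugins listed above are small single-file modules. As an illustration of their scale, the core idea behind the monkeypatch plugin can be sketched in a few lines (a simplified stand-in, not the shipped implementation):

```python
class MonkeyPatch:
    """Simplified sketch of the monkeypatch idea: record every
    attribute change so it can be undone after the test runs."""

    def __init__(self):
        self._undo = []

    def setattr(self, obj, name, value):
        # remember the original value before overwriting it
        self._undo.append((obj, name, getattr(obj, name)))
        setattr(obj, name, value)

    def undo(self):
        # restore in reverse order so nested patches unwind correctly
        for obj, name, original in reversed(self._undo):
            setattr(obj, name, original)
        del self._undo[:]
```

The real plugin additionally hands each test a fresh instance and calls `undo()` automatically during teardown, which is what makes monkeypatching from tests safe.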
See the py.test and py lib documentation for more info:
The py lib continues to offer most of the functionality used by
the testing tool in `independent namespaces`_.
Some non-test-related code, notably greenlets/co-routines and
api-generation, now lives in projects of its own, which simplifies the
installation procedure because no C extensions are required anymore.
The whole package should work well with Linux, Win32 and OSX, on Python
2.3, 2.4, 2.5 and 2.6. (Expect Python3 compatibility soon!)
For more info, see the py.test and py lib documentation:
http://pytest.org
http://pylib.org
The py lib is now smaller and focuses more on offering
functionality used by the py.test tool in independent
namespaces:
* py.execnet: elastic code deployment to SSH, Socket and local sub processes
* py.code: higher-level introspection and dynamic generation of python code
* py.path: path abstractions over local and subversion files
Some non-strictly-test-related code, notably greenlets/co-routines
and apigen, now lives on its own and has been removed from the package,
further simplifying the installation procedure.
The whole package works well with Linux, OSX and Win32, on
Python 2.3, 2.4, 2.5 and 2.6. (Expect Python3 compatibility soon!)
best,
have fun,
holger
.. _`independent namespaces`: http://pylib.org
.. _`funcargs`: http://codespeak.net/py/dist/test/funcargs.html
.. _`plugin architecture`: http://codespeak.net/py/dist/test/extend.html
.. _`default plugins`: http://codespeak.net/py/dist/test/plugin/index.html
.. _`distributed testing`: http://codespeak.net/py/dist/test/dist.html
.. _`elastic distributed execution`: http://codespeak.net/py/dist/execnet.html
.. _`1.0.0 py lib release`: http://pypi.python.org/pypi/py
.. _`oejskit`: http://codespeak.net/py/dist/test/plugin/oejskit.html

View File

@@ -14,6 +14,17 @@ class css:
class Page(object):
doctype = ('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"'
' "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n')
googlefragment = """
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try {
var pageTracker = _gat._getTracker("UA-7597274-3");
pageTracker._trackPageview();
} catch(err) {}</script>
"""
def __init__(self, project, title, targetpath, stylesheeturl=None,
type="text/html", encoding="ISO-8859-1"):
@@ -47,8 +58,10 @@ class Page(object):
def fill_menubar(self):
items = [
self.a_docref("pylib index", "index.html"),
self.a_docref("py.test index", "test/test.html"),
self.a_docref("py.test plugins", "test/plugin/index.html"),
self.a_docref("test doc-index", "test/test.html"),
self.a_docref("test quickstart", "test/quickstart.html"),
self.a_docref("test features", "test/features.html"),
self.a_docref("test plugins", "test/plugin/index.html"),
self.a_docref("py.execnet", "execnet.html"),
#self.a_docref("py.code", "code.html"),
#self.a_apigenref("api", "api/index.html"),
@@ -91,6 +104,7 @@ class Page(object):
def unicode(self, doctype=True):
page = self._root.unicode()
page = page.replace("</body>", self.googlefragment + "</body>")
if doctype:
return self.doctype + page
else:

View File

@@ -27,7 +27,7 @@ Other (minor) support functionality
`miscellaneous features`_ describes some small but nice py lib features.
.. _`PyPI project page`: http://pypi.python.org/pypi?%3Aaction=pkg_edit&name=py
.. _`PyPI project page`: http://pypi.python.org/pypi/py/
For the latest Release, see `PyPI project page`_

View File

@@ -3,19 +3,21 @@
==========================================================
Since version 1.0 py.test features the "funcarg" mechanism which
allows a test function to take arguments which will be independently
provided by factory functions. Factory functions are automatically
discovered and allow encapsulating all necessary setup and glue code
for running tests. Compared to `xUnit style`_ the new mechanism is
meant to:
allows a test function to take arguments independently provided
by factory functions. Factory functions allow encapsulating
all setup and fixture glue code into nicely separated objects
and provide a natural way of writing Python test functions.
Compared to `xUnit style`_ the new mechanism is meant to:
* make test functions easier to write and to read
* isolate test fixture creation to a single place
* bring new flexibility and power to test state management
* enable running of a test function with different values
* naturally extend towards parametrizing test functions
with multiple argument sets
(superseding `old-style generative tests`_)
* enable creation of helper objects that interact with the execution
of a test function, see the `blog post about the monkeypatch funcarg`_.
* enable creation of zero-boilerplate test helper objects that
interact with the execution of a test function, see the
`blog post about the monkeypatch funcarg`_.
If you find issues or have further suggestions for improving
the mechanism, you are welcome to check out the `contact possibilities`_ page.

View File

@@ -39,7 +39,7 @@ hook specification sourcecode
def pytest_collectstart(collector):
""" collector starts collecting. """
def pytest_collectreport(rep):
def pytest_collectreport(report):
""" collector finished collecting. """
def pytest_deselected(items):
@@ -89,7 +89,7 @@ hook specification sourcecode
""" make ItemTestReport for the given item and call outcome. """
pytest_runtest_makereport.firstresult = True
def pytest_runtest_logreport(rep):
def pytest_runtest_logreport(report):
""" process item test report. """
# special handling for final teardown - somewhat internal for now

View File

@@ -1,33 +1,33 @@
.. _`terminal`: terminal.html
.. _`pytest_recwarn.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_recwarn.py
.. _`pytest_recwarn.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_recwarn.py
.. _`unittest`: unittest.html
.. _`pytest_monkeypatch.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_monkeypatch.py
.. _`pytest_keyword.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_keyword.py
.. _`pytest_monkeypatch.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_monkeypatch.py
.. _`pytest_keyword.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_keyword.py
.. _`pastebin`: pastebin.html
.. _`plugins`: index.html
.. _`pytest_capture.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_capture.py
.. _`pytest_doctest.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_doctest.py
.. _`pytest_capture.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_capture.py
.. _`pytest_doctest.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_doctest.py
.. _`capture`: capture.html
.. _`hooklog`: hooklog.html
.. _`pytest_restdoc.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_restdoc.py
.. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_hooklog.py
.. _`pytest_pastebin.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_pastebin.py
.. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_figleaf.py
.. _`pytest_restdoc.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_restdoc.py
.. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_hooklog.py
.. _`pytest_pastebin.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_pastebin.py
.. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_figleaf.py
.. _`xfail`: xfail.html
.. _`contact`: ../../contact.html
.. _`checkout the py.test development version`: ../../download.html#checkout
.. _`oejskit`: oejskit.html
.. _`pytest_xfail.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_xfail.py
.. _`pytest_xfail.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_xfail.py
.. _`figleaf`: figleaf.html
.. _`extend`: ../extend.html
.. _`pytest_terminal.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_terminal.py
.. _`pytest_terminal.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_terminal.py
.. _`recwarn`: recwarn.html
.. _`pytest_pdb.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_pdb.py
.. _`pytest_pdb.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_pdb.py
.. _`monkeypatch`: monkeypatch.html
.. _`resultlog`: resultlog.html
.. _`keyword`: keyword.html
.. _`restdoc`: restdoc.html
.. _`pytest_unittest.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_unittest.py
.. _`pytest_unittest.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_unittest.py
.. _`doctest`: doctest.html
.. _`pytest_resultlog.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_resultlog.py
.. _`pytest_resultlog.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_resultlog.py
.. _`pdb`: pdb.html

View File

@@ -0,0 +1 @@
#

View File

@@ -0,0 +1 @@
#

View File

@@ -32,7 +32,7 @@ initpkg(__name__,
author_email = "holger at merlinux.eu, py-dev at codespeak.net",
long_description = globals()['__doc__'],
classifiers = [
"Development Status :: 4 - Beta",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX",

View File

@@ -34,16 +34,16 @@ class LoopState(object):
return "<LoopState exitstatus=%r shuttingdown=%r len(colitems)=%d>" % (
self.exitstatus, self.shuttingdown, len(self.colitems))
def pytest_runtest_logreport(self, rep):
if rep.item in self.dsession.item2nodes:
if rep.when != "teardown": # otherwise we have already managed it
self.dsession.removeitem(rep.item, rep.node)
if rep.failed:
def pytest_runtest_logreport(self, report):
if report.item in self.dsession.item2nodes:
if report.when != "teardown": # otherwise we already managed it
self.dsession.removeitem(report.item, report.node)
if report.failed:
self.testsfailed = True
def pytest_collectreport(self, rep):
if rep.passed:
self.colitems.extend(rep.result)
def pytest_collectreport(self, report):
if report.passed:
self.colitems.extend(report.result)
def pytest_testnodeready(self, node):
self.dsession.addnode(node)
@@ -199,7 +199,7 @@ class DSession(Session):
else:
self.config.hook.pytest_collectstart(collector=next)
colrep = self.config.hook.pytest_make_collect_report(collector=next)
self.queueevent("pytest_collectreport", rep=colrep)
self.queueevent("pytest_collectreport", report=colrep)
if self.config.option.dist == "each":
self.senditems_each(senditems)
else:
@@ -267,7 +267,7 @@ class DSession(Session):
info = "!!! Node %r crashed during running of test %r" %(node, item)
rep = runner.ItemTestReport(item=item, excinfo=info, when="???")
rep.node = node
self.config.hook.pytest_runtest_logreport(rep=rep)
self.config.hook.pytest_runtest_logreport(report=rep)
def setup(self):
""" setup any necessary resources ahead of the test run. """

View File

@@ -1,242 +1,5 @@
import py
EXPECTTIMEOUT=10.0
class TestGeneralUsage:
def test_config_error(self, testdir):
testdir.makeconftest("""
def pytest_configure(config):
raise config.Error("hello")
""")
result = testdir.runpytest(testdir.tmpdir)
assert result.ret != 0
assert result.stderr.fnmatch_lines([
'*ERROR: hello'
])
def test_config_preparse_plugin_option(self, testdir):
testdir.makepyfile(pytest_xyz="""
def pytest_addoption(parser):
parser.addoption("--xyz", dest="xyz", action="store")
""")
testdir.makepyfile(test_one="""
import py
def test_option():
assert py.test.config.option.xyz == "123"
""")
result = testdir.runpytest("-p", "xyz", "--xyz=123")
assert result.ret == 0
assert result.stdout.fnmatch_lines([
'*1 passed*',
])
def test_basetemp(self, testdir):
mytemp = testdir.tmpdir.mkdir("mytemp")
p = testdir.makepyfile("""
import py
def test_1():
py.test.ensuretemp('xyz')
""")
result = testdir.runpytest(p, '--basetemp=%s' %mytemp)
assert result.ret == 0
assert mytemp.join('xyz').check(dir=1)
def test_assertion_magic(self, testdir):
p = testdir.makepyfile("""
def test_this():
x = 0
assert x
""")
result = testdir.runpytest(p)
extra = result.stdout.fnmatch_lines([
"> assert x",
"E assert 0",
])
assert result.ret == 1
def test_nested_import_error(self, testdir):
p = testdir.makepyfile("""
import import_fails
def test_this():
assert import_fails.a == 1
""")
testdir.makepyfile(import_fails="import does_not_work")
result = testdir.runpytest(p)
extra = result.stdout.fnmatch_lines([
"> import import_fails",
"E ImportError: No module named does_not_work",
])
assert result.ret == 1
def test_skipped_reasons(self, testdir):
testdir.makepyfile(
test_one="""
from conftest import doskip
def setup_function(func):
doskip()
def test_func():
pass
class TestClass:
def test_method(self):
doskip()
""",
test_two = """
from conftest import doskip
doskip()
""",
conftest = """
import py
def doskip():
py.test.skip('test')
"""
)
result = testdir.runpytest()
extra = result.stdout.fnmatch_lines([
"*test_one.py ss",
"*test_two.py S",
"___* skipped test summary *_",
"*conftest.py:3: *3* Skipped: 'test'",
])
assert result.ret == 0
def test_deselected(self, testdir):
testpath = testdir.makepyfile("""
def test_one():
pass
def test_two():
pass
def test_three():
pass
"""
)
result = testdir.runpytest("-k", "test_two:", testpath)
extra = result.stdout.fnmatch_lines([
"*test_deselected.py ..",
"=* 1 test*deselected by 'test_two:'*=",
])
assert result.ret == 0
def test_no_skip_summary_if_failure(self, testdir):
testdir.makepyfile("""
import py
def test_ok():
pass
def test_fail():
assert 0
def test_skip():
py.test.skip("dontshow")
""")
result = testdir.runpytest()
assert result.stdout.str().find("skip test summary") == -1
assert result.ret == 1
def test_passes(self, testdir):
p1 = testdir.makepyfile("""
def test_passes():
pass
class TestClass:
def test_method(self):
pass
""")
old = p1.dirpath().chdir()
try:
result = testdir.runpytest()
finally:
old.chdir()
extra = result.stdout.fnmatch_lines([
"test_passes.py ..",
"* 2 pass*",
])
assert result.ret == 0
def test_header_trailer_info(self, testdir):
p1 = testdir.makepyfile("""
def test_passes():
pass
""")
result = testdir.runpytest()
verinfo = ".".join(map(str, py.std.sys.version_info[:3]))
extra = result.stdout.fnmatch_lines([
"*===== test session starts ====*",
"python: platform %s -- Python %s*" %(
py.std.sys.platform, verinfo), # , py.std.sys.executable),
"*test_header_trailer_info.py .",
"=* 1 passed in *.[0-9][0-9] seconds *=",
])
def test_traceback_failure(self, testdir):
p1 = testdir.makepyfile("""
def g():
return 2
def f(x):
assert x == g()
def test_onefails():
f(3)
""")
result = testdir.runpytest(p1)
result.stdout.fnmatch_lines([
"*test_traceback_failure.py F",
"====* FAILURES *====",
"____*____",
"",
" def test_onefails():",
"> f(3)",
"",
"*test_*.py:6: ",
"_ _ _ *",
#"",
" def f(x):",
"> assert x == g()",
"E assert 3 == 2",
"E + where 2 = g()",
"",
"*test_traceback_failure.py:4: AssertionError"
])
def test_showlocals(self, testdir):
p1 = testdir.makepyfile("""
def test_showlocals():
x = 3
y = "x" * 5000
assert 0
""")
result = testdir.runpytest(p1, '-l')
result.stdout.fnmatch_lines([
#"_ _ * Locals *",
"x* = 3",
"y* = 'xxxxxx*"
])
def test_verbose_reporting(self, testdir):
p1 = testdir.makepyfile("""
import py
def test_fail():
raise ValueError()
def test_pass():
pass
class TestClass:
def test_skip(self):
py.test.skip("hello")
def test_gen():
def check(x):
assert x == 1
yield check, 0
""")
result = testdir.runpytest(p1, '-v')
result.stdout.fnmatch_lines([
"*test_verbose_reporting.py:2: test_fail*FAIL*",
"*test_verbose_reporting.py:4: test_pass*PASS*",
"*test_verbose_reporting.py:7: TestClass.test_skip*SKIP*",
"*test_verbose_reporting.py:10: test_gen*FAIL*",
])
assert result.ret == 1
result = testdir.runpytest(p1, '-v', '-n 1')
result.stdout.fnmatch_lines([
"*FAIL*test_verbose_reporting.py:2: test_fail*",
])
assert result.ret == 1
class TestDistribution:
def test_dist_conftest_options(self, testdir):
p1 = testdir.tmpdir.ensure("dir", 'p1.py')
@@ -383,40 +146,3 @@ class TestDistribution:
result.stdout.fnmatch_lines(["2...4"])
result.stdout.fnmatch_lines(["2...5"])
class TestInteractive:
def test_simple_looponfail_interaction(self, testdir):
p1 = testdir.makepyfile("""
def test_1():
assert 1 == 0
""")
p1.setmtime(p1.mtime() - 50.0)
child = testdir.spawn_pytest("--looponfail %s" % p1)
child.expect("assert 1 == 0")
child.expect("test_simple_looponfail_interaction.py:")
child.expect("1 failed")
child.expect("waiting for changes")
p1.write(py.code.Source("""
def test_1():
assert 1 == 1
"""))
child.expect("MODIFIED.*test_simple_looponfail_interaction.py", timeout=4.0)
child.expect("1 passed", timeout=5.0)
child.kill(15)
class TestKeyboardInterrupt:
def test_raised_in_testfunction(self, testdir):
p1 = testdir.makepyfile("""
import py
def test_fail():
raise ValueError()
def test_inter():
raise KeyboardInterrupt()
""")
result = testdir.runpytest(p1)
result.stdout.fnmatch_lines([
#"*test_inter() INTERRUPTED",
"*KEYBOARD INTERRUPT*",
"*1 failed*",
])

View File

@@ -81,8 +81,8 @@ class TestDSession:
session.triggertesting([modcol])
name, args, kwargs = session.queue.get(block=False)
assert name == 'pytest_collectreport'
rep = kwargs['rep']
assert len(rep.result) == 1
report = kwargs['report']
assert len(report.result) == 1
def test_triggertesting_item(self, testdir):
item = testdir.getitem("def test_func(): pass")
@@ -134,7 +134,7 @@ class TestDSession:
session.queueevent(None)
session.loop_once(loopstate)
assert node.sent == [[item]]
session.queueevent("pytest_runtest_logreport", rep=run(item, node))
session.queueevent("pytest_runtest_logreport", report=run(item, node))
session.loop_once(loopstate)
assert loopstate.shuttingdown
assert not loopstate.testsfailed
@@ -182,7 +182,7 @@ class TestDSession:
item = item1
node = nodes[0]
when = "call"
session.queueevent("pytest_runtest_logreport", rep=rep)
session.queueevent("pytest_runtest_logreport", report=rep)
reprec = testdir.getreportrecorder(session)
print session.item2nodes
loopstate = session._initloopstate([])
@@ -190,7 +190,7 @@ class TestDSession:
session.loop_once(loopstate)
assert len(session.item2nodes[item1]) == 1
rep.when = "teardown"
session.queueevent("pytest_runtest_logreport", rep=rep)
session.queueevent("pytest_runtest_logreport", report=rep)
session.loop_once(loopstate)
assert len(session.item2nodes[item1]) == 1
@@ -249,7 +249,7 @@ class TestDSession:
assert node.sent == [[item]]
ev = run(item, node, excinfo=excinfo)
session.queueevent("pytest_runtest_logreport", rep=ev)
session.queueevent("pytest_runtest_logreport", report=ev)
session.loop_once(loopstate)
assert loopstate.shuttingdown
session.queueevent("pytest_testnodedown", node=node, error=None)
@@ -286,8 +286,8 @@ class TestDSession:
# run tests ourselves and produce reports
ev1 = run(items[0], node, "fail")
ev2 = run(items[1], node, None)
session.queueevent("pytest_runtest_logreport", rep=ev1) # a failing one
session.queueevent("pytest_runtest_logreport", rep=ev2)
session.queueevent("pytest_runtest_logreport", report=ev1) # a failing one
session.queueevent("pytest_runtest_logreport", report=ev2)
# now call the loop
loopstate = session._initloopstate(items)
session.loop_once(loopstate)
@@ -302,7 +302,7 @@ class TestDSession:
loopstate = session._initloopstate([])
loopstate.shuttingdown = True
reprec = testdir.getreportrecorder(session)
session.queueevent("pytest_runtest_logreport", rep=run(item, node))
session.queueevent("pytest_runtest_logreport", report=run(item, node))
session.loop_once(loopstate)
assert not reprec.getcalls("pytest_testnodedown")
session.queueevent("pytest_testnodedown", node=node, error=None)
@@ -343,7 +343,7 @@ class TestDSession:
node = MockNode()
session.addnode(node)
session.senditems_load([item])
session.queueevent("pytest_runtest_logreport", rep=run(item, node))
session.queueevent("pytest_runtest_logreport", report=run(item, node))
loopstate = session._initloopstate([])
session.loop_once(loopstate)
assert node._shutdown is True
@@ -369,10 +369,10 @@ class TestDSession:
session.senditems_load([item1])
# node2pending will become empty when the loop sees the report
rep = run(item1, node)
session.queueevent("pytest_runtest_logreport", rep=run(item1, node))
session.queueevent("pytest_runtest_logreport", report=run(item1, node))
# but we have a collection pending
session.queueevent("pytest_collectreport", rep=colreport)
session.queueevent("pytest_collectreport", report=colreport)
loopstate = session._initloopstate([])
session.loop_once(loopstate)
@@ -396,11 +396,11 @@ class TestDSession:
dsession = DSession(config)
hookrecorder = testdir.getreportrecorder(config).hookrecorder
dsession.main([config.getfsnode(p1)])
rep = hookrecorder.popcall("pytest_runtest_logreport").rep
rep = hookrecorder.popcall("pytest_runtest_logreport").report
assert rep.passed
rep = hookrecorder.popcall("pytest_runtest_logreport").rep
rep = hookrecorder.popcall("pytest_runtest_logreport").report
assert rep.skipped
rep = hookrecorder.popcall("pytest_runtest_logreport").rep
rep = hookrecorder.popcall("pytest_runtest_logreport").report
assert rep.failed
# see that the node is really down
node = hookrecorder.popcall("pytest_testnodedown").node

View File

@@ -115,7 +115,7 @@ class TestMasterSlaveConnection:
node = mysetup.makenode(item.config)
node.send(item)
kwargs = mysetup.geteventargs("pytest_runtest_logreport")
rep = kwargs['rep']
rep = kwargs['report']
assert rep.passed
print rep
assert rep.item == item
@@ -135,10 +135,10 @@ class TestMasterSlaveConnection:
node.send(item)
for outcome in "passed failed skipped".split():
kwargs = mysetup.geteventargs("pytest_runtest_logreport")
rep = kwargs['rep']
assert getattr(rep, outcome)
report = kwargs['report']
assert getattr(report, outcome)
node.sendlist(items)
for outcome in "passed failed skipped".split():
rep = mysetup.geteventargs("pytest_runtest_logreport")['rep']
rep = mysetup.geteventargs("pytest_runtest_logreport")['report']
assert getattr(rep, outcome)

View File

@@ -56,9 +56,9 @@ class TXNode(object):
self._down = True
self.notify("pytest_testnodedown", error=None, node=self)
elif eventname == "pytest_runtest_logreport":
rep = kwargs['rep']
rep = kwargs['report']
rep.node = self
self.notify("pytest_runtest_logreport", rep=rep)
self.notify("pytest_runtest_logreport", report=rep)
else:
self.notify(eventname, *args, **kwargs)
except KeyboardInterrupt:
@@ -110,8 +110,8 @@ class SlaveNode(object):
def sendevent(self, eventname, *args, **kwargs):
self.channel.send((eventname, args, kwargs))
def pytest_runtest_logreport(self, rep):
self.sendevent("pytest_runtest_logreport", rep=rep)
def pytest_runtest_logreport(self, report):
self.sendevent("pytest_runtest_logreport", report=report)
def run(self):
channel = self.channel

View File

@@ -137,9 +137,9 @@ def slave_runsession(channel, config, fullwidth, hasmarkup):
session.shouldclose = channel.isclosed
class Failures(list):
def pytest_runtest_logreport(self, rep):
if rep.failed:
self.append(rep)
def pytest_runtest_logreport(self, report):
if report.failed:
self.append(report)
pytest_collectreport = pytest_runtest_logreport
failreports = Failures()

View File

@@ -33,7 +33,7 @@ def pytest_collect_file(path, parent):
def pytest_collectstart(collector):
""" collector starts collecting. """
def pytest_collectreport(rep):
def pytest_collectreport(report):
""" collector finished collecting. """
def pytest_deselected(items):
@@ -83,7 +83,7 @@ def pytest_runtest_makereport(item, call):
""" make ItemTestReport for the given item and call outcome. """
pytest_runtest_makereport.firstresult = True
def pytest_runtest_logreport(rep):
def pytest_runtest_logreport(report):
""" process item test report. """
# special handling for final teardown - somewhat internal for now

View File

@@ -132,8 +132,8 @@ class TestDoctests:
""")
reprec = testdir.inline_run(p)
call = reprec.getcall("pytest_runtest_logreport")
assert call.rep.failed
assert call.rep.longrepr
assert call.report.failed
assert call.report.longrepr
# XXX
#testitem, = items
#excinfo = py.test.raises(Failed, "testitem.runtest()")

View File

@@ -341,7 +341,7 @@ class ReportRecorder(object):
# functionality for test reports
def getreports(self, names="pytest_runtest_logreport pytest_collectreport"):
return [x.rep for x in self.getcalls(names)]
return [x.report for x in self.getcalls(names)]
def matchreport(self, inamepart="", names="pytest_runtest_logreport pytest_collectreport"):
""" return a testreport whose dotted import path matches """
@@ -406,7 +406,7 @@ def test_reportrecorder(testdir):
skipped = False
when = "call"
recorder.hook.pytest_runtest_logreport(rep=rep)
recorder.hook.pytest_runtest_logreport(report=rep)
failures = recorder.getfailures()
assert failures == [rep]
failures = recorder.getfailures()
@@ -420,14 +420,14 @@ def test_reportrecorder(testdir):
when = "call"
rep.passed = False
rep.skipped = True
recorder.hook.pytest_runtest_logreport(rep=rep)
recorder.hook.pytest_runtest_logreport(report=rep)
modcol = testdir.getmodulecol("")
rep = modcol.config.hook.pytest_make_collect_report(collector=modcol)
rep.passed = False
rep.failed = True
rep.skipped = False
recorder.hook.pytest_collectreport(rep=rep)
recorder.hook.pytest_collectreport(report=rep)
passed, skipped, failed = recorder.listoutcomes()
assert not passed and skipped and failed
@@ -440,7 +440,7 @@ def test_reportrecorder(testdir):
recorder.unregister()
recorder.clear()
recorder.hook.pytest_runtest_logreport(rep=rep)
recorder.hook.pytest_runtest_logreport(report=rep)
py.test.raises(ValueError, "recorder.getfailures()")
class LineComp:

View File

@@ -59,25 +59,25 @@ class ResultLog(object):
testpath = generic_path(node)
self.write_log_entry(testpath, shortrepr, longrepr)
def pytest_runtest_logreport(self, rep):
code = rep.shortrepr
if rep.passed:
def pytest_runtest_logreport(self, report):
code = report.shortrepr
if report.passed:
longrepr = ""
elif rep.failed:
longrepr = str(rep.longrepr)
elif rep.skipped:
longrepr = str(rep.longrepr.reprcrash.message)
self.log_outcome(rep.item, code, longrepr)
elif report.failed:
longrepr = str(report.longrepr)
elif report.skipped:
longrepr = str(report.longrepr.reprcrash.message)
self.log_outcome(report.item, code, longrepr)
def pytest_collectreport(self, rep):
if not rep.passed:
if rep.failed:
def pytest_collectreport(self, report):
if not report.passed:
if report.failed:
code = "F"
else:
assert rep.skipped
assert report.skipped
code = "S"
longrepr = str(rep.longrepr.reprcrash)
self.log_outcome(rep.collector, code, longrepr)
longrepr = str(report.longrepr.reprcrash)
self.log_outcome(report.collector, code, longrepr)
def pytest_internalerror(self, excrepr):
path = excrepr.reprcrash.path

View File

@@ -40,7 +40,7 @@ def pytest_runtest_protocol(item):
if item.config.getvalue("boxed"):
reports = forked_run_report(item)
for rep in reports:
item.config.hook.pytest_runtest_logreport(rep=rep)
item.config.hook.pytest_runtest_logreport(report=rep)
else:
runtestprotocol(item)
return True
@@ -89,7 +89,7 @@ def call_and_report(item, when, log=True):
hook = item.config.hook
report = hook.pytest_runtest_makereport(item=item, call=call)
if log and (when == "call" or not report.passed):
hook.pytest_runtest_logreport(rep=report)
hook.pytest_runtest_logreport(report=report)
return report
def call_runtest_hook(item, when):
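The hunks above rename the keyword on the calling side (`hook.pytest_runtest_logreport(report=...)`) in step with the implementations. A minimal sketch of why both sides must change together — hook arguments are passed by keyword, so the parameter name is part of the hook contract. This is illustrative only, not the real py.test hook machinery:

```python
def dispatch(hook_impls, **kwargs):
    # Call every registered implementation with the caller's keyword args;
    # the argument *name* is therefore part of the hook's interface.
    return [impl(**kwargs) for impl in hook_impls]

def pytest_runtest_logreport(report):
    # New-style implementation: accepts 'report', not the old 'rep'.
    return "logged %s" % report
```

Calling `dispatch([pytest_runtest_logreport], report="F")` works, while the old spelling `rep="F"` raises `TypeError` — which is why the rename had to be applied across all callers and implementations in one commit.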

View File

@@ -187,7 +187,8 @@ class TerminalReporter:
def pytest__teardown_final_logerror(self, rep):
self.stats.setdefault("error", []).append(rep)
def pytest_runtest_logreport(self, rep):
def pytest_runtest_logreport(self, report):
rep = report
cat, letter, word = self.getcategoryletterword(rep)
if not letter and not word:
# probably passed setup/teardown
@@ -212,15 +213,15 @@ class TerminalReporter:
self._tw.write(" " + line)
self.currentfspath = -2
def pytest_collectreport(self, rep):
if not rep.passed:
if rep.failed:
self.stats.setdefault("error", []).append(rep)
msg = rep.longrepr.reprcrash.message
self.write_fspath_result(rep.collector.fspath, "E")
elif rep.skipped:
self.stats.setdefault("skipped", []).append(rep)
self.write_fspath_result(rep.collector.fspath, "S")
def pytest_collectreport(self, report):
if not report.passed:
if report.failed:
self.stats.setdefault("error", []).append(report)
msg = report.longrepr.reprcrash.message
self.write_fspath_result(report.collector.fspath, "E")
elif report.skipped:
self.stats.setdefault("skipped", []).append(report)
self.write_fspath_result(report.collector.fspath, "S")
def pytest_sessionstart(self, session):
self.write_sep("=", "test session starts", bold=True)
@@ -417,10 +418,10 @@ class CollectonlyReporter:
def pytest_itemstart(self, item, node=None):
self.outindent(item)
def pytest_collectreport(self, rep):
if not rep.passed:
self.outindent("!!! %s !!!" % rep.longrepr.reprcrash.message)
self._failed.append(rep)
def pytest_collectreport(self, report):
if not report.passed:
self.outindent("!!! %s !!!" % report.longrepr.reprcrash.message)
self._failed.append(report)
self.indent = self.indent[:-len(self.INDENT)]
def pytest_sessionfinish(self, session, exitstatus):
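The reporter hunks above bucket reports into a `stats` dict keyed by category via `dict.setdefault`. A small self-contained sketch of that pattern, using a hypothetical report stub rather than the real `TerminalReporter`:

```python
class FakeReport(object):
    # Hypothetical stand-in for a collect report.
    def __init__(self, passed=False, failed=False, skipped=False):
        self.passed = passed
        self.failed = failed
        self.skipped = skipped

def record_collectreport(stats, report):
    # Failed collection is bucketed as an error, skipped collection under
    # 'skipped', mirroring pytest_collectreport above; passing reports are
    # not recorded at all.
    if not report.passed:
        if report.failed:
            stats.setdefault("error", []).append(report)
        elif report.skipped:
            stats.setdefault("skipped", []).append(report)
    return stats
```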

View File

@@ -311,7 +311,7 @@ class TestCollectonly:
" <Function 'test_func'>",
])
rep.config.hook.pytest_collectreport(
rep=runner.CollectReport(modcol, [], excinfo=None))
report=runner.CollectReport(modcol, [], excinfo=None))
assert rep.indent == indent
def test_collectonly_skipped_module(self, testdir, linecomp):

View File

@@ -45,7 +45,7 @@ class Session(object):
if rep.passed:
for x in self.genitems(rep.result, keywordexpr):
yield x
self.config.hook.pytest_collectreport(rep=rep)
self.config.hook.pytest_collectreport(report=rep)
if self.shouldstop:
break
@@ -79,8 +79,8 @@ class Session(object):
""" setup any neccessary resources ahead of the test run. """
self.config.hook.pytest_sessionstart(session=self)
def pytest_runtest_logreport(self, rep):
if rep.failed:
def pytest_runtest_logreport(self, report):
if report.failed:
self._testsfailed = True
if self.config.option.exitfirst:
self.shouldstop = True
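The session hunk above marks the run as failed on the first failing report and, when `--exitfirst` is given, asks the main loop to stop. A self-contained sketch of that behaviour — the class and attribute names are illustrative, not the actual py.test `Session`:

```python
class MiniSession(object):
    def __init__(self, exitfirst=False):
        self.exitfirst = exitfirst
        self._testsfailed = False
        self.shouldstop = False

    def pytest_runtest_logreport(self, report):
        # Any failed report marks the run as failed; with exitfirst the
        # collection/run loop is asked to stop as soon as possible.
        if report.failed:
            self._testsfailed = True
            if self.exitfirst:
                self.shouldstop = True

class FailedReport(object):
    # Hypothetical failing report stub.
    failed = True
```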

View File

@@ -67,187 +67,3 @@ class TestGeneralUsage:
"E ImportError: No module named does_not_work",
])
assert result.ret == 1
class TestDistribution:
def test_dist_conftest_options(self, testdir):
p1 = testdir.tmpdir.ensure("dir", 'p1.py')
p1.dirpath("__init__.py").write("")
p1.dirpath("conftest.py").write(py.code.Source("""
print "importing conftest", __file__
import py
Option = py.test.config.Option
option = py.test.config.addoptions("someopt",
Option('--someopt', action="store_true", dest="someopt", default=False))
dist_rsync_roots = ['../dir']
print "added options", option
print "config file seen from conftest", py.test.config
"""))
p1.write(py.code.Source("""
import py, conftest
def test_1():
print "config from test_1", py.test.config
print "conftest from test_1", conftest.__file__
print "test_1: py.test.config.option.someopt", py.test.config.option.someopt
print "test_1: conftest", conftest
print "test_1: conftest.option.someopt", conftest.option.someopt
assert conftest.option.someopt
"""))
result = testdir.runpytest('-d', '--tx=popen', p1, '--someopt')
assert result.ret == 0
extra = result.stdout.fnmatch_lines([
"*1 passed*",
])
def test_manytests_to_one_popen(self, testdir):
p1 = testdir.makepyfile("""
import py
def test_fail0():
assert 0
def test_fail1():
raise ValueError()
def test_ok():
pass
def test_skip():
py.test.skip("hello")
""",
)
result = testdir.runpytest(p1, '-d', '--tx=popen', '--tx=popen')
result.stdout.fnmatch_lines([
"*1*popen*Python*",
"*2*popen*Python*",
"*2 failed, 1 passed, 1 skipped*",
])
assert result.ret == 1
def test_dist_conftest_specified(self, testdir):
p1 = testdir.makepyfile("""
import py
def test_fail0():
assert 0
def test_fail1():
raise ValueError()
def test_ok():
pass
def test_skip():
py.test.skip("hello")
""",
)
testdir.makeconftest("""
pytest_option_tx = 'popen popen popen'.split()
""")
result = testdir.runpytest(p1, '-d')
result.stdout.fnmatch_lines([
"*1*popen*Python*",
"*2*popen*Python*",
"*3*popen*Python*",
"*2 failed, 1 passed, 1 skipped*",
])
assert result.ret == 1
def test_dist_tests_with_crash(self, testdir):
if not hasattr(py.std.os, 'kill'):
py.test.skip("no os.kill")
p1 = testdir.makepyfile("""
import py
def test_fail0():
assert 0
def test_fail1():
raise ValueError()
def test_ok():
pass
def test_skip():
py.test.skip("hello")
def test_crash():
import time
import os
time.sleep(0.5)
os.kill(os.getpid(), 15)
"""
)
result = testdir.runpytest(p1, '-d', '--tx=3*popen')
result.stdout.fnmatch_lines([
"*popen*Python*",
"*popen*Python*",
"*popen*Python*",
"*node down*",
"*3 failed, 1 passed, 1 skipped*"
])
assert result.ret == 1
def test_distribution_rsyncdirs_example(self, testdir):
source = testdir.mkdir("source")
dest = testdir.mkdir("dest")
subdir = source.mkdir("example_pkg")
subdir.ensure("__init__.py")
p = subdir.join("test_one.py")
p.write("def test_5(): assert not __file__.startswith(%r)" % str(p))
result = testdir.runpytest("-d", "--rsyncdir=%(subdir)s" % locals(),
"--tx=popen//chdir=%(dest)s" % locals(), p)
assert result.ret == 0
result.stdout.fnmatch_lines([
"*1* *popen*platform*",
#"RSyncStart: [G1]",
#"RSyncFinished: [G1]",
"*1 passed*"
])
assert dest.join(subdir.basename).check(dir=1)
def test_dist_each(self, testdir):
interpreters = []
for name in ("python2.4", "python2.5"):
interp = py.path.local.sysfind(name)
if interp is None:
py.test.skip("%s not found" % name)
interpreters.append(interp)
testdir.makepyfile(__init__="", test_one="""
import sys
def test_hello():
print "%s...%s" % sys.version_info[:2]
assert 0
""")
args = ["--dist=each"]
args += ["--tx", "popen//python=%s" % interpreters[0]]
args += ["--tx", "popen//python=%s" % interpreters[1]]
result = testdir.runpytest(*args)
result.stdout.fnmatch_lines(["2...4"])
result.stdout.fnmatch_lines(["2...5"])
class TestInteractive:
def test_simple_looponfail_interaction(self, testdir):
p1 = testdir.makepyfile("""
def test_1():
assert 1 == 0
""")
p1.setmtime(p1.mtime() - 50.0)
child = testdir.spawn_pytest("--looponfail %s" % p1)
child.expect("assert 1 == 0")
child.expect("test_simple_looponfail_interaction.py:")
child.expect("1 failed")
child.expect("waiting for changes")
p1.write(py.code.Source("""
def test_1():
assert 1 == 1
"""))
child.expect("MODIFIED.*test_simple_looponfail_interaction.py", timeout=4.0)
child.expect("1 passed", timeout=5.0)
child.kill(15)
class TestKeyboardInterrupt:
def test_raised_in_testfunction(self, testdir):
p1 = testdir.makepyfile("""
import py
def test_fail():
raise ValueError()
def test_inter():
raise KeyboardInterrupt()
""")
result = testdir.runpytest(p1)
result.stdout.fnmatch_lines([
#"*test_inter() INTERRUPTED",
"*KEYBOARD INTERRUPT*",
"*1 failed*",
])

View File

@@ -45,7 +45,7 @@ def main():
'py.svnwcrevert = py.cmdline:pysvnwcrevert',
'py.test = py.cmdline:pytest',
'py.which = py.cmdline:pywhich']},
classifiers=['Development Status :: 4 - Beta',
classifiers=['Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: POSIX',