* rename "rep" to "report" in reporting hooks
* refine docs
* bump version data
* improve announcement

--HG--
branch : 1.0.x
parent 67c4503d1b
commit 8c8617c354
@@ -1,3 +1,9 @@
+Changes between 1.0.0b9 and 1.0.0
+=====================================
+
+* more terse reporting try to show filesystem path relatively to current dir
+* improve xfail output a bit
+
 Changes between 1.0.0b8 and 1.0.0b9
 =====================================
 
MANIFEST
@@ -60,7 +60,9 @@ example/execnet/svn-sync-repo.py
 example/execnet/sysinfo.py
 example/funcarg/conftest.py
 example/funcarg/costlysetup/conftest.py
+example/funcarg/costlysetup/sub1/__init__.py
 example/funcarg/costlysetup/sub1/test_quick.py
+example/funcarg/costlysetup/sub2/__init__.py
 example/funcarg/costlysetup/sub2/test_two.py
 example/funcarg/mysetup/__init__.py
 example/funcarg/mysetup/conftest.py
@@ -1,54 +1,63 @@
-py.test / py lib 1.0.0: new test plugins, funcargs and cleanups
-============================================================================
+pylib 1.0.0 released: testing-with-python innovations continue
+--------------------------------------------------------------------
 
-Welcome to the 1.0 release bringing new flexibility and
-power to testing with Python. Main news:
+Took a few betas but finally i uploaded a `1.0.0 py lib release`_,
+featuring the mature and powerful py.test tool and "execnet-style"
+*elastic* distributed programming.  With the new release, there are
+many new advanced automated testing features - here is a quick summary:
 
-* funcargs - new flexibilty and zero-boilerplate fixtures for Python testing:
+* funcargs_ - pythonic zero-boilerplate fixtures for Python test functions:
 
-  - separate test code, configuration and setup
+  - totally separates test code, test configuration and test setup
   - ideal for integration and functional tests
-  - more powerful dynamic generation of tests
+  - allows for flexible and natural test parametrization schemes
 
-* new plugin architecture, allowing project-specific and
-  cross-project single-file plugins. Many useful examples
-  shipped by default:
+* new `plugin architecture`_, allowing easy-to-write project-specific and
+  cross-project single-file plugins.  The most notable new external plugin
+  is `oejskit`_ which naturally enables **running and reporting of
+  javascript-unittests in real-life browsers**.
 
-  * pytest_unittest.py: run and integrate traditional unittest.py tests
-  * pytest_xfail.py: mark tests as "expected to fail" and report separately.
-  * pytest_pocoo.py: automatically send tracebacks to pocoo paste service
-  * pytest_monkeypatch.py: safely monkeypatch from tests
-  * pytest_figleaf.py: generate html coverage reports
-  * pytest_resultlog.py: generate buildbot-friendly reporting output
-  and many more!
+* many new features done in easy-to-improve `default plugins`_, highlights:
+
+  * xfail: mark tests as "expected to fail" and report separately.
+  * pastebin: automatically send tracebacks to pocoo paste service
+  * capture: flexibly capture stdout/stderr of subprocesses, per-test ...
+  * monkeypatch: safely monkeypatch modules/classes from within tests
+  * unittest: run and integrate traditional unittest.py tests
+  * figleaf: generate html coverage reports with the figleaf module
+  * resultlog: generate buildbot-friendly reporting output
+  * ...
 
-* distributed testing and distributed execution (py.execnet):
+* `distributed testing`_ and `elastic distributed execution`_:
 
-  - new unified "TX" URL scheme for specifying remote resources
-  - new sync/async ways to handle multiple remote processes
+  - new unified "TX" URL scheme for specifying remote processes
+  - new distribution modes "--dist=each" and "--dist=load"
+  - new sync/async ways to handle 1:N communication
   - improved documentation
 
-See the py.test and py lib documentation for more info:
+The py lib continues to offer most of the functionality used by
+the testing tool in `independent namespaces`_.
+
+Some non-test related code, notably greenlets/co-routines and
+api-generation now live as their own projects which simplifies the
+installation procedure because no C-Extensions are required anymore.
+
+The whole package should work well with Linux, Win32 and OSX, on Python
+2.3, 2.4, 2.5 and 2.6.  (Expect Python3 compatibility soon!)
+
+For more info, see the py.test and py lib documentation:
 
     http://pytest.org
     http://pylib.org
 
-The py lib now is smaller and focuses more on offering
-functionality used by the py.test tool in independent
-namespaces:
-
-* py.execnet: elastic code deployment to SSH, Socket and local sub processes
-* py.code: higher-level introspection and dynamic generation of python code
-* py.path: path abstractions over local and subversion files
-
-Some non-strictly-test related code, notably greenlets/co-routines
-and apigen now live on their own and have been removed, also simplifying
-the installation procedures.
-
-The whole package works well with Linux, OSX and Win32, on
-Python 2.3, 2.4, 2.5 and 2.6. (Expect Python3 compatibility soon!)
-
-best,
+have fun,
 holger
+
+.. _`independent namespaces`: http://pylib.org
+.. _`funcargs`: http://codespeak.net/py/dist/test/funcargs.html
+.. _`plugin architecture`: http://codespeak.net/py/dist/test/extend.html
+.. _`default plugins`: http://codespeak.net/py/dist/test/plugin/index.html
+.. _`distributed testing`: http://codespeak.net/py/dist/test/dist.html
+.. _`elastic distributed execution`: http://codespeak.net/py/dist/execnet.html
+.. _`1.0.0 py lib release`: http://pypi.python.org/pypi/py
+.. _`oejskit`: http://codespeak.net/py/dist/test/plugin/oejskit.html
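[editor's note] The release text above names the new distribution modes only in passing; their command-line shape can be read off the acceptance tests further down in this commit. A hedged sketch (file names illustrative; the flags -d, --tx=popen, --dist=each and popen//python=... all appear verbatim in the tests below)::

    py.test -d --tx=popen --tx=popen test_one.py
    py.test --dist=each --tx popen//python=python2.4 --tx popen//python=python2.5 test_one.py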
@@ -14,6 +14,17 @@ class css:
 class Page(object):
     doctype = ('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"'
                ' "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n')
+    googlefragment = """
+<script type="text/javascript">
+var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
+document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
+</script>
+<script type="text/javascript">
+try {
+var pageTracker = _gat._getTracker("UA-7597274-3");
+pageTracker._trackPageview();
+} catch(err) {}</script>
+"""
 
     def __init__(self, project, title, targetpath, stylesheeturl=None,
                  type="text/html", encoding="ISO-8859-1"):
@@ -47,8 +58,10 @@ class Page(object):
     def fill_menubar(self):
         items = [
             self.a_docref("pylib index", "index.html"),
-            self.a_docref("py.test index", "test/test.html"),
-            self.a_docref("py.test plugins", "test/plugin/index.html"),
+            self.a_docref("test doc-index", "test/test.html"),
+            self.a_docref("test quickstart", "test/quickstart.html"),
+            self.a_docref("test features", "test/features.html"),
+            self.a_docref("test plugins", "test/plugin/index.html"),
             self.a_docref("py.execnet", "execnet.html"),
             #self.a_docref("py.code", "code.html"),
             #self.a_apigenref("api", "api/index.html"),
@@ -91,6 +104,7 @@ class Page(object):
 
     def unicode(self, doctype=True):
         page = self._root.unicode()
+        page = page.replace("</body>", self.googlefragment + "</body>")
        if doctype:
             return self.doctype + page
         else:
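[editor's note] The three confrest.py hunks above combine as follows: the class grows a googlefragment string, and unicode() splices it in just before the closing body tag. A minimal usage sketch (the project object and targetpath come from elsewhere in confrest.py and are assumed here):

    # illustrative only -- render a page and check the injected fragment
    page = Page(project, "pylib index", targetpath)  # 'project' assumed
    html = page.unicode()            # doctype + markup, analytics spliced in
    assert page.googlefragment in html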
@@ -27,7 +27,7 @@ Other (minor) support functionality
 `miscellaneous features`_ describes some small but nice py lib features.
 
 
-.. _`PyPI project page`: http://pypi.python.org/pypi?%3Aaction=pkg_edit&name=py
+.. _`PyPI project page`: http://pypi.python.org/pypi/py/
 
 For the latest Release, see `PyPI project page`_
 
@@ -3,19 +3,21 @@
 ==========================================================
 
 Since version 1.0 py.test features the "funcarg" mechanism which
-allows a test function to take arguments which will be independently
-provided by factory functions. Factory functions are automatically
-discovered and allow to encapsulate all neccessary setup and glue code
-for running tests. Compared to `xUnit style`_ the new mechanism is
-meant to:
+allows a test function to take arguments independently provided
+by factory functions.  Factory functions allow to encapsulate
+all setup and fixture glue code into nicely separated objects
+and provide a natural way for writing python test functions.
+Compared to `xUnit style`_ the new mechanism is meant to:
 
 * make test functions easier to write and to read
 * isolate test fixture creation to a single place
 * bring new flexibility and power to test state management
-* enable running of a test function with different values
-  with multiple argument sets
+* naturally extend towards parametrizing test functions
   (superseding `old-style generative tests`_)
-* to enable creation of helper objects that interact with the execution
-  of a test function, see the `blog post about the monkeypatch funcarg`_.
+* enable creation of zero-boilerplate test helper objects that
+  interact with the execution of a test function, see the
+  `blog post about the monkeypatch funcarg`_.
 
 If you find issues or have further suggestions for improving
 the mechanism you are welcome to checkout `contact possibilities`_ page.
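[editor's note] A minimal sketch of the funcarg mechanism this page describes, using the 1.0-era pytest_funcarg__NAME factory convention (MyApp and its methods are illustrative, not from this commit):

    # conftest.py -- factory discovered by name, provides the "mysetup" argument
    def pytest_funcarg__mysetup(request):
        app = MyApp()                          # illustrative costly setup
        request.addfinalizer(app.shutdown)     # teardown tied to this request
        return app

    # test_app.py -- the test just names the argument; no setup boilerplate
    def test_answer(mysetup):
        assert mysetup.query() == 42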
@@ -39,7 +39,7 @@ hook specification sourcecode
 def pytest_collectstart(collector):
     """ collector starts collecting. """
 
-def pytest_collectreport(rep):
+def pytest_collectreport(report):
     """ collector finished collecting. """
 
 def pytest_deselected(items):
@@ -89,7 +89,7 @@ hook specification sourcecode
     """ make ItemTestReport for the given item and call outcome. """
 pytest_runtest_makereport.firstresult = True
 
-def pytest_runtest_logreport(rep):
+def pytest_runtest_logreport(report):
     """ process item test report. """
 
 # special handling for final teardown - somewhat internal for now
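[editor's note] Because hooks are invoked with keyword arguments, the parameter name is part of the contract: after this commit a plugin must spell the argument report. A minimal single-file plugin sketch against the renamed hooks (the tallying logic is illustrative):

    # conftest.py -- collect failing reports via the renamed hooks
    failed = []

    def pytest_runtest_logreport(report):
        if report.failed:          # reports carry .passed/.failed/.skipped
            failed.append(report)

    def pytest_collectreport(report):
        if not report.passed:
            failed.append(report)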
@@ -1,33 +1,33 @@
 .. _`terminal`: terminal.html
-.. _`pytest_recwarn.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_recwarn.py
+.. _`pytest_recwarn.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_recwarn.py
 .. _`unittest`: unittest.html
-.. _`pytest_monkeypatch.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_monkeypatch.py
+.. _`pytest_monkeypatch.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_monkeypatch.py
-.. _`pytest_keyword.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_keyword.py
+.. _`pytest_keyword.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_keyword.py
 .. _`pastebin`: pastebin.html
 .. _`plugins`: index.html
-.. _`pytest_capture.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_capture.py
+.. _`pytest_capture.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_capture.py
-.. _`pytest_doctest.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_doctest.py
+.. _`pytest_doctest.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_doctest.py
 .. _`capture`: capture.html
 .. _`hooklog`: hooklog.html
-.. _`pytest_restdoc.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_restdoc.py
+.. _`pytest_restdoc.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_restdoc.py
-.. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_hooklog.py
+.. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_hooklog.py
-.. _`pytest_pastebin.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_pastebin.py
+.. _`pytest_pastebin.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_pastebin.py
-.. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_figleaf.py
+.. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_figleaf.py
 .. _`xfail`: xfail.html
 .. _`contact`: ../../contact.html
 .. _`checkout the py.test development version`: ../../download.html#checkout
 .. _`oejskit`: oejskit.html
-.. _`pytest_xfail.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_xfail.py
+.. _`pytest_xfail.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_xfail.py
 .. _`figleaf`: figleaf.html
 .. _`extend`: ../extend.html
-.. _`pytest_terminal.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_terminal.py
+.. _`pytest_terminal.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_terminal.py
 .. _`recwarn`: recwarn.html
-.. _`pytest_pdb.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_pdb.py
+.. _`pytest_pdb.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_pdb.py
 .. _`monkeypatch`: monkeypatch.html
 .. _`resultlog`: resultlog.html
 .. _`keyword`: keyword.html
 .. _`restdoc`: restdoc.html
-.. _`pytest_unittest.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_unittest.py
+.. _`pytest_unittest.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_unittest.py
 .. _`doctest`: doctest.html
-.. _`pytest_resultlog.py`: http://bitbucket.org/hpk42/py-trunk/raw/4ac3aa2d7ea5f3fdcb5a28d4ca70040d9180ef04/py/test/plugin/pytest_resultlog.py
+.. _`pytest_resultlog.py`: http://bitbucket.org/hpk42/py-trunk/raw/3b3ea41060652c47739450a590c4d71625bc05bd/py/test/plugin/pytest_resultlog.py
 .. _`pdb`: pdb.html
@@ -0,0 +1 @@
+#
@@ -0,0 +1 @@
+#
@@ -32,7 +32,7 @@ initpkg(__name__,
     author_email = "holger at merlinux.eu, py-dev at codespeak.net",
     long_description = globals()['__doc__'],
     classifiers = [
-        "Development Status :: 4 - Beta",
+        "Development Status :: 5 - Stable",
         "Intended Audience :: Developers",
         "License :: OSI Approved :: MIT License",
         "Operating System :: POSIX",
@@ -34,16 +34,16 @@ class LoopState(object):
         return "<LoopState exitstatus=%r shuttingdown=%r len(colitems)=%d>" % (
             self.exitstatus, self.shuttingdown, len(self.colitems))
 
-    def pytest_runtest_logreport(self, rep):
-        if rep.item in self.dsession.item2nodes:
-            if rep.when != "teardown": # otherwise we have already managed it
-                self.dsession.removeitem(rep.item, rep.node)
-        if rep.failed:
+    def pytest_runtest_logreport(self, report):
+        if report.item in self.dsession.item2nodes:
+            if report.when != "teardown": # otherwise we already managed it
+                self.dsession.removeitem(report.item, report.node)
+        if report.failed:
             self.testsfailed = True
 
-    def pytest_collectreport(self, rep):
-        if rep.passed:
-            self.colitems.extend(rep.result)
+    def pytest_collectreport(self, report):
+        if report.passed:
+            self.colitems.extend(report.result)
 
     def pytest_testnodeready(self, node):
         self.dsession.addnode(node)
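[editor's note] Why the call sites change together with the signatures: hook dispatch matches the caller's keyword arguments against each plugin function's parameter names, so rep= and report= are not interchangeable. A simplified sketch of that matching (not the py lib's actual dispatcher):

    def call_hook(plugins, name, **kwargs):
        # keyword names must equal the hook function's parameter names;
        # a plugin still declaring "rep" raises TypeError after this rename
        for plugin in plugins:
            method = getattr(plugin, name, None)
            if method is not None:
                method(**kwargs)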
@@ -199,7 +199,7 @@ class DSession(Session):
         else:
             self.config.hook.pytest_collectstart(collector=next)
             colrep = self.config.hook.pytest_make_collect_report(collector=next)
-            self.queueevent("pytest_collectreport", rep=colrep)
+            self.queueevent("pytest_collectreport", report=colrep)
         if self.config.option.dist == "each":
             self.senditems_each(senditems)
         else:
@@ -267,7 +267,7 @@ class DSession(Session):
         info = "!!! Node %r crashed during running of test %r" %(node, item)
         rep = runner.ItemTestReport(item=item, excinfo=info, when="???")
         rep.node = node
-        self.config.hook.pytest_runtest_logreport(rep=rep)
+        self.config.hook.pytest_runtest_logreport(report=rep)
 
     def setup(self):
         """ setup any neccessary resources ahead of the test run. """
@@ -1,242 +1,5 @@
 import py
 
-EXPECTTIMEOUT=10.0
-
-class TestGeneralUsage:
-    def test_config_error(self, testdir):
-        testdir.makeconftest("""
-            def pytest_configure(config):
-                raise config.Error("hello")
-        """)
-        result = testdir.runpytest(testdir.tmpdir)
-        assert result.ret != 0
-        assert result.stderr.fnmatch_lines([
-            '*ERROR: hello'
-        ])
-
-    def test_config_preparse_plugin_option(self, testdir):
-        testdir.makepyfile(pytest_xyz="""
-            def pytest_addoption(parser):
-                parser.addoption("--xyz", dest="xyz", action="store")
-        """)
-        testdir.makepyfile(test_one="""
-            import py
-            def test_option():
-                assert py.test.config.option.xyz == "123"
-        """)
-        result = testdir.runpytest("-p", "xyz", "--xyz=123")
-        assert result.ret == 0
-        assert result.stdout.fnmatch_lines([
-            '*1 passed*',
-        ])
-
-    def test_basetemp(self, testdir):
-        mytemp = testdir.tmpdir.mkdir("mytemp")
-        p = testdir.makepyfile("""
-            import py
-            def test_1():
-                py.test.ensuretemp('xyz')
-        """)
-        result = testdir.runpytest(p, '--basetemp=%s' %mytemp)
-        assert result.ret == 0
-        assert mytemp.join('xyz').check(dir=1)
-
-    def test_assertion_magic(self, testdir):
-        p = testdir.makepyfile("""
-            def test_this():
-                x = 0
-                assert x
-        """)
-        result = testdir.runpytest(p)
-        extra = result.stdout.fnmatch_lines([
-            "> assert x",
-            "E assert 0",
-        ])
-        assert result.ret == 1
-
-    def test_nested_import_error(self, testdir):
-        p = testdir.makepyfile("""
-            import import_fails
-            def test_this():
-                assert import_fails.a == 1
-        """)
-        testdir.makepyfile(import_fails="import does_not_work")
-        result = testdir.runpytest(p)
-        extra = result.stdout.fnmatch_lines([
-            "> import import_fails",
-            "E ImportError: No module named does_not_work",
-        ])
-        assert result.ret == 1
-
-    def test_skipped_reasons(self, testdir):
-        testdir.makepyfile(
-            test_one="""
-                from conftest import doskip
-                def setup_function(func):
-                    doskip()
-                def test_func():
-                    pass
-                class TestClass:
-                    def test_method(self):
-                        doskip()
-            """,
-            test_two = """
-                from conftest import doskip
-                doskip()
-            """,
-            conftest = """
-                import py
-                def doskip():
-                    py.test.skip('test')
-            """
-        )
-        result = testdir.runpytest()
-        extra = result.stdout.fnmatch_lines([
-            "*test_one.py ss",
-            "*test_two.py S",
-            "___* skipped test summary *_",
-            "*conftest.py:3: *3* Skipped: 'test'",
-        ])
-        assert result.ret == 0
-
-    def test_deselected(self, testdir):
-        testpath = testdir.makepyfile("""
-            def test_one():
-                pass
-            def test_two():
-                pass
-            def test_three():
-                pass
-        """
-        )
-        result = testdir.runpytest("-k", "test_two:", testpath)
-        extra = result.stdout.fnmatch_lines([
-            "*test_deselected.py ..",
-            "=* 1 test*deselected by 'test_two:'*=",
-        ])
-        assert result.ret == 0
-
-    def test_no_skip_summary_if_failure(self, testdir):
-        testdir.makepyfile("""
-            import py
-            def test_ok():
-                pass
-            def test_fail():
-                assert 0
-            def test_skip():
-                py.test.skip("dontshow")
-        """)
-        result = testdir.runpytest()
-        assert result.stdout.str().find("skip test summary") == -1
-        assert result.ret == 1
-
-    def test_passes(self, testdir):
-        p1 = testdir.makepyfile("""
-            def test_passes():
-                pass
-            class TestClass:
-                def test_method(self):
-                    pass
-        """)
-        old = p1.dirpath().chdir()
-        try:
-            result = testdir.runpytest()
-        finally:
-            old.chdir()
-        extra = result.stdout.fnmatch_lines([
-            "test_passes.py ..",
-            "* 2 pass*",
-        ])
-        assert result.ret == 0
-
-    def test_header_trailer_info(self, testdir):
-        p1 = testdir.makepyfile("""
-            def test_passes():
-                pass
-        """)
-        result = testdir.runpytest()
-        verinfo = ".".join(map(str, py.std.sys.version_info[:3]))
-        extra = result.stdout.fnmatch_lines([
-            "*===== test session starts ====*",
-            "python: platform %s -- Python %s*" %(
-                py.std.sys.platform, verinfo), # , py.std.sys.executable),
-            "*test_header_trailer_info.py .",
-            "=* 1 passed in *.[0-9][0-9] seconds *=",
-        ])
-
-    def test_traceback_failure(self, testdir):
-        p1 = testdir.makepyfile("""
-            def g():
-                return 2
-            def f(x):
-                assert x == g()
-            def test_onefails():
-                f(3)
-        """)
-        result = testdir.runpytest(p1)
-        result.stdout.fnmatch_lines([
-            "*test_traceback_failure.py F",
-            "====* FAILURES *====",
-            "____*____",
-            "",
-            " def test_onefails():",
-            "> f(3)",
-            "",
-            "*test_*.py:6: ",
-            "_ _ _ *",
-            #"",
-            " def f(x):",
-            "> assert x == g()",
-            "E assert 3 == 2",
-            "E + where 2 = g()",
-            "",
-            "*test_traceback_failure.py:4: AssertionError"
-        ])
-
-    def test_showlocals(self, testdir):
-        p1 = testdir.makepyfile("""
-            def test_showlocals():
-                x = 3
-                y = "x" * 5000
-                assert 0
-        """)
-        result = testdir.runpytest(p1, '-l')
-        result.stdout.fnmatch_lines([
-            #"_ _ * Locals *",
-            "x* = 3",
-            "y* = 'xxxxxx*"
-        ])
-
-    def test_verbose_reporting(self, testdir):
-        p1 = testdir.makepyfile("""
-            import py
-            def test_fail():
-                raise ValueError()
-            def test_pass():
-                pass
-            class TestClass:
-                def test_skip(self):
-                    py.test.skip("hello")
-            def test_gen():
-                def check(x):
-                    assert x == 1
-                yield check, 0
-        """)
-        result = testdir.runpytest(p1, '-v')
-        result.stdout.fnmatch_lines([
-            "*test_verbose_reporting.py:2: test_fail*FAIL*",
-            "*test_verbose_reporting.py:4: test_pass*PASS*",
-            "*test_verbose_reporting.py:7: TestClass.test_skip*SKIP*",
-            "*test_verbose_reporting.py:10: test_gen*FAIL*",
-        ])
-        assert result.ret == 1
-        result = testdir.runpytest(p1, '-v', '-n 1')
-        result.stdout.fnmatch_lines([
-            "*FAIL*test_verbose_reporting.py:2: test_fail*",
-        ])
-        assert result.ret == 1
-
 class TestDistribution:
     def test_dist_conftest_options(self, testdir):
         p1 = testdir.tmpdir.ensure("dir", 'p1.py')
@@ -383,40 +146,3 @@ class TestDistribution:
         result.stdout.fnmatch_lines(["2...4"])
         result.stdout.fnmatch_lines(["2...5"])
-
-
-class TestInteractive:
-    def test_simple_looponfail_interaction(self, testdir):
-        p1 = testdir.makepyfile("""
-            def test_1():
-                assert 1 == 0
-        """)
-        p1.setmtime(p1.mtime() - 50.0)
-        child = testdir.spawn_pytest("--looponfail %s" % p1)
-        child.expect("assert 1 == 0")
-        child.expect("test_simple_looponfail_interaction.py:")
-        child.expect("1 failed")
-        child.expect("waiting for changes")
-        p1.write(py.code.Source("""
-            def test_1():
-                assert 1 == 1
-        """))
-        child.expect("MODIFIED.*test_simple_looponfail_interaction.py", timeout=4.0)
-        child.expect("1 passed", timeout=5.0)
-        child.kill(15)
-
-class TestKeyboardInterrupt:
-    def test_raised_in_testfunction(self, testdir):
-        p1 = testdir.makepyfile("""
-            import py
-            def test_fail():
-                raise ValueError()
-            def test_inter():
-                raise KeyboardInterrupt()
-        """)
-        result = testdir.runpytest(p1)
-        result.stdout.fnmatch_lines([
-            #"*test_inter() INTERRUPTED",
-            "*KEYBOARD INTERRUPT*",
-            "*1 failed*",
-        ])
@@ -81,8 +81,8 @@ class TestDSession:
         session.triggertesting([modcol])
         name, args, kwargs = session.queue.get(block=False)
         assert name == 'pytest_collectreport'
-        rep = kwargs['rep']
-        assert len(rep.result) == 1
+        report = kwargs['report']
+        assert len(report.result) == 1
 
     def test_triggertesting_item(self, testdir):
         item = testdir.getitem("def test_func(): pass")
@@ -134,7 +134,7 @@ class TestDSession:
         session.queueevent(None)
         session.loop_once(loopstate)
         assert node.sent == [[item]]
-        session.queueevent("pytest_runtest_logreport", rep=run(item, node))
+        session.queueevent("pytest_runtest_logreport", report=run(item, node))
         session.loop_once(loopstate)
         assert loopstate.shuttingdown
         assert not loopstate.testsfailed
@@ -182,7 +182,7 @@ class TestDSession:
         item = item1
         node = nodes[0]
         when = "call"
-        session.queueevent("pytest_runtest_logreport", rep=rep)
+        session.queueevent("pytest_runtest_logreport", report=rep)
         reprec = testdir.getreportrecorder(session)
         print session.item2nodes
         loopstate = session._initloopstate([])
@@ -190,7 +190,7 @@ class TestDSession:
         session.loop_once(loopstate)
         assert len(session.item2nodes[item1]) == 1
         rep.when = "teardown"
-        session.queueevent("pytest_runtest_logreport", rep=rep)
+        session.queueevent("pytest_runtest_logreport", report=rep)
         session.loop_once(loopstate)
         assert len(session.item2nodes[item1]) == 1
 
@@ -249,7 +249,7 @@ class TestDSession:
 
         assert node.sent == [[item]]
         ev = run(item, node, excinfo=excinfo)
-        session.queueevent("pytest_runtest_logreport", rep=ev)
+        session.queueevent("pytest_runtest_logreport", report=ev)
         session.loop_once(loopstate)
         assert loopstate.shuttingdown
         session.queueevent("pytest_testnodedown", node=node, error=None)
@@ -286,8 +286,8 @@ class TestDSession:
         # run tests ourselves and produce reports
         ev1 = run(items[0], node, "fail")
         ev2 = run(items[1], node, None)
-        session.queueevent("pytest_runtest_logreport", rep=ev1) # a failing one
-        session.queueevent("pytest_runtest_logreport", rep=ev2)
+        session.queueevent("pytest_runtest_logreport", report=ev1) # a failing one
+        session.queueevent("pytest_runtest_logreport", report=ev2)
         # now call the loop
         loopstate = session._initloopstate(items)
         session.loop_once(loopstate)
@@ -302,7 +302,7 @@ class TestDSession:
         loopstate = session._initloopstate([])
         loopstate.shuttingdown = True
         reprec = testdir.getreportrecorder(session)
-        session.queueevent("pytest_runtest_logreport", rep=run(item, node))
+        session.queueevent("pytest_runtest_logreport", report=run(item, node))
         session.loop_once(loopstate)
         assert not reprec.getcalls("pytest_testnodedown")
         session.queueevent("pytest_testnodedown", node=node, error=None)
@@ -343,7 +343,7 @@ class TestDSession:
         node = MockNode()
         session.addnode(node)
         session.senditems_load([item])
-        session.queueevent("pytest_runtest_logreport", rep=run(item, node))
+        session.queueevent("pytest_runtest_logreport", report=run(item, node))
         loopstate = session._initloopstate([])
         session.loop_once(loopstate)
         assert node._shutdown is True
@@ -369,10 +369,10 @@ class TestDSession:
         session.senditems_load([item1])
         # node2pending will become empty when the loop sees the report
         rep = run(item1, node)
-        session.queueevent("pytest_runtest_logreport", rep=run(item1, node))
+        session.queueevent("pytest_runtest_logreport", report=run(item1, node))
 
         # but we have a collection pending
-        session.queueevent("pytest_collectreport", rep=colreport)
+        session.queueevent("pytest_collectreport", report=colreport)
 
         loopstate = session._initloopstate([])
         session.loop_once(loopstate)
@@ -396,11 +396,11 @@ class TestDSession:
         dsession = DSession(config)
         hookrecorder = testdir.getreportrecorder(config).hookrecorder
         dsession.main([config.getfsnode(p1)])
-        rep = hookrecorder.popcall("pytest_runtest_logreport").rep
+        rep = hookrecorder.popcall("pytest_runtest_logreport").report
         assert rep.passed
-        rep = hookrecorder.popcall("pytest_runtest_logreport").rep
+        rep = hookrecorder.popcall("pytest_runtest_logreport").report
         assert rep.skipped
-        rep = hookrecorder.popcall("pytest_runtest_logreport").rep
+        rep = hookrecorder.popcall("pytest_runtest_logreport").report
         assert rep.failed
         # see that the node is really down
         node = hookrecorder.popcall("pytest_testnodedown").node
@@ -115,7 +115,7 @@ class TestMasterSlaveConnection:
         node = mysetup.makenode(item.config)
         node.send(item)
         kwargs = mysetup.geteventargs("pytest_runtest_logreport")
-        rep = kwargs['rep']
+        rep = kwargs['report']
         assert rep.passed
         print rep
         assert rep.item == item
@@ -135,10 +135,10 @@ class TestMasterSlaveConnection:
         node.send(item)
         for outcome in "passed failed skipped".split():
             kwargs = mysetup.geteventargs("pytest_runtest_logreport")
-            rep = kwargs['rep']
-            assert getattr(rep, outcome)
+            report = kwargs['report']
+            assert getattr(report, outcome)
 
         node.sendlist(items)
         for outcome in "passed failed skipped".split():
-            rep = mysetup.geteventargs("pytest_runtest_logreport")['rep']
+            rep = mysetup.geteventargs("pytest_runtest_logreport")['report']
             assert getattr(rep, outcome)
@@ -56,9 +56,9 @@ class TXNode(object):
             self._down = True
             self.notify("pytest_testnodedown", error=None, node=self)
         elif eventname == "pytest_runtest_logreport":
-            rep = kwargs['rep']
+            rep = kwargs['report']
             rep.node = self
-            self.notify("pytest_runtest_logreport", rep=rep)
+            self.notify("pytest_runtest_logreport", report=rep)
         else:
             self.notify(eventname, *args, **kwargs)
     except KeyboardInterrupt:
@@ -110,8 +110,8 @@ class SlaveNode(object):
     def sendevent(self, eventname, *args, **kwargs):
         self.channel.send((eventname, args, kwargs))
 
-    def pytest_runtest_logreport(self, rep):
-        self.sendevent("pytest_runtest_logreport", rep=rep)
+    def pytest_runtest_logreport(self, report):
+        self.sendevent("pytest_runtest_logreport", report=report)
 
     def run(self):
         channel = self.channel
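[editor's note] The master/slave wire format visible here is one (eventname, args, kwargs) tuple per event on the execnet channel. A simplified sketch of the receiving side, mirroring the TXNode hunk above (not the actual loop):

    def receive_events(channel, node, notify):
        # unpack the tuples that SlaveNode.sendevent() puts on the channel
        while not channel.isclosed():
            eventname, args, kwargs = channel.receive()
            if eventname == "pytest_runtest_logreport":
                kwargs["report"].node = node   # tag report with its origin node
            notify(eventname, *args, **kwargs)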
@@ -137,9 +137,9 @@ def slave_runsession(channel, config, fullwidth, hasmarkup):
     session.shouldclose = channel.isclosed
 
     class Failures(list):
-        def pytest_runtest_logreport(self, rep):
-            if rep.failed:
-                self.append(rep)
+        def pytest_runtest_logreport(self, report):
+            if report.failed:
+                self.append(report)
         pytest_collectreport = pytest_runtest_logreport
 
     failreports = Failures()
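[editor's note] The Failures helper doubles as a plugin: any object whose method names match hook names can be registered. A sketch of using the same idea outside slave_runsession (the register call is an assumption about the plugin manager API, not shown in this diff):

    failreports = Failures()
    config.pluginmanager.register(failreports)   # assumed registration API
    # ... run the session ...
    for report in failreports:                   # plain list subclass
        print report.longrepr                    # py2 print, as in this codebase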
@@ -33,7 +33,7 @@ def pytest_collect_file(path, parent):
 def pytest_collectstart(collector):
     """ collector starts collecting. """
 
-def pytest_collectreport(rep):
+def pytest_collectreport(report):
     """ collector finished collecting. """
 
 def pytest_deselected(items):
@@ -83,7 +83,7 @@ def pytest_runtest_makereport(item, call):
     """ make ItemTestReport for the given item and call outcome. """
 pytest_runtest_makereport.firstresult = True
 
-def pytest_runtest_logreport(rep):
+def pytest_runtest_logreport(report):
     """ process item test report. """
 
 # special handling for final teardown - somewhat internal for now
@@ -132,8 +132,8 @@ class TestDoctests:
         """)
         reprec = testdir.inline_run(p)
         call = reprec.getcall("pytest_runtest_logreport")
-        assert call.rep.failed
-        assert call.rep.longrepr
+        assert call.report.failed
+        assert call.report.longrepr
         # XXX
         #testitem, = items
         #excinfo = py.test.raises(Failed, "testitem.runtest()")
@@ -341,7 +341,7 @@ class ReportRecorder(object):
     # functionality for test reports
 
     def getreports(self, names="pytest_runtest_logreport pytest_collectreport"):
-        return [x.rep for x in self.getcalls(names)]
+        return [x.report for x in self.getcalls(names)]
 
     def matchreport(self, inamepart="", names="pytest_runtest_logreport pytest_collectreport"):
         """ return a testreport whose dotted import path matches """
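[editor's note] ReportRecorder is the test-side consumer of these hook calls; its usage elsewhere in this commit suggests the following sketch:

    reprec = testdir.getreportrecorder(session)
    # ... drive the session under test ...
    reports = reprec.getreports()    # reads call.report after this rename
    failures = reprec.getfailures()  # as exercised in test_reportrecorder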
@@ -406,7 +406,7 @@ def test_reportrecorder(testdir):
     skipped = False
     when = "call"
 
-    recorder.hook.pytest_runtest_logreport(rep=rep)
+    recorder.hook.pytest_runtest_logreport(report=rep)
     failures = recorder.getfailures()
     assert failures == [rep]
     failures = recorder.getfailures()
@@ -420,14 +420,14 @@ def test_reportrecorder(testdir):
     when = "call"
     rep.passed = False
     rep.skipped = True
-    recorder.hook.pytest_runtest_logreport(rep=rep)
+    recorder.hook.pytest_runtest_logreport(report=rep)
 
     modcol = testdir.getmodulecol("")
     rep = modcol.config.hook.pytest_make_collect_report(collector=modcol)
     rep.passed = False
     rep.failed = True
     rep.skipped = False
-    recorder.hook.pytest_collectreport(rep=rep)
+    recorder.hook.pytest_collectreport(report=rep)
 
     passed, skipped, failed = recorder.listoutcomes()
     assert not passed and skipped and failed
@@ -440,7 +440,7 @@ def test_reportrecorder(testdir):
 
     recorder.unregister()
     recorder.clear()
-    recorder.hook.pytest_runtest_logreport(rep=rep)
+    recorder.hook.pytest_runtest_logreport(report=rep)
     py.test.raises(ValueError, "recorder.getfailures()")
 
 class LineComp:
@@ -59,25 +59,25 @@ class ResultLog(object):
         testpath = generic_path(node)
         self.write_log_entry(testpath, shortrepr, longrepr)
 
-    def pytest_runtest_logreport(self, rep):
-        code = rep.shortrepr
-        if rep.passed:
+    def pytest_runtest_logreport(self, report):
+        code = report.shortrepr
+        if report.passed:
             longrepr = ""
-        elif rep.failed:
-            longrepr = str(rep.longrepr)
-        elif rep.skipped:
-            longrepr = str(rep.longrepr.reprcrash.message)
-        self.log_outcome(rep.item, code, longrepr)
+        elif report.failed:
+            longrepr = str(report.longrepr)
+        elif report.skipped:
+            longrepr = str(report.longrepr.reprcrash.message)
+        self.log_outcome(report.item, code, longrepr)
 
-    def pytest_collectreport(self, rep):
-        if not rep.passed:
-            if rep.failed:
+    def pytest_collectreport(self, report):
+        if not report.passed:
+            if report.failed:
                 code = "F"
             else:
-                assert rep.skipped
+                assert report.skipped
                 code = "S"
-            longrepr = str(rep.longrepr.reprcrash)
-            self.log_outcome(rep.collector, code, longrepr)
+            longrepr = str(report.longrepr.reprcrash)
+            self.log_outcome(report.collector, code, longrepr)
 
     def pytest_internalerror(self, excrepr):
         path = excrepr.reprcrash.path
@@ -40,7 +40,7 @@ def pytest_runtest_protocol(item):
     if item.config.getvalue("boxed"):
         reports = forked_run_report(item)
         for rep in reports:
-            item.config.hook.pytest_runtest_logreport(rep=rep)
+            item.config.hook.pytest_runtest_logreport(report=rep)
     else:
         runtestprotocol(item)
     return True
@@ -89,7 +89,7 @@ def call_and_report(item, when, log=True):
     hook = item.config.hook
     report = hook.pytest_runtest_makereport(item=item, call=call)
     if log and (when == "call" or not report.passed):
-        hook.pytest_runtest_logreport(rep=report)
+        hook.pytest_runtest_logreport(report=report)
     return report
 
 def call_runtest_hook(item, when):
@@ -187,7 +187,8 @@ class TerminalReporter:
     def pytest__teardown_final_logerror(self, rep):
         self.stats.setdefault("error", []).append(rep)
 
-    def pytest_runtest_logreport(self, rep):
+    def pytest_runtest_logreport(self, report):
+        rep = report
         cat, letter, word = self.getcategoryletterword(rep)
         if not letter and not word:
             # probably passed setup/teardown
@@ -212,15 +213,15 @@ class TerminalReporter:
             self._tw.write(" " + line)
         self.currentfspath = -2
 
-    def pytest_collectreport(self, rep):
-        if not rep.passed:
-            if rep.failed:
-                self.stats.setdefault("error", []).append(rep)
-                msg = rep.longrepr.reprcrash.message
-                self.write_fspath_result(rep.collector.fspath, "E")
-            elif rep.skipped:
-                self.stats.setdefault("skipped", []).append(rep)
-                self.write_fspath_result(rep.collector.fspath, "S")
+    def pytest_collectreport(self, report):
+        if not report.passed:
+            if report.failed:
+                self.stats.setdefault("error", []).append(report)
+                msg = report.longrepr.reprcrash.message
+                self.write_fspath_result(report.collector.fspath, "E")
+            elif report.skipped:
+                self.stats.setdefault("skipped", []).append(report)
+                self.write_fspath_result(report.collector.fspath, "S")
 
     def pytest_sessionstart(self, session):
         self.write_sep("=", "test session starts", bold=True)
@@ -417,10 +418,10 @@ class CollectonlyReporter:
     def pytest_itemstart(self, item, node=None):
         self.outindent(item)
 
-    def pytest_collectreport(self, rep):
-        if not rep.passed:
-            self.outindent("!!! %s !!!" % rep.longrepr.reprcrash.message)
-            self._failed.append(rep)
+    def pytest_collectreport(self, report):
+        if not report.passed:
+            self.outindent("!!! %s !!!" % report.longrepr.reprcrash.message)
+            self._failed.append(report)
         self.indent = self.indent[:-len(self.INDENT)]
 
     def pytest_sessionfinish(self, session, exitstatus):
@@ -311,7 +311,7 @@ class TestCollectonly:
             "    <Function 'test_func'>",
         ])
         rep.config.hook.pytest_collectreport(
-            rep=runner.CollectReport(modcol, [], excinfo=None))
+            report=runner.CollectReport(modcol, [], excinfo=None))
         assert rep.indent == indent
 
     def test_collectonly_skipped_module(self, testdir, linecomp):
@@ -45,7 +45,7 @@ class Session(object):
             if rep.passed:
                 for x in self.genitems(rep.result, keywordexpr):
                     yield x
-            self.config.hook.pytest_collectreport(rep=rep)
+            self.config.hook.pytest_collectreport(report=rep)
             if self.shouldstop:
                 break
 
@@ -79,8 +79,8 @@ class Session(object):
         """ setup any neccessary resources ahead of the test run. """
         self.config.hook.pytest_sessionstart(session=self)
 
-    def pytest_runtest_logreport(self, rep):
-        if rep.failed:
+    def pytest_runtest_logreport(self, report):
+        if report.failed:
             self._testsfailed = True
             if self.config.option.exitfirst:
                 self.shouldstop = True
@@ -67,187 +67,3 @@ class TestGeneralUsage:
             "E ImportError: No module named does_not_work",
         ])
         assert result.ret == 1
-
-class TestDistribution:
-    def test_dist_conftest_options(self, testdir):
-        p1 = testdir.tmpdir.ensure("dir", 'p1.py')
-        p1.dirpath("__init__.py").write("")
-        p1.dirpath("conftest.py").write(py.code.Source("""
-            print "importing conftest", __file__
-            import py
-            Option = py.test.config.Option
-            option = py.test.config.addoptions("someopt",
-                Option('--someopt', action="store_true", dest="someopt", default=False))
-            dist_rsync_roots = ['../dir']
-            print "added options", option
-            print "config file seen from conftest", py.test.config
-        """))
-        p1.write(py.code.Source("""
-            import py, conftest
-            def test_1():
-                print "config from test_1", py.test.config
-                print "conftest from test_1", conftest.__file__
-                print "test_1: py.test.config.option.someopt", py.test.config.option.someopt
-                print "test_1: conftest", conftest
-                print "test_1: conftest.option.someopt", conftest.option.someopt
-                assert conftest.option.someopt
-        """))
-        result = testdir.runpytest('-d', '--tx=popen', p1, '--someopt')
-        assert result.ret == 0
-        extra = result.stdout.fnmatch_lines([
-            "*1 passed*",
-        ])
-
-    def test_manytests_to_one_popen(self, testdir):
-        p1 = testdir.makepyfile("""
-            import py
-            def test_fail0():
-                assert 0
-            def test_fail1():
-                raise ValueError()
-            def test_ok():
-                pass
-            def test_skip():
-                py.test.skip("hello")
-        """,
-        )
-        result = testdir.runpytest(p1, '-d', '--tx=popen', '--tx=popen')
-        result.stdout.fnmatch_lines([
-            "*1*popen*Python*",
-            "*2*popen*Python*",
-            "*2 failed, 1 passed, 1 skipped*",
-        ])
-        assert result.ret == 1
-
-    def test_dist_conftest_specified(self, testdir):
-        p1 = testdir.makepyfile("""
-            import py
-            def test_fail0():
-                assert 0
-            def test_fail1():
-                raise ValueError()
-            def test_ok():
-                pass
-            def test_skip():
-                py.test.skip("hello")
-        """,
-        )
-        testdir.makeconftest("""
-            pytest_option_tx = 'popen popen popen'.split()
-        """)
-        result = testdir.runpytest(p1, '-d')
-        result.stdout.fnmatch_lines([
-            "*1*popen*Python*",
-            "*2*popen*Python*",
-            "*3*popen*Python*",
-            "*2 failed, 1 passed, 1 skipped*",
-        ])
-        assert result.ret == 1
-
-    def test_dist_tests_with_crash(self, testdir):
-        if not hasattr(py.std.os, 'kill'):
-            py.test.skip("no os.kill")
-
-        p1 = testdir.makepyfile("""
-            import py
-            def test_fail0():
-                assert 0
-            def test_fail1():
-                raise ValueError()
-            def test_ok():
-                pass
-            def test_skip():
-                py.test.skip("hello")
-            def test_crash():
-                import time
-                import os
-                time.sleep(0.5)
-                os.kill(os.getpid(), 15)
-        """
-        )
-        result = testdir.runpytest(p1, '-d', '--tx=3*popen')
-        result.stdout.fnmatch_lines([
-            "*popen*Python*",
-            "*popen*Python*",
-            "*popen*Python*",
-            "*node down*",
-            "*3 failed, 1 passed, 1 skipped*"
-        ])
-        assert result.ret == 1
-
-    def test_distribution_rsyncdirs_example(self, testdir):
-        source = testdir.mkdir("source")
-        dest = testdir.mkdir("dest")
-        subdir = source.mkdir("example_pkg")
-        subdir.ensure("__init__.py")
-        p = subdir.join("test_one.py")
-        p.write("def test_5(): assert not __file__.startswith(%r)" % str(p))
-        result = testdir.runpytest("-d", "--rsyncdir=%(subdir)s" % locals(),
-            "--tx=popen//chdir=%(dest)s" % locals(), p)
-        assert result.ret == 0
-        result.stdout.fnmatch_lines([
-            "*1* *popen*platform*",
-            #"RSyncStart: [G1]",
-            #"RSyncFinished: [G1]",
-            "*1 passed*"
-        ])
-        assert dest.join(subdir.basename).check(dir=1)
-
-    def test_dist_each(self, testdir):
-        interpreters = []
-        for name in ("python2.4", "python2.5"):
-            interp = py.path.local.sysfind(name)
-            if interp is None:
-                py.test.skip("%s not found" % name)
-            interpreters.append(interp)
-
-        testdir.makepyfile(__init__="", test_one="""
-            import sys
-            def test_hello():
-                print "%s...%s" % sys.version_info[:2]
-                assert 0
-        """)
-        args = ["--dist=each"]
-        args += ["--tx", "popen//python=%s" % interpreters[0]]
-        args += ["--tx", "popen//python=%s" % interpreters[1]]
-        result = testdir.runpytest(*args)
-        result.stdout.fnmatch_lines(["2...4"])
-        result.stdout.fnmatch_lines(["2...5"])
-
-
-class TestInteractive:
-    def test_simple_looponfail_interaction(self, testdir):
-        p1 = testdir.makepyfile("""
-            def test_1():
-                assert 1 == 0
-        """)
-        p1.setmtime(p1.mtime() - 50.0)
-        child = testdir.spawn_pytest("--looponfail %s" % p1)
-        child.expect("assert 1 == 0")
-        child.expect("test_simple_looponfail_interaction.py:")
-        child.expect("1 failed")
-        child.expect("waiting for changes")
-        p1.write(py.code.Source("""
-            def test_1():
-                assert 1 == 1
-        """))
-        child.expect("MODIFIED.*test_simple_looponfail_interaction.py", timeout=4.0)
-        child.expect("1 passed", timeout=5.0)
-        child.kill(15)
-
-class TestKeyboardInterrupt:
-    def test_raised_in_testfunction(self, testdir):
-        p1 = testdir.makepyfile("""
-            import py
-            def test_fail():
-                raise ValueError()
-            def test_inter():
-                raise KeyboardInterrupt()
-        """)
-        result = testdir.runpytest(p1)
-        result.stdout.fnmatch_lines([
-            #"*test_inter() INTERRUPTED",
-            "*KEYBOARD INTERRUPT*",
-            "*1 failed*",
-        ])
setup.py
@@ -45,7 +45,7 @@ def main():
                 'py.svnwcrevert = py.cmdline:pysvnwcrevert',
                 'py.test = py.cmdline:pytest',
                 'py.which = py.cmdline:pywhich']},
-        classifiers=['Development Status :: 4 - Beta',
+        classifiers=['Development Status :: 5 - Stable',
                      'Intended Audience :: Developers',
                      'License :: OSI Approved :: MIT License',
                      'Operating System :: POSIX',