some more refinements to docs
This commit is contained in:
parent fed8f19156
commit 49319ba729
@@ -258,7 +258,7 @@ epub_copyright = u'2010, holger krekel et aliter'
 # Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {'http://docs.python.org/': None}
+intersphinx_mapping = {} # 'http://docs.python.org/': None}
 def setup(app):
     #from sphinx.ext.autodoc import cut_lines
     #app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
@@ -35,13 +35,13 @@ If no path was provided at all the current working directory is used for the loo
 builtin configuration file options
 ----------------------------------------------

-.. confval:: minversion = VERSTRING
+.. confval:: minversion

-    specifies the minimal pytest version that is needed for this test suite.
+    specifies a minimal pytest version needed for running tests.

        minversion = 2.1  # will fail if we run with pytest-2.0

-.. confval:: addopts = OPTS
+.. confval:: addopts

     add the specified ``OPTS`` to the set of command line arguments as if they
     had been specified by the user. Example: if you have this ini file content::
@@ -53,5 +53,28 @@ builtin configuration file options

     py.test --maxfail=2 -rf test_hello.py

-.. _`function arguments`: funcargs.html
+    Default is to add no options.
+
+.. confval:: norecursedirs
+
+    Set the directory basename patterns to avoid when recursing
+    for test discovery.  The individual (fnmatch-style) patterns are
+    applied to the basename of a directory to decide whether to recurse into it.
+    Pattern matching characters::
+
+        *        matches everything
+        ?        matches any single character
+        [seq]    matches any character in seq
+        [!seq]   matches any char not in seq
+
+    Default patterns are ``.* _* CVS {args}``.  Setting a ``norecursedirs``
+    value replaces the default.  Here is a customizing example for avoiding
+    a different set of directories::
+
+        # content of setup.cfg
+        [pytest]
+        norecursedirs = .svn _build tmp*
+
+    This would tell py.test to not recurse into typical subversion or
+    sphinx-build directories or into any ``tmp`` prefixed directory.
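As a side note to this hunk: the fnmatch-style matching that the ``norecursedirs`` text describes can be tried out directly with Python's stdlib ``fnmatch`` module. A minimal sketch (the helper name is mine, not part of pytest) using the customized patterns from the example ini file:

```python
import fnmatch

# the customized patterns from the example setup.cfg above
patterns = [".svn", "_build", "tmp*"]

def should_recurse(basename):
    # a directory is skipped when its basename matches any pattern
    return not any(fnmatch.fnmatch(basename, p) for p in patterns)

assert not should_recurse(".svn")     # exact match against ".svn"
assert not should_recurse("_build")   # exact match against "_build"
assert not should_recurse("tmp123")   # matches "tmp*"
assert should_recurse("tests")        # no pattern matches: recursed into
```

Note that the patterns apply only to the basename, not to the full path, which is exactly what the documentation text above states.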
@@ -1,45 +0,0 @@
-
-Test collection and discovery
-======================================================
-
-.. _`discovered`:
-
-Default filesystem test discovery
------------------------------------------------
-
-Test collection starts from paths specified at the command line or from
-the current directory. Tests are collected ahead of running the first test.
-(This used to be different in earlier versions of ``py.test`` where
-collection and running was interweaved which made test randomization
-and distributed testing harder).
-
-Collection nodes which have children are called "Collectors" and otherwise
-they are called "Items" or "test items". Here is an example of such a
-tree::
-
-    example $ py.test --collectonly test_collectonly.py
-    <Directory 'example'>
-        <Module 'test_collectonly.py'>
-            <Function 'test_function'>
-            <Class 'TestClass'>
-                <Instance '()'>
-                    <Function 'test_method'>
-                    <Function 'test_anothermethod'>
-
-By default all directories not starting with a dot are traversed,
-looking for ``test_*.py`` and ``*_test.py`` files. Those Python
-files are imported under their `package name`_.
-
-The Module collector looks for test functions
-and test classes and methods. Test functions and methods
-are prefixed ``test`` by default. Test classes must
-start with a capitalized ``Test`` prefix.
-
-Customizing error messages
--------------------------------------------------
-
-On test and collection nodes ``py.test`` will invoke
-the ``node.repr_failure(excinfo)`` function which
-you may override and make it return an error
-representation string of your choice. It
-will be reported as a (red) string.
@@ -7,7 +7,9 @@ Usages and Examples
 .. toctree::
    :maxdepth: 2

-   example/controlskip.txt
-   example/mysetup.txt
-   example/detectpytest.txt
-   example/nonpython.txt
+   pythoncollection.txt
+   controlskip.txt
+   mysetup.txt
+   detectpytest.txt
+   nonpython.txt
+   simple.txt
@@ -4,6 +4,8 @@
 Working with non-python tests
 ====================================================

+.. _`yaml plugin`:
+
 a basic example for specifying tests in Yaml files
 --------------------------------------------------------------

@@ -39,7 +41,17 @@ now execute the test specification::

 You get one dot for the passing ``sub1: sub1`` check and one failure.
 Obviously in the above ``conftest.py`` you'll want to implement a more
-interesting interpretation of the yaml-values. Note that ``reportinfo()``
+interesting interpretation of the yaml-values. You can easily write
+your own domain specific testing language this way.
+
+.. note::
+
+    ``repr_failure(excinfo)`` is called for representing test failures.
+    If you create custom collection nodes you can return an error
+    representation string of your choice. It
+    will be reported as a (red) string.
+
+``reportinfo()``
 is used for representing the test location and is also consulted for
 reporting in ``verbose`` mode::

@@ -11,22 +11,22 @@ class YamlFile(py.test.collect.File):
         import yaml # we need a yaml parser, e.g. PyYAML
         raw = yaml.load(self.fspath.open())
         for name, spec in raw.items():
-            yield UsecaseItem(name, self, spec)
+            yield YamlItem(name, self, spec)

-class UsecaseItem(py.test.collect.Item):
+class YamlItem(py.test.collect.Item):
     def __init__(self, name, parent, spec):
-        super(UsecaseItem, self).__init__(name, parent)
+        super(YamlItem, self).__init__(name, parent)
         self.spec = spec

     def runtest(self):
         for name, value in self.spec.items():
             # some custom test execution (dumb example follows)
             if name != value:
-                raise UsecaseException(self, name, value)
+                raise YamlException(self, name, value)

     def repr_failure(self, excinfo):
         """ called when self.runtest() raises an exception. """
-        if excinfo.errisinstance(UsecaseException):
+        if isinstance(excinfo.value, YamlException):
             return "\n".join([
                 "usecase execution failed",
                 " spec failed: %r: %r" % excinfo.value.args[1:3],
@@ -36,5 +36,5 @@ class UsecaseItem(py.test.collect.Item):
     def reportinfo(self):
         return self.fspath, 0, "usecase: %s" % self.name

-class UsecaseException(Exception):
+class YamlException(Exception):
     """ custom exception for error reporting. """
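Stripped of the py.test collection machinery, the ``runtest``/``repr_failure`` pair changed in this hunk reduces to a key-equals-value check. A minimal stand-alone sketch (a plain dict stands in for a parsed YAML file; the ``run_spec`` helper is my own illustration):

```python
class YamlException(Exception):
    """custom exception for error reporting, mirroring the example above."""

def run_spec(item, spec):
    # mirror YamlItem.runtest(): every key must equal its value
    for name, value in spec.items():
        if name != value:
            raise YamlException(item, name, value)

run_spec("demo", {"sub1": "sub1"})  # passes: key equals value
try:
    run_spec("demo", {"hello": "world"})
except YamlException as e:
    # mirror repr_failure(): format the offending key/value pair
    print("usecase execution failed")
    print(" spec failed: %r: %r" % e.args[1:3])
```

The real plugin gets the same effect by raising from ``runtest()`` and letting py.test call ``repr_failure(excinfo)`` on the collection node.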
@@ -0,0 +1,29 @@
+Changing standard (Python) test discovery
+===============================================
+
+changing directory recursion
+-----------------------------------------------------
+
+You can set the :confval:`norecursedirs` option in an ini-file, for example your ``setup.cfg`` in the project root directory::
+
+    # content of setup.cfg
+    [pytest]
+    norecursedirs = .svn _build tmp*
+
+This would tell py.test to not recurse into typical subversion or sphinx-build directories or into any ``tmp`` prefixed directory.
+
+
+finding out what is collected
+-----------------------------------------------
+
+You can always peek at the collection tree without running tests like this::
+
+    $ py.test --collectonly collectonly.py
+    <Directory 'example'>
+        <Module 'test_collectonly.py'>
+            <Function 'test_function'>
+            <Class 'TestClass'>
+                <Instance '()'>
+                    <Function 'test_method'>
+                    <Function 'test_anothermethod'>
@@ -0,0 +1,137 @@
+
+.. highlightlang:: python
+
+simple patterns using hooks
+==========================================================
+
+pass different values to a test function, depending on command line options
+----------------------------------------------------------------------------
+
+Suppose we want to write a test that depends on a command line option.
+Here is a basic pattern to achieve this::
+
+    # content of test_sample.py
+    def test_answer(cmdopt):
+        if cmdopt == "type1":
+            print ("first")
+        elif cmdopt == "type2":
+            print ("second")
+        assert 0  # to see what was printed
+
+For this to work we need to add a command line option and
+provide the ``cmdopt`` through a function argument factory::
+
+    # content of conftest.py
+    def pytest_addoption(parser):
+        parser.addoption("--cmdopt", action="store", default="type1",
+            help="my option: type1 or type2")
+
+    def pytest_funcarg__cmdopt(request):
+        return request.config.option.cmdopt
+
+Let's run this without supplying our new command line option::
+
+    $ py.test -q
+    F
+    ================================= FAILURES =================================
+    _______________________________ test_answer ________________________________
+
+    cmdopt = 'type1'
+
+    def test_answer(cmdopt):
+        if cmdopt == "type1":
+            print ("first")
+        elif cmdopt == "type2":
+            print ("second")
+    >   assert 0  # to see what was printed
+    E   assert 0
+
+    test_sample.py:6: AssertionError
+    ----------------------------- Captured stdout ------------------------------
+    first
+    1 failed in 0.02 seconds
+
+And now with supplying a command line option::
+
+    $ py.test -q --cmdopt=type2
+    F
+    ================================= FAILURES =================================
+    _______________________________ test_answer ________________________________
+
+    cmdopt = 'type2'
+
+    def test_answer(cmdopt):
+        if cmdopt == "type1":
+            print ("first")
+        elif cmdopt == "type2":
+            print ("second")
+    >   assert 0  # to see what was printed
+    E   assert 0
+
+    test_sample.py:6: AssertionError
+    ----------------------------- Captured stdout ------------------------------
+    second
+    1 failed in 0.02 seconds
+
+Ok, this completes the basic pattern. However, one often rather
+wants to process command line options outside of the test and
+pass in different or more complex objects. See the
+next example or refer to :ref:`mysetup` for more information
+on real-life examples.
+
+generating parameter combinations, depending on command line
+----------------------------------------------------------------------------
+
+Let's say we want to execute a test with different parameters
+and the parameter range shall be determined by a command
+line argument. Let's first write a simple computation test::
+
+    # content of test_compute.py
+
+    def test_compute(param1):
+        assert param1 < 4
+
+Now we add a test configuration like this::
+
+    # content of conftest.py
+
+    def pytest_addoption(parser):
+        parser.addoption("--all", action="store_true",
+            help="run all combinations")
+
+    def pytest_generate_tests(metafunc):
+        if 'param1' in metafunc.funcargnames:
+            if metafunc.config.option.all:
+                end = 5
+            else:
+                end = 2
+            for i in range(end):
+                metafunc.addcall(funcargs={'param1': i})
+
+This means that we only run 2 tests if we do not pass ``--all``::
+
+    $ py.test -q test_compute.py
+    ..
+    2 passed in 0.01 seconds
+
+We run only two computations, so we see two dots.
+Let's run the full monty::
+
+    $ py.test -q --all test_compute.py
+    ....F
+    ================================= FAILURES =================================
+    _____________________________ test_compute[4] ______________________________
+
+    param1 = 4
+
+    def test_compute(param1):
+    >   assert param1 < 4
+    E   assert 4 < 4
+
+    test_compute.py:3: AssertionError
+    1 failed, 4 passed in 0.03 seconds
+
+As expected when running the full range of ``param1`` values
+we'll get an error on the last one.
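The ``pytest_generate_tests`` logic added in this hunk is just ordinary Python; the range selection can be factored out and checked on its own (the function name is mine for illustration, not part of the pytest API):

```python
def param1_values(run_all):
    # mirrors the hook above: 5 parameter values with --all, otherwise 2
    end = 5 if run_all else 2
    return list(range(end))

assert param1_values(False) == [0, 1]
assert param1_values(True) == [0, 1, 2, 3, 4]

# the test body asserts param1 < 4, so exactly the value 4 fails:
failures = [i for i in param1_values(True) if not i < 4]
assert failures == [4]
```

This matches the documented runs: two dots without ``--all``, and ``....F`` (one failure at ``param1 = 4``) with it.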
@@ -0,0 +1,32 @@
+changing Python test discovery patterns
+--------------------------------------------------
+
+You can influence python test file, function and class prefixes through
+the :confval:`python_patterns` configuration value to determine which
+files are checked and which test functions are found. Example for using
+a scheme that builds on ``check`` rather than on ``test`` prefixes::
+
+    # content of setup.cfg
+    [pytest]
+    python_patterns =
+        files: check_*.py
+        functions: check_
+        classes: Check
+
+See
+:confval:`python_funcprefixes` and :confval:`python_classprefixes`.
+
+
+changing test file discovery
+-----------------------------------------------------
+
+You can specify patterns where python tests are found::
+
+    python_testfilepatterns =
+        testing/**/{purebasename}.py
+        testing/*.py
+
+.. note::
+
+    conftest.py files are never considered for test discovery
@@ -4,6 +4,8 @@ creating and managing test function arguments

 .. currentmodule:: pytest.plugin.python

+.. _`funcargs`:
+.. _`funcarg mechanism`:

 Test function arguments and factories
@@ -34,18 +36,18 @@ Running the test looks like this::

     =========================== test session starts ============================
     platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev17
     test path 1: test_simplefactory.py

     test_simplefactory.py F

     ================================= FAILURES =================================
     ______________________________ test_function _______________________________

     myfuncarg = 42

     def test_function(myfuncarg):
     >       assert myfuncarg == 17
     E       assert 42 == 17

     test_simplefactory.py:5: AssertionError
     ========================= 1 failed in 0.02 seconds =========================
@@ -118,7 +120,8 @@ example:
 Basic generated test example
 ----------------------------

-Let's consider this test module::
+Let's consider a test module which uses the ``pytest_generate_tests``
+hook to generate several calls to the same test function::

     # content of test_example.py
     def pytest_generate_tests(metafunc):
@@ -135,23 +138,24 @@ Running this::

     =========================== test session starts ============================
     platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev17
     test path 1: test_example.py

     test_example.py .........F

     ================================= FAILURES =================================
     _______________________________ test_func[9] _______________________________

     numiter = 9

     def test_func(numiter):
     >       assert numiter < 9
     E       assert 9 < 9

     test_example.py:7: AssertionError
     ==================== 1 failed, 9 passed in 0.03 seconds ====================

 Note that the ``pytest_generate_tests(metafunc)`` hook is called during
-the test collection phase. You can have a look at it with this::
+the test collection phase which is separate from the actual test running.
+Let's just look at what is collected::

     $ py.test --collectonly test_example.py
     <Directory 'doc-exec-167'>
@@ -173,7 +177,7 @@ If you want to select only the run with the value ``7`` you could do::

     =========================== test session starts ============================
     platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev17 -- /home/hpk/venv/0/bin/python
     test path 1: test_example.py

     test_example.py:6: test_func[0] PASSED
     test_example.py:6: test_func[1] PASSED
     test_example.py:6: test_func[2] PASSED
|
|||
test_example.py:6: test_func[7] PASSED
|
||||
test_example.py:6: test_func[8] PASSED
|
||||
test_example.py:6: test_func[9] FAILED
|
||||
|
||||
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_func[9] _______________________________
|
||||
|
||||
|
||||
numiter = 9
|
||||
|
||||
|
||||
def test_func(numiter):
|
||||
> assert numiter < 9
|
||||
E assert 9 < 9
|
||||
|
||||
|
||||
test_example.py:7: AssertionError
|
||||
==================== 1 failed, 9 passed in 0.04 seconds ====================
|
||||
|
||||
|
|
|
@@ -1,8 +1,6 @@
 Installation and Getting Started
 ===================================

-.. _`easy_install`:
-
 **Compatibility**: Python 2.4-3.2, Jython, PyPy on Unix/Posix and Windows

 Installation
@@ -17,27 +15,26 @@ To check your installation has installed the correct version::

     $ py.test --version

-If you get an error, checkout :ref:`installation issues`.
+If you get an error, check out :ref:`installation issues`.

 Our first test run
 ----------------------------------------------------------

-Let's create a small file with a test function testing a function
-computes a certain value::
+Let's create a first test file with a simple test function::

     # content of test_sample.py
     def func(x):
         return x + 1

     def test_answer():
         assert func(3) == 5

-You can execute the test function::
+That's it. You can execute the test function now::

-    $ py.test test_sample.py
+    $ py.test
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev17
-    test path 1: test_sample.py
+    platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev18
+    test path 1: /tmp/doc-exec-211

     test_sample.py F
@@ -49,19 +46,26 @@ You can execute the test function::
     E    assert 4 == 5
     E     +  where 4 = func(3)

-    test_sample.py:4: AssertionError
+    test_sample.py:5: AssertionError
     ========================= 1 failed in 0.02 seconds =========================

-We told py.test to run the ``test_sample.py`` file and it :ref:`discovered` the
-``test_answer`` function because of the ``test_`` prefix. We got a
-failure because our little ``func(3)`` call did not return ``5``.
+py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``. The report is formatted using the :ref:`standard traceback reporting`.

 .. note::

-    You can simply use the `assert statement`_ for coding expectations because
-    intermediate values will be presented to you. Or to put it bluntly,
-    there is no need to learn all `the JUnit legacy methods`_ for expressing
-    assertions.
+    You can simply use the ``assert`` statement for coding expectations because
+    intermediate values will be presented to you. This is much easier than
+    learning all `the JUnit legacy methods`_ which are even inconsistent
+    with Python's own coding guidelines (but consistent with
+    Java-style naming).
+
+    There is only one seldom-hit caveat to using asserts: if your
+    assertion expression fails and has side effects then re-evaluating
+    it for presenting intermediate values can go wrong. It's easy to fix:
+    compute the value ahead of the assert and then do the
+    assertion, or use the assert "message" syntax::
+
+        assert expr, "message"  # show "message" if expr is not True
+
+.. _`the JUnit legacy methods`: http://docs.python.org/library/unittest.html#test-cases
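The side-effect caveat described in the note above can be illustrated with plain Python. The helper below is purely hypothetical; the point is the pattern of computing the value ahead of the assert so that a failure report never needs to re-evaluate a side-effecting expression:

```python
calls = []

def next_id():
    # side effect: re-evaluating this expression would append again
    calls.append(1)
    return len(calls)

value = next_id()               # compute ahead of the assert
assert value == 1, "expected first id to be 1, got %r" % value
assert len(calls) == 1          # the side effect ran exactly once
```

Equivalently, the assert "message" syntax shown above sidesteps re-evaluation by supplying the explanation explicitly.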
@@ -88,7 +92,7 @@ Running it with, this time in "quiet" reporting mode::
     .
     1 passed in 0.01 seconds

-.. todo:: For further ways to assert exceptions see the :pyfunc:`raises`
+.. todo:: For further ways to assert exceptions see the `raises`

 Grouping multiple tests in a class
 --------------------------------------------------------------
@@ -107,16 +111,16 @@ tests in a class like this::
         x = "hello"
         assert hasattr(x, 'check')

-The two tests will be discovered because of the default `automatic test
-discovery`_. There is no need to subclass anything. If we now run
-the module we'll see one passed and one failed test::
+The two tests are found because of the standard :ref:`test discovery`.
+There is no need to subclass anything. We can simply
+run the module by passing its filename::

     $ py.test -q test_class.py
     .F
     ================================= FAILURES =================================
     ____________________________ TestClass.test_two ____________________________

-    self = <test_class.TestClass instance at 0x1732368>
+    self = <test_class.TestClass instance at 0x19c6638>

     def test_two(self):
         x = "hello"
@@ -126,27 +130,74 @@ the module we'll see one passed and one failed test::
     test_class.py:8: AssertionError
     1 failed, 1 passed in 0.02 seconds

-where to go from here
+The first test passed, the second failed. Again we can easily see
+the intermediate values used in the assertion, helping us to
+understand the reason for the failure.
+
+Going functional: requesting a unique temporary directory
+--------------------------------------------------------------
+
+For functional tests one often needs to create some files
+and pass them to application objects. py.test provides
+the versatile :ref:`funcarg mechanism` which allows you to request
+arbitrary resources, for example a unique temporary directory::
+
+    # content of test_tmpdir.py
+    def test_needsfiles(tmpdir):
+        print tmpdir
+        assert 0
+
+We list the name ``tmpdir`` in the test function signature and
+py.test will look up and call a factory to create the resource
+before performing the test function call. Let's just run it::
+
+    $ py.test -q test_tmpdir.py
+    F
+    ================================= FAILURES =================================
+    _____________________________ test_needsfiles ______________________________
+
+    tmpdir = local('/tmp/pytest-1306/test_needsfiles0')
+
+    def test_needsfiles(tmpdir):
+        print tmpdir
+    >   assert 0
+    E   assert 0
+
+    test_tmpdir.py:3: AssertionError
+    ----------------------------- Captured stdout ------------------------------
+    /tmp/pytest-1306/test_needsfiles0
+    1 failed in 0.04 seconds
+
+Before the test runs, a unique-per-test-invocation temporary directory
+was created. More info at :ref:`tmpdir handling`.
+
+You can find out what kind of builtin :ref:`funcargs` exist by typing::
+
+    py.test --funcargs  # shows builtin and custom function arguments
+
+where to go next
 -------------------------------------

 Here are a few suggestions where to go next:

 * :ref:`cmdline` for command line invocation examples
 * :ref:`good practises` for virtualenv, test layout, genscript support
-* :ref:`apiref` for documentation and examples on writing Python tests
+* :ref:`apiref` for documentation and examples on using py.test
 * :ref:`plugins` managing and writing plugins

 .. _`installation issues`:

-Installation issues
+Known Installation issues
 ------------------------------

 easy_install or pip not found?
 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-Consult distribute_ to install the ``easy_install`` tool on your machine.
-You may also use the original but somewhat older `setuptools`_ project
-although we generally recommend to use ``distribute`` because it contains
-more bug fixes and also works for Python3.
+Consult `distribute docs <distribute>`_ to install the ``easy_install``
+tool on your machine. You may also use the original but somewhat older
+`setuptools`_ project although we generally recommend to use
+``distribute`` because it contains more bug fixes and also works for
+Python3.

 For Python2 you can also consult pip_ for the popular ``pip`` tool.
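The ``tmpdir`` funcarg documented in the hunk above hands each test a fresh directory. The same effect can be approximated with the stdlib alone; this is a rough sketch of the idea, not the py.test implementation:

```python
import os
import tempfile

def make_test_tmpdir(test_name):
    # one fresh, uniquely named directory per test invocation,
    # analogous to /tmp/pytest-NNN/test_needsfiles0 in the output above
    return tempfile.mkdtemp(prefix=test_name + "-")

d = make_test_tmpdir("test_needsfiles")
path = os.path.join(d, "hello.txt")
with open(path, "w") as f:
    f.write("content")

assert os.path.isdir(d)
assert open(path).read() == "content"
```

py.test additionally numbers the directories per test and keeps the last few runs around for inspection, which plain ``mkdtemp`` does not do.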
@@ -5,7 +5,7 @@
 Good Integration Practises
 =================================================

-work with virtual environments
+Work with virtual environments
 -----------------------------------------------------------

 We recommend to work with virtualenv_ environments and use easy_install_
@@ -21,6 +21,24 @@ server Hudson_.
 .. _`buildout`: http://www.buildout.org/
 .. _pip: http://pypi.python.org/pypi/pip

+.. _`test discovery`:
+
+Conventions for Python test discovery
+-------------------------------------------------
+
+``py.test`` implements the following standard test discovery:
+
+* collection starts from initial command line arguments
+  which may be directories, filenames or test ids.
+* recurse into directories, unless they match :confval:`norecursedirs`
+* ``test_*.py`` or ``*_test.py`` files, imported by their `package name`_.
+* ``Test`` prefixed test classes (without an ``__init__`` method)
+* ``test_`` prefixed test functions or methods are test items
+
+For changing and customization examples, see :doc:`example/pythoncollection`.
+
+py.test additionally discovers tests using the standard
+:ref:`unittest.TestCase <unittest.TestCase>` subclassing technique.
+
 Choosing a test layout / import rules
 ------------------------------------------
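The file-name part of the discovery conventions listed above can be expressed as two fnmatch patterns. A small sketch for checking candidate file names (the helper is mine, for illustration only):

```python
import fnmatch

def is_test_file(basename):
    # the documented test_*.py / *_test.py convention
    return (fnmatch.fnmatch(basename, "test_*.py")
            or fnmatch.fnmatch(basename, "*_test.py"))

assert is_test_file("test_sample.py")
assert is_test_file("sample_test.py")
assert not is_test_file("conftest.py")   # no test_ prefix, no _test suffix
assert not is_test_file("sample.py")
```

The class and function prefixes (``Test``/``test_``) are then applied inside each matched module, per the bullet list above.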
@@ -57,6 +75,8 @@ You can always run your tests by pointing to it::

     py.test               # run all tests below current dir
     ...

+.. _`package name`:
+
 .. note::

     Test modules are imported under their fully qualified name as follows:
@@ -12,7 +12,7 @@ Welcome to ``py.test`` documentation:
     overview
     apiref
     plugins
-    examples
+    example/index
     talks
     develop
@@ -3,7 +3,7 @@ Writing, managing and understanding plugins

 .. _`local plugin`:

-py.test implements all aspects of configuration, collection, running and reporting by calling `well specified hooks`_. Virtually any Python module can be registered as a plugin. It can implement any number of hook functions (usually two or three) which all have a ``pytest_`` prefix, making hook functions easy to distinguish and find. There are three basic locations types::
+py.test implements all aspects of configuration, collection, running and reporting by calling `well specified hooks`_. Virtually any Python module can be registered as a plugin. It can implement any number of hook functions (usually two or three) which all have a ``pytest_`` prefix, making hook functions easy to distinguish and find. There are three basic location types:

 * builtin plugins: loaded from py.test's own `pytest/plugin`_ directory.
 * `external plugins`_: modules discovered through `setuptools entry points`_
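The ``pytest_`` prefix convention mentioned in the hunk above is what makes hooks discoverable by name. A toy sketch of prefix-based discovery over a module-like namespace (the ``find_hooks`` function is my own illustration, not pytest's actual registration code):

```python
def find_hooks(namespace, prefix="pytest_"):
    # collect callable names carrying the hook prefix
    return sorted(name for name, obj in namespace.items()
                  if name.startswith(prefix) and callable(obj))

def pytest_addoption(parser): pass
def pytest_configure(config): pass
def helper(): pass  # no pytest_ prefix: never treated as a hook

plugin_namespace = {
    "pytest_addoption": pytest_addoption,
    "pytest_configure": pytest_configure,
    "helper": helper,
}
assert find_hooks(plugin_namespace) == ["pytest_addoption", "pytest_configure"]
```

This is why a plain module can act as a plugin: registration only needs to scan its namespace for the prefixed callables.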
@@ -55,7 +55,7 @@ earlier than further away ones.
 .. _`installing plugins`:
 .. _`external plugins`:

-Installing External Plugins
+Installing External Plugins / Searching
 ------------------------------------------------------

 Installing a plugin happens through any usual Python installation
@@ -72,9 +72,7 @@ de-install it. You can find a list of valid plugins through a
 .. _`available installable plugins`:
 .. _`pytest- pypi.python.org search`: http://pypi.python.org/pypi?%3Aaction=search&term=pytest-&submit=search

-.. _`setuptools entry points`:
-
-Writing an installable plugin
+Writing a plugin by looking at examples
 ------------------------------------------------------

 .. _`Distribute`: http://pypi.python.org/pypi/distribute
@@ -83,9 +81,18 @@ Writing an installable plugin

 If you want to write a plugin, there are many real-life examples
 you can copy from:

+* a custom collection example plugin: :ref:`yaml plugin`
 * around 20 `builtin plugins`_ which comprise py.test's own functionality
 * around 10 `external plugins`_ providing additional features

 All of these plugins are using the documented `well specified hooks`_
 to implement their wide-ranging functionality.

+.. _`setuptools entry points`:
+
+Making your plugin installable by others
+-----------------------------------------------
+
 If you want to make your plugin externally available, you
 may define a so called entry point for your distribution so
 that ``py.test`` finds your plugin module. Entry points are
@@ -149,9 +156,6 @@ will be loaded as well. You can also use dotted path like this::

 which will import the specified module as a py.test plugin.

-.. _`setuptools entry points`:
-.. _registered:
-
 Accessing another plugin by name
 --------------------------------------------
@@ -1,4 +1,6 @@
+
+.. _`tmpdir handling`:

 temporary directories and files
 ================================================

@@ -18,7 +20,7 @@ and more. Here is an example test usage::
     p = tmpdir.mkdir("sub").join("hello.txt")
     p.write("content")
     assert p.read() == "content"
-    assert len(os.listdir(str(tmpdir))) == 1
+    assert len(tmpdir.listdir()) == 1
     assert 0

 Running this would result in a passed test except for the last
@@ -41,8 +43,6 @@ Running this would result in a passed test except for the last
     p.write("content")
     assert p.read() == "content"
-    assert len(os.listdir(str(tmpdir))) == 1
-
 >   assert 0
 E   assert 0

 test_tmpdir.py:7: AssertionError
 ========================= 1 failed in 0.04 seconds =========================
@@ -1,3 +1,6 @@
+
+.. _`unittest.TestCase`:
+
 unittest.TestCase support
 =====================================================================
