improve release announcement, shift and fix examples a bit. Bump version to 2.2.0
parent f7648e11d8
commit 6b4e6eee09
@@ -1,4 +1,4 @@
-Changes between 2.1.3 and XXX 2.2.0
+Changes between 2.1.3 and 2.2.0
 ----------------------------------------
 
 - fix issue90: introduce eager tearing down of test items so that
@@ -1,2 +1,2 @@
 #
-__version__ = '2.2.0.dev11'
+__version__ = '2.2.0'
@@ -16,9 +16,9 @@ def pytest_addoption(parser):
 
     group._addoption("-m",
         action="store", dest="markexpr", default="", metavar="MARKEXPR",
-        help="only run tests which match given mark expression. "
-             "An expression is a python expression which can use "
-             "marker names.")
+        help="only run tests matching given mark expression. "
+             "example: -m 'mark1 and not mark2'."
+        )
 
     group.addoption("--markers", action="store_true", help=
         "show markers (builtin, plugin and per-project ones).")
@@ -39,7 +39,7 @@ def pytest_configure(config):
     config.addinivalue_line("markers",
         "parametrize(argnames, argvalues): call a test function multiple "
         "times passing in multiple different argument value sets. Example: "
-        "@parametrize(arg1, [1,2]) would lead to two calls of the decorated "
+        "@parametrize('arg1', [1,2]) would lead to two calls of the decorated "
         "test function, one with arg1=1 and another with arg1=2."
     )
 
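As an aside, the ``addinivalue_line`` call above is also the mechanism a third-party plugin can use to register its own markers; a minimal sketch, with a made-up ``needs_db`` marker name::

    # hypothetical plugin code
    def pytest_configure(config):
        # register a custom marker so that "py.test --markers" lists it
        # and so that "--strict" runs accept it
        config.addinivalue_line("markers",
            "needs_db: mark a test as requiring a database connection.")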
@@ -2,29 +2,30 @@ py.test 2.2.0: test marking++, parametrization++ and duration profiling
 ===========================================================================
 
 pytest-2.2.0 is a test-suite compatible release of the popular
-py.test testing tool. There are a couple of new features and improvements:
+py.test testing tool. Plugins might need upgrades. It comes
+with these improvements:
 
-* "--duration=N" option showing the N slowest test execution
-  or setup/teardown calls.
+* more powerful parametrization of tests:
 
-* @pytest.mark.parametrize decorator for runnin test functions
-  with multiple values and a new more powerful metafunc.parametrize()
-  helper to be used from pytest_generate_tests. Multiple parametrize
-  functions can now be invoked for the same test function.
+  - new @pytest.mark.parametrize decorator for running test functions
+  - new metafunc.parametrize() API for parametrizing arguments independently
+  - see examples at http://pytest.org/latest/example/parametrize.html
+  - NOTE that parametrize() related APIs are still a bit experimental
+    and might change in future releases.
 
-* "-m markexpr" option for selecting tests according to their mark and
-  a new "markers" ini-variable for registering test markers. The new "--strict"
-  option will bail out with an error if you are using unregistered markers.
+* improved handling of test markers and refined marking mechanism:
 
-* teardown functions are now more eagerly called so that they appear
-  more directly connected to the last test item that needed a particular
-  fixture/setup.
+  - "-m markexpr" option for selecting tests according to their mark
+  - a new "markers" ini-variable for registering test markers for your project
+  - the new "--strict" option bails out with an error if using unregistered markers.
+  - see examples at http://pytest.org/latest/example/markers.html
 
-Usage of improved parametrize is documented in examples at
-http://pytest.org/latest/example/parametrize.html
+* duration profiling: new "--durations=N" option showing the N slowest test
+  execution or setup/teardown calls. This is most useful if you want to
+  find out where your slowest test code is.
 
-Usages of the improved marking mechanism is illustrated by a couple
-of initial examples, see http://pytest.org/latest/example/markers.html
+* also 2.2.0 performs more eager calling of teardown/finalizer functions
+  resulting in better and more accurate reporting when they fail
 
 Besides there is the usual set of bug fixes along with a cleanup of
 pytest's own test suite allowing it to run on a wider range of environments.
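To make the two parametrization entry points mentioned above concrete, here is a minimal sketch; names and values are illustrative only::

    # hypothetical test_parametrize_sketch.py
    import pytest

    # the decorator parametrizes arguments directly at the test function
    @pytest.mark.parametrize("x", [1, 2, 3])
    def test_positive(x):
        assert x > 0

    # the metafunc API does the same from a pytest_generate_tests hook,
    # e.g. in a conftest.py or a plugin, independently of the test body
    def pytest_generate_tests(metafunc):
        if "y" in metafunc.funcargnames:
            metafunc.parametrize("y", [10, 20])

    def test_independent(y):
        assert y % 10 == 0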
@@ -38,8 +39,8 @@ If you want to install or upgrade pytest you might just type::
     pip install -U pytest # or
     easy_install -U pytest
 
-Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, XXX for their
-help and feedback on various issues.
+Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri,
+Alfredo Deza and all who gave feedback or sent bug reports.
 
 best,
 holger krekel
@@ -55,8 +56,41 @@ While test suites should work unchanged you might need to upgrade plugins:
 
 * Other plugins might need an upgrade if they implement
   the ``pytest_runtest_logreport`` hook which now is called unconditionally
-  for the setup/teardown fixture phases of a test. You can just choose to
-  ignore them by inserting "if rep.when != 'call': return". Note that
-  most code probably "just" works because the hook was already called
-  for failing setup/teardown phases of a test.
+  for the setup/teardown fixture phases of a test. You may choose to
+  ignore setup/teardown failures by inserting "if rep.when != 'call': return"
+  or something similar. Note that most code probably "just" works because
+  the hook was already called for failing setup/teardown phases of a test,
+  so a plugin should have been ready to grok such reports already.
+
+
+Changes between 2.1.3 and 2.2.0
+----------------------------------------
+
+- fix issue90: introduce eager tearing down of test items so that
+  teardown functions are called earlier.
+- add an all-powerful metafunc.parametrize function which allows
+  parametrizing test function arguments in multiple steps and therefore
+  from independent plugins and places.
+- add a @pytest.mark.parametrize helper which makes it easy to
+  call a test function with different argument values.
+- add examples to the "parametrize" example page, including a quick port
+  of Test scenarios and the new parametrize function and decorator.
+- introduce registration for "pytest.mark.*" helpers via ini-files
+  or through plugin hooks. Also introduce a "--strict" option which
+  will treat unregistered markers as errors,
+  allowing you to avoid typos and maintain a well-described set of markers
+  for your test suite. See examples at http://pytest.org/latest/mark.html
+  and its links.
+- issue50: introduce "-m marker" option to select tests based on markers
+  (this is a stricter and more predictable version of '-k' in that "-m"
+  only matches complete markers and has more obvious rules for and/or
+  semantics).
+- new feature to help optimize the speed of your tests:
+  --durations=N option for displaying the N slowest test calls
+  and setup/teardown methods.
+- fix issue87: --pastebin now works with python3
+- fix issue89: --pdb with unexpected exceptions in doctest works more sensibly
+- fix and cleanup pytest's own test suite to not leak FDs
+- fix issue83: link to generated funcarg list
+- fix issue74: pyarg module names are now checked against imp.find_module false positives
+- fix compatibility with twisted/trial-11.1.0 use cases
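For plugin authors, a minimal sketch of the guard mentioned above; the hook name and the ``when`` attribute are as described in the text, the body is illustrative::

    # hypothetical plugin code
    def pytest_runtest_logreport(report):
        # with 2.2.0 this hook also fires for the setup/teardown phases;
        # bail out early if the plugin only cares about the test call itself
        if report.when != 'call':
            return
        # ... handle the call-phase report as before ...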
@@ -23,7 +23,7 @@ you will see the return value of the function call::
 
     $ py.test test_assert1.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
 
     test_assert1.py F
@@ -37,7 +37,7 @@ you will see the return value of the function call::
     E        + where 3 = f()
 
     test_assert1.py:5: AssertionError
-    ========================= 1 failed in 0.03 seconds =========================
+    ========================= 1 failed in 0.02 seconds =========================
 
 py.test has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary
@@ -105,7 +105,7 @@ if you run this module::
 
     $ py.test test_assert2.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
 
     test_assert2.py F
@@ -28,7 +28,7 @@ You can ask for available builtin or project-custom
 
     $ py.test --funcargs
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collected 0 items
     pytestconfig
         the pytest config object with access to command line opts.
@@ -75,7 +75,5 @@ You can ask for available builtin or project-custom
     See http://docs.python.org/library/warnings.html for information
         on warning categories.
 
-    cov
-        A pytest funcarg that provides access to the underlying coverage object.
-
     ============================= in 0.00 seconds =============================
@@ -64,7 +64,7 @@ of the failing function and hide the other one::
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
 
     test_module.py .F
@@ -78,8 +78,8 @@ of the failing function and hide the other one::
 
     test_module.py:9: AssertionError
     ----------------------------- Captured stdout ------------------------------
-    setting up <function test_func2 at 0x10130ccf8>
-    ==================== 1 failed, 1 passed in 0.03 seconds ====================
+    setting up <function test_func2 at 0x101353a28>
+    ==================== 1 failed, 1 passed in 0.02 seconds ====================
 
 Accessing captured output from a test function
 ---------------------------------------------------
@@ -44,10 +44,9 @@ then you can just invoke ``py.test`` without command line options::
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
 
     mymodule.py .
 
-    ========================= 1 passed in 0.06 seconds =========================
-    [?1034h
+    ========================= 1 passed in 0.05 seconds =========================
@@ -1,10 +1,179 @@
+
+.. _`mark examples`:
+
+Working with custom markers
+=================================================
+
+Here are some examples using the :ref:`mark` mechanism.
+
+marking test functions and selecting them for a run
+----------------------------------------------------
+
+You can "mark" a test function with custom metadata like this::
+
+    # content of test_server.py
+
+    import pytest
+    @pytest.mark.webtest
+    def test_send_http():
+        pass # perform some webtest test for your app
+    def test_something_quick():
+        pass
+
+.. versionadded:: 2.2
+You can then restrict a test run to only run tests marked with ``webtest``::
+
+    $ py.test -v -m webtest
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
+    collecting ... collected 2 items
+
+    test_server.py:3: test_send_http PASSED
+
+    =================== 1 tests deselected by "-m 'webtest'" ===================
+    ================== 1 passed, 1 deselected in 0.01 seconds ==================
+
+Or the inverse, running all tests except the webtest ones::
+
+    $ py.test -v -m "not webtest"
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
+    collecting ... collected 2 items
+
+    test_server.py:6: test_something_quick PASSED
+
+    ================= 1 tests deselected by "-m 'not webtest'" =================
+    ================== 1 passed, 1 deselected in 0.01 seconds ==================
+Registering markers
+-------------------------------------
+
+.. versionadded:: 2.2
+
+.. ini-syntax for custom markers:
+
+Registering markers for your test suite is simple::
+
+    # content of pytest.ini
+    [pytest]
+    markers =
+        webtest: mark a test as a webtest.
+
+You can ask which markers exist for your test suite - the list includes our just-defined ``webtest`` marker::
+
+    $ py.test --markers
+    @pytest.mark.webtest: mark a test as a webtest.
+
+    @pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform.
+
+    @pytest.mark.xfail(*conditions, reason=None, run=True): mark the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
+
+    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.
+
+    @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
+
+    @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
+
+For an example on how to add and work with markers from a plugin, see
+:ref:`adding a custom marker from a plugin`.
+
+.. note::
+
+    It is recommended to explicitly register markers so that:
+
+    * there is one place in your test suite defining your markers
+
+    * asking for existing markers via ``py.test --markers`` gives good output
+
+    * typos in function markers are treated as an error if you use
+      the ``--strict`` option. Later versions of py.test are probably
+      going to treat non-registered markers as an error.
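A short sketch of how registration plays together with ``--strict``; the second marker is made up::

    # content of a hypothetical pytest.ini
    [pytest]
    markers =
        webtest: mark a test as a webtest.
        slowtest: mark a test as slow.

With such a file in place, ``py.test --strict`` errors out on any ``pytest.mark`` name that is neither registered here nor by a plugin, catching marker typos early.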
+.. _`scoped-marking`:
+
+Marking whole classes or modules
+----------------------------------------------------
+
+If you are programming with Python2.6 you may use ``pytest.mark`` decorators
+with classes to apply markers to all of their test methods::
+
+    # content of test_mark_classlevel.py
+    import pytest
+    @pytest.mark.webtest
+    class TestClass:
+        def test_startup(self):
+            pass
+        def test_startup_and_more(self):
+            pass
+
+This is equivalent to directly applying the decorator to the
+two test functions.
+
+To remain backward-compatible with Python2.4 you can also set a
+``pytestmark`` attribute on a TestClass like this::
+
+    import pytest
+
+    class TestClass:
+        pytestmark = pytest.mark.webtest
+
+or if you need to use multiple markers you can use a list::
+
+    import pytest
+
+    class TestClass:
+        pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
+
+You can also set a module level marker::
+
+    import pytest
+    pytestmark = pytest.mark.webtest
+
+in which case it will be applied to all functions and
+methods defined in the module.
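By analogy with the class-level list shown above, a module-level ``pytestmark`` can likewise be a list; a short sketch combining the two forms::

    import pytest
    pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]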
+Using ``-k TEXT`` to select tests
+----------------------------------------------------
+
+You can use the ``-k`` command line option to only run tests with names that match the given argument::
+
+    $ py.test -k send_http  # running with the above defined examples
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 4 items
+
+    test_server.py .
+
+    =================== 3 tests deselected by '-ksend_http' ====================
+    ================== 1 passed, 3 deselected in 0.02 seconds ==================
+
+And you can also run all tests except the ones that match the keyword::
+
+    $ py.test -k-send_http
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 4 items
+
+    test_mark_classlevel.py ..
+    test_server.py .
+
+    =================== 1 tests deselected by '-k-send_http' ===================
+    ================== 3 passed, 1 deselected in 0.03 seconds ==================
+
+Or to only select the class::
+
+    $ py.test -kTestClass
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 4 items
+
+    test_mark_classlevel.py ..
+
+    =================== 2 tests deselected by '-kTestClass' ====================
+    ================== 2 passed, 2 deselected in 0.02 seconds ==================
+
+.. _`adding a custom marker from a plugin`:
+
+custom marker and command line option to control test runs
@@ -49,34 +218,42 @@ and an example invocations specifying a different environment than what
 the test needs::
 
     $ py.test -E stage2
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 1 items
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 5 items
 
+    test_mark_classlevel.py ..
+    test_server.py ..
     test_someenv.py s
 
-    ========================== 1 skipped in 0.02 seconds ===========================
+    =================== 4 passed, 1 skipped in 0.04 seconds ====================
 
 and here is one that specifies exactly the environment needed::
 
     $ py.test -E stage1
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 1 items
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 5 items
 
+    test_mark_classlevel.py ..
+    test_server.py ..
     test_someenv.py .
 
-    =========================== 1 passed in 0.02 seconds ===========================
+    ========================= 5 passed in 0.04 seconds =========================
 
 The ``--markers`` option always gives you a list of available markers::
 
     $ py.test --markers
     @pytest.mark.webtest: mark a test as a webtest.
 
     @pytest.mark.env(name): mark test to run only on named environment
 
     @pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform.
 
     @pytest.mark.xfail(*conditions, reason=None, run=True): mark the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
 
     @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.
 
     @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
 
     @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
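For reference, a minimal sketch of the kind of ``conftest.py`` that produces the behaviour above; the option wiring and attribute access are assumptions, not the verbatim example file::

    # hypothetical conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("-E", dest="env", action="store", metavar="NAME",
            help="only run tests matching the environment NAME.")

    def pytest_configure(config):
        # register the marker so it shows up in "--markers" output
        config.addinivalue_line("markers",
            "env(name): mark test to run only on named environment")

    def pytest_runtest_setup(item):
        envmarker = item.keywords.get("env", None)
        if envmarker is not None:
            envname = envmarker.args[0]
            if envname != item.config.option.env:
                pytest.skip("test requires env %r" % envname)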
@@ -2,18 +2,20 @@
 module containing parametrized tests testing cross-python
 serialization via the pickle module.
 """
-import py
+import py, pytest
 
 pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7', 'python2.8']
 
 def pytest_generate_tests(metafunc):
+    # we parametrize all "python1" and "python2" arguments to iterate
+    # over the python interpreters of our list above - the actual
+    # setup and lookup of interpreters happens in the python1/python2
+    # factories respectively.
     for arg in metafunc.funcargnames:
-        if arg.startswith("python"):
+        if arg in ("python1", "python2"):
             metafunc.parametrize(arg, pythonlist, indirect=True)
-        elif arg == "obj":
-            metafunc.parametrize("obj", metafunc.function.multiarg.kwargs['obj'])
 
-@py.test.mark.multiarg(obj=[42, {}, {1:3},])
+@pytest.mark.parametrize("obj", [42, {}, {1:3},])
 def test_basic_objects(python1, python2, obj):
     python1.dumps(obj)
     python2.load_and_is_true("obj == %s" % obj)
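With ``indirect=True`` the parametrized value is not passed to the test function directly; it reaches the funcarg factory as ``request.param`` so the factory can perform the actual interpreter lookup per value. A sketch of the factory side, with assumed names::

    # hypothetical factory, e.g. in a conftest.py
    def pytest_funcarg__python1(request):
        # request.param is one of the strings from pythonlist
        return Python(request.param)  # Python() is an assumed helper class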
@@ -49,7 +49,7 @@ You can now run the test::
 
     $ py.test test_sample.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
 
     test_sample.py F
@@ -57,7 +57,7 @@ You can now run the test::
     ================================= FAILURES =================================
     _______________________________ test_answer ________________________________
 
-    mysetup = <conftest.MySetup instance at 0x1013145a8>
+    mysetup = <conftest.MySetup instance at 0x1012b2bd8>
 
         def test_answer(mysetup):
             app = mysetup.myapp()
@@ -122,12 +122,12 @@ Running it yields::
 
     $ py.test test_ssh.py -rs
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
 
     test_ssh.py s
     ========================= short test summary info ==========================
-    SKIP [1] /Users/hpk/tmp/doc-exec-167/conftest.py:22: specify ssh host with --ssh
+    SKIP [1] /Users/hpk/tmp/doc-exec-625/conftest.py:22: specify ssh host with --ssh
 
     ======================== 1 skipped in 0.02 seconds =========================
@@ -27,7 +27,7 @@ now execute the test specification::
 
     nonpython $ py.test test_simple.yml
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
 
     test_simple.yml .F
@@ -37,7 +37,7 @@ now execute the test specification::
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ==================== 1 failed, 1 passed in 0.09 seconds ====================
+    ==================== 1 failed, 1 passed in 0.10 seconds ====================
 
 You get one dot for the passing ``sub1: sub1`` check and one failure.
 Obviously in the above ``conftest.py`` you'll want to implement a more
@@ -56,7 +56,7 @@ reporting in ``verbose`` mode::
 
     nonpython $ py.test -v
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
     collecting ... collected 2 items
 
     test_simple.yml:1: usecase: ok PASSED
@@ -74,7 +74,7 @@ interesting to just look at the collection tree::
 
     nonpython $ py.test --collectonly
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <YamlFile 'test_simple.yml'>
       <YamlItem 'ok'>
@@ -17,8 +17,10 @@ simple "decorator" parametrization of a test function
 
 .. versionadded:: 2.2
 
-The builtin ``parametrize`` marker allows you to easily write generic
-test functions that will be invoked with multiple input/output values::
+The builtin ``pytest.mark.parametrize`` decorator directly enables
+parametrization of arguments for a test function. Here is an example
+of a test function that checks that processing some input
+results in the expected output::
 
     # content of test_expectation.py
     import pytest
@@ -30,14 +32,14 @@ test functions that will be invoked with multiple input/output values::
     def test_eval(input, expected):
         assert eval(input) == expected
 
-Here we parametrize two arguments of the test function so that the test
+We parametrize two arguments of the test function so that the test
 function is called three times. Let's run it::
 
     $ py.test -q
     collecting ... collected 3 items
     ..F
-    =================================== FAILURES ===================================
-    ______________________________ test_eval[6*9-42] _______________________________
+    ================================= FAILURES =================================
+    ____________________________ test_eval[6*9-42] _____________________________
 
     input = '6*9', expected = 42
@@ -51,7 +53,7 @@ function is called three times. Let's run it::
     E       assert 54 == 42
     E        + where 54 = eval('6*9')
 
-    test_expectation.py:9: AssertionError
+    test_expectation.py:8: AssertionError
     1 failed, 2 passed in 0.03 seconds
 
 As expected only one pair of input/output values fails the simple test function.
@@ -102,8 +104,8 @@ let's run the full monty::
     $ py.test -q --all
     collecting ... collected 5 items
     ....F
-    =================================== FAILURES ===================================
-    _______________________________ test_compute[4] ________________________________
+    ================================= FAILURES =================================
+    _____________________________ test_compute[4] ______________________________
 
     param1 = 4
@@ -151,20 +153,20 @@ only have to work a bit to construct the correct arguments for pytest's
 this is a fully self-contained example which you can run with::
 
     $ py.test test_scenarios.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
 
     test_scenarios.py ..
 
-    =========================== 2 passed in 0.02 seconds ===========================
+    ========================= 2 passed in 0.02 seconds =========================
 
 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
 
 
     $ py.test --collectonly test_scenarios.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <Module 'test_scenarios.py'>
       <Class 'TestSampleWithScenarios'>
@@ -172,7 +174,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
           <Function 'test_demo[basic]'>
           <Function 'test_demo[advanced]'>
 
-    =============================== in 0.01 seconds ===============================
+    ============================= in 0.01 seconds =============================
 
 Deferring the setup of parametrized resources
 ---------------------------------------------------
@@ -219,24 +221,24 @@ creates a database object for the actual test invocations::
 Let's first see how it looks at collection time::
 
     $ py.test test_backends.py --collectonly
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <Module 'test_backends.py'>
       <Function 'test_db_initialized[d1]'>
       <Function 'test_db_initialized[d2]'>
 
-    =============================== in 0.01 seconds ===============================
+    ============================= in 0.01 seconds =============================
 
 And then when we run the test::
 
     $ py.test -q test_backends.py
     collecting ... collected 2 items
     .F
-    =================================== FAILURES ===================================
-    ___________________________ test_db_initialized[d2] ____________________________
+    ================================= FAILURES =================================
+    _________________________ test_db_initialized[d2] __________________________
 
-    db = <conftest.DB2 instance at 0x1013195f0>
+    db = <conftest.DB2 instance at 0x10150ab90>
 
         def test_db_initialized(db):
             # a dummy test
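A sketch of a ``conftest.py`` consistent with the collection tree and failure output above; class bodies are elided and details may differ from the real example file::

    # hypothetical conftest.py
    def pytest_generate_tests(metafunc):
        if "db" in metafunc.funcargnames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)

    class DB1:
        "one database object"
    class DB2:
        "alternative database object"

    def pytest_funcarg__db(request):
        # defer the expensive object creation to test setup time
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()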
@@ -290,10 +292,10 @@ argument sets to use for each test function. Let's run it::
     $ py.test -q
     collecting ... collected 3 items
     F..
-    =================================== FAILURES ===================================
-    __________________________ TestClass.test_equals[1-2] __________________________
+    ================================= FAILURES =================================
+    ________________________ TestClass.test_equals[1-2] ________________________
 
-    self = <test_parametrize.TestClass instance at 0x1013158c0>, a = 1, b = 2
+    self = <test_parametrize.TestClass instance at 0x101509638>, a = 1, b = 2
 
         def test_equals(self, a, b):
     >       assert a == b
@@ -302,13 +304,13 @@ argument sets to use for each test function. Let's run it::
     test_parametrize.py:18: AssertionError
     1 failed, 2 passed in 0.03 seconds
 
-Checking serialization between Python interpreters
+Indirect parametrization with multiple resources
 --------------------------------------------------------------
 
 Here is a stripped down real-life example of using parametrized
 testing for testing serialization, invoking different python interpreters.
 We define a ``test_basic_objects`` function which is to be run
-with different sets of arguments for its three arguments::
+with different sets of arguments for its three arguments:
 
 * ``python1``: first python interpreter, run to pickle-dump an object to a file
 * ``python2``: second interpreter, run to pickle-load an object from a file
@@ -316,9 +318,12 @@ with different sets of arguments for its three arguments::
 
 .. literalinclude:: multipython.py
 
-Running it (with Python-2.4 through to Python2.7 installed)::
+Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
 
-    . $ py.test -q multipython.py
+    . $ py.test -rs -q multipython.py
     collecting ... collected 75 items
     ssssssssssssssssss.........ssssss.........ssssss.........ssssssssssssssssss
-    27 passed, 48 skipped in 4.87 seconds
+    ========================= short test summary info ==========================
+    SKIP [24] /Users/hpk/p/pytest/doc/example/multipython.py:36: 'python2.8' not found
+    SKIP [24] /Users/hpk/p/pytest/doc/example/multipython.py:36: 'python2.4' not found
+    27 passed, 48 skipped in 3.03 seconds
@@ -43,7 +43,7 @@ then the test collection looks like this::
 
     $ py.test --collectonly
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <Module 'check_myapp.py'>
       <Class 'CheckMyApp'>
@@ -82,7 +82,7 @@ You can always peek at the collection tree without running tests like this::
 
     . $ py.test --collectonly pythoncollection.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 3 items
     <Module 'pythoncollection.py'>
       <Function 'test_function'>
@@ -13,7 +13,7 @@ get on the terminal - we are working on that):
 
     assertion $ py.test failure_demo.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 39 items
 
     failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
 
@@ -30,7 +30,7 @@ get on the terminal - we are working on that):
     failure_demo.py:15: AssertionError
     _________________________ TestFailing.test_simple __________________________
 
-    self = <failure_demo.TestFailing object at 0x10134bc50>
+    self = <failure_demo.TestFailing object at 0x1013552d0>
 
         def test_simple(self):
             def f():
@@ -40,13 +40,13 @@ get on the terminal - we are working on that):
 
     >       assert f() == g()
     E       assert 42 == 43
-    E        + where 42 = <function f at 0x101322320>()
-    E        + and 43 = <function g at 0x101322398>()
+    E        + where 42 = <function f at 0x101514f50>()
+    E        + and 43 = <function g at 0x101516050>()
 
     failure_demo.py:28: AssertionError
     ____________________ TestFailing.test_simple_multiline _____________________
 
-    self = <failure_demo.TestFailing object at 0x10134b150>
+    self = <failure_demo.TestFailing object at 0x101355950>
 
         def test_simple_multiline(self):
             otherfunc_multi(
 
@@ -66,19 +66,19 @@ get on the terminal - we are working on that):
     failure_demo.py:11: AssertionError
     ___________________________ TestFailing.test_not ___________________________
 
-    self = <failure_demo.TestFailing object at 0x10134b710>
+    self = <failure_demo.TestFailing object at 0x101355ad0>
 
         def test_not(self):
             def f():
                 return 42
     >       assert not f()
     E       assert not 42
-    E        + where 42 = <function f at 0x101322398>()
+    E        + where 42 = <function f at 0x101514f50>()
 
     failure_demo.py:38: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_text _________________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10134be10>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1013559d0>
 
         def test_eq_text(self):
     >       assert 'spam' == 'eggs'
@@ -89,7 +89,7 @@ get on the terminal - we are working on that):
     failure_demo.py:42: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_similar_text _____________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101347110>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350dd0>
 
         def test_eq_similar_text(self):
     >       assert 'foo 1 bar' == 'foo 2 bar'
 
@@ -102,7 +102,7 @@ get on the terminal - we are working on that):
     failure_demo.py:45: AssertionError
     ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343d50>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350d10>
 
         def test_eq_multiline_text(self):
     >       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
 
@@ -115,7 +115,7 @@ get on the terminal - we are working on that):
     failure_demo.py:48: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_long_text _______________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10134b210>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350cd0>
 
         def test_eq_long_text(self):
             a = '1'*100 + 'a' + '2'*100
 
@@ -132,7 +132,7 @@ get on the terminal - we are working on that):
     failure_demo.py:53: AssertionError
     _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343d90>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350f50>
 
         def test_eq_long_text_multiline(self):
             a = '1\n'*100 + 'a' + '2\n'*100
@@ -156,7 +156,7 @@ get on the terminal - we are working on that):
     failure_demo.py:58: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_list _________________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343dd0>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f350>
 
         def test_eq_list(self):
     >       assert [0, 1, 2] == [0, 1, 3]
 
@@ -166,7 +166,7 @@ get on the terminal - we are working on that):
     failure_demo.py:61: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_list_long _______________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343b90>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134fc10>
 
         def test_eq_list_long(self):
             a = [0]*100 + [1] + [3]*100
 
@@ -178,7 +178,7 @@ get on the terminal - we are working on that):
     failure_demo.py:66: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_dict _________________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343210>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f2d0>
 
         def test_eq_dict(self):
     >       assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
 
@@ -191,7 +191,7 @@ get on the terminal - we are working on that):
     failure_demo.py:69: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_set __________________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343990>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f110>
 
         def test_eq_set(self):
     >       assert set([0, 10, 11, 12]) == set([0, 20, 21])
 
@@ -207,7 +207,7 @@ get on the terminal - we are working on that):
     failure_demo.py:72: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_longer_list ______________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343590>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f510>
 
         def test_eq_longer_list(self):
     >       assert [1,2] == [1,2,3]
@@ -217,7 +217,7 @@ get on the terminal - we are working on that):
     failure_demo.py:75: AssertionError
     _________________ TestSpecialisedExplanations.test_in_list _________________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343e50>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f6d0>
 
         def test_in_list(self):
     >       assert 1 in [0, 2, 3, 4, 5]
 
@@ -226,7 +226,7 @@ get on the terminal - we are working on that):
     failure_demo.py:78: AssertionError
     __________ TestSpecialisedExplanations.test_not_in_text_multiline __________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10133bb10>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152c490>
 
         def test_not_in_text_multiline(self):
             text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
 
@@ -244,7 +244,7 @@ get on the terminal - we are working on that):
     failure_demo.py:82: AssertionError
     ___________ TestSpecialisedExplanations.test_not_in_text_single ____________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10133b990>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152cfd0>
 
         def test_not_in_text_single(self):
             text = 'single foo line'
 
@@ -257,7 +257,7 @@ get on the terminal - we are working on that):
     failure_demo.py:86: AssertionError
     _________ TestSpecialisedExplanations.test_not_in_text_single_long _________
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10133bbd0>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152c090>
 
         def test_not_in_text_single_long(self):
             text = 'head ' * 50 + 'foo ' + 'tail ' * 20
 
@@ -270,7 +270,7 @@ get on the terminal - we are working on that):
     failure_demo.py:90: AssertionError
     ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
 
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343510>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152cb90>
 
         def test_not_in_text_single_long_term(self):
             text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@@ -289,7 +289,7 @@ get on the terminal - we are working on that):
         i = Foo()
     >       assert i.b == 2
     E       assert 1 == 2
-    E        + where 1 = <failure_demo.Foo object at 0x10133b390>.b
+    E        + where 1 = <failure_demo.Foo object at 0x10152c350>.b
 
     failure_demo.py:101: AssertionError
     _________________________ test_attribute_instance __________________________
 
@@ -299,8 +299,8 @@ get on the terminal - we are working on that):
         b = 1
     >       assert Foo().b == 2
     E       assert 1 == 2
-    E        + where 1 = <failure_demo.Foo object at 0x10133b250>.b
-    E        +   where <failure_demo.Foo object at 0x10133b250> = <class 'failure_demo.Foo'>()
+    E        + where 1 = <failure_demo.Foo object at 0x10134fe90>.b
+    E        +   where <failure_demo.Foo object at 0x10134fe90> = <class 'failure_demo.Foo'>()
 
     failure_demo.py:107: AssertionError
     __________________________ test_attribute_failure __________________________
@@ -316,7 +316,7 @@ get on the terminal - we are working on that):
     failure_demo.py:116:
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 
-    self = <failure_demo.Foo object at 0x10133bd50>
+    self = <failure_demo.Foo object at 0x10152c610>
 
         def _get_b(self):
     >       raise Exception('Failed to get attrib')
 
@@ -332,15 +332,15 @@ get on the terminal - we are working on that):
         b = 2
     >       assert Foo().b == Bar().b
     E       assert 1 == 2
-    E        + where 1 = <failure_demo.Foo object at 0x10133bb50>.b
-    E        +   where <failure_demo.Foo object at 0x10133bb50> = <class 'failure_demo.Foo'>()
-    E        + and 2 = <failure_demo.Bar object at 0x10133b1d0>.b
-    E        +   where <failure_demo.Bar object at 0x10133b1d0> = <class 'failure_demo.Bar'>()
+    E        + where 1 = <failure_demo.Foo object at 0x10152c950>.b
+    E        +   where <failure_demo.Foo object at 0x10152c950> = <class 'failure_demo.Foo'>()
+    E        + and 2 = <failure_demo.Bar object at 0x10152c250>.b
+    E        +   where <failure_demo.Bar object at 0x10152c250> = <class 'failure_demo.Bar'>()
 
     failure_demo.py:124: AssertionError
     __________________________ TestRaises.test_raises __________________________
 
-    self = <failure_demo.TestRaises instance at 0x1013697e8>
+    self = <failure_demo.TestRaises instance at 0x1015219e0>
 
         def test_raises(self):
             s = 'qwe'
@@ -352,10 +352,10 @@ get on the terminal - we are working on that):
     >       int(s)
     E       ValueError: invalid literal for int() with base 10: 'qwe'
 
-    <0-codegen /Users/hpk/p/pytest/_pytest/python.py:833>:1: ValueError
+    <0-codegen /Users/hpk/p/pytest/_pytest/python.py:957>:1: ValueError
     ______________________ TestRaises.test_raises_doesnt _______________________
 
-    self = <failure_demo.TestRaises instance at 0x101372a70>
+    self = <failure_demo.TestRaises instance at 0x1013794d0>
 
         def test_raises_doesnt(self):
     >       raises(IOError, "int('3')")
 
@@ -364,7 +364,7 @@ get on the terminal - we are working on that):
     failure_demo.py:136: Failed
     __________________________ TestRaises.test_raise ___________________________
 
-    self = <failure_demo.TestRaises instance at 0x10136a908>
+    self = <failure_demo.TestRaises instance at 0x10151f6c8>
 
         def test_raise(self):
     >       raise ValueError("demo error")
@@ -373,7 +373,7 @@ get on the terminal - we are working on that):
     failure_demo.py:139: ValueError
     ________________________ TestRaises.test_tupleerror ________________________
 
-    self = <failure_demo.TestRaises instance at 0x10136c710>
+    self = <failure_demo.TestRaises instance at 0x1013733f8>
 
         def test_tupleerror(self):
     >       a,b = [1]
 
@@ -382,7 +382,7 @@ get on the terminal - we are working on that):
     failure_demo.py:142: ValueError
     ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
 
-    self = <failure_demo.TestRaises instance at 0x101365488>
+    self = <failure_demo.TestRaises instance at 0x10136e170>
 
         def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
             l = [1,2,3]
 
@@ -395,7 +395,7 @@ get on the terminal - we are working on that):
     l is [1, 2, 3]
     ________________________ TestRaises.test_some_error ________________________
 
-    self = <failure_demo.TestRaises instance at 0x101367248>
+    self = <failure_demo.TestRaises instance at 0x10136ef38>
 
         def test_some_error(self):
     >       if namenotexi:
@@ -423,7 +423,7 @@ get on the terminal - we are working on that):
     <2-codegen 'abc-123' /Users/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
     ____________________ TestMoreErrors.test_complex_error _____________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x101380f38>
+    self = <failure_demo.TestMoreErrors instance at 0x101520638>
 
         def test_complex_error(self):
             def f():
 
@@ -452,7 +452,7 @@ get on the terminal - we are working on that):
     failure_demo.py:5: AssertionError
     ___________________ TestMoreErrors.test_z1_unpack_error ____________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x101367f80>
+    self = <failure_demo.TestMoreErrors instance at 0x10136bcb0>
 
         def test_z1_unpack_error(self):
             l = []
 
@@ -462,7 +462,7 @@ get on the terminal - we are working on that):
     failure_demo.py:179: ValueError
     ____________________ TestMoreErrors.test_z2_type_error _____________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x101363dd0>
+    self = <failure_demo.TestMoreErrors instance at 0x10136a440>
 
         def test_z2_type_error(self):
             l = 3
@@ -472,19 +472,19 @@ get on the terminal - we are working on that):
     failure_demo.py:183: TypeError
     ______________________ TestMoreErrors.test_startswith ______________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x101364bd8>
+    self = <failure_demo.TestMoreErrors instance at 0x101368290>
 
         def test_startswith(self):
             s = "123"
             g = "456"
     >       assert s.startswith(g)
-    E       assert <built-in method startswith of str object at 0x1013524e0>('456')
-    E        + where <built-in method startswith of str object at 0x1013524e0> = '123'.startswith
+    E       assert <built-in method startswith of str object at 0x101354030>('456')
+    E        + where <built-in method startswith of str object at 0x101354030> = '123'.startswith
 
     failure_demo.py:188: AssertionError
     __________________ TestMoreErrors.test_startswith_nested ___________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x101363fc8>
+    self = <failure_demo.TestMoreErrors instance at 0x101368f38>
 
         def test_startswith_nested(self):
             def f():
@@ -492,15 +492,15 @@ get on the terminal - we are working on that):
             def g():
                 return "456"
     >       assert f().startswith(g())
-    E       assert <built-in method startswith of str object at 0x1013524e0>('456')
-    E        + where <built-in method startswith of str object at 0x1013524e0> = '123'.startswith
-    E        +   where '123' = <function f at 0x10132c500>()
-    E        +   and '456' = <function g at 0x10132c8c0>()
+    E       assert <built-in method startswith of str object at 0x101354030>('456')
+    E        + where <built-in method startswith of str object at 0x101354030> = '123'.startswith
+    E        +   where '123' = <function f at 0x10136c578>()
+    E        +   and '456' = <function g at 0x10136c5f0>()
 
     failure_demo.py:195: AssertionError
     _____________________ TestMoreErrors.test_global_func ______________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x1013696c8>
+    self = <failure_demo.TestMoreErrors instance at 0x10136aef0>
 
         def test_global_func(self):
     >       assert isinstance(globf(42), float)
@@ -510,18 +510,18 @@ get on the terminal - we are working on that):
     failure_demo.py:198: AssertionError
     _______________________ TestMoreErrors.test_instance _______________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x1013671b8>
+    self = <failure_demo.TestMoreErrors instance at 0x10151c440>
 
         def test_instance(self):
             self.x = 6*7
     >       assert self.x != 42
     E       assert 42 != 42
-    E        + where 42 = <failure_demo.TestMoreErrors instance at 0x1013671b8>.x
+    E        + where 42 = <failure_demo.TestMoreErrors instance at 0x10151c440>.x
 
     failure_demo.py:202: AssertionError
     _______________________ TestMoreErrors.test_compare ________________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x101366560>
+    self = <failure_demo.TestMoreErrors instance at 0x101373a70>
 
         def test_compare(self):
     >       assert globf(10) < 5
@@ -531,7 +531,7 @@ get on the terminal - we are working on that):
     failure_demo.py:205: AssertionError
     _____________________ TestMoreErrors.test_try_finally ______________________
 
-    self = <failure_demo.TestMoreErrors instance at 0x1013613b0>
+    self = <failure_demo.TestMoreErrors instance at 0x101363c68>
 
         def test_try_finally(self):
             x = 1
@@ -540,4 +540,4 @@ get on the terminal - we are working on that):
     E       assert 1 == 0
 
     failure_demo.py:210: AssertionError
-    ======================== 39 failed in 0.39 seconds =========================
+    ======================== 39 failed in 0.41 seconds =========================
@@ -109,13 +109,13 @@ directory with the above conftest.py::
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     gw0 I
     gw0 [0]
 
     scheduling tests via LoadScheduling
 
-    ============================= in 0.48 seconds =============================
+    ============================= in 0.71 seconds =============================
 
 .. _`excontrolskip`:
@@ -156,12 +156,12 @@ and when running it will see a skipped "slow" test::
 
     $ py.test -rs    # "-rs" means report details on the little 's'
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
 
     test_module.py .s
     ========================= short test summary info ==========================
-    SKIP [1] /Users/hpk/tmp/doc-exec-172/conftest.py:9: need --runslow option to run
+    SKIP [1] /Users/hpk/tmp/doc-exec-630/conftest.py:9: need --runslow option to run
 
     =================== 1 passed, 1 skipped in 0.02 seconds ====================
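The ``conftest.py`` pattern behind this output looks roughly like the following sketch; the option name and skip message match the output above, the exact hook body is an assumption::

    # hypothetical conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")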
@@ -169,12 +169,12 @@ Or run it including the ``slow`` marked test::
 
     $ py.test --runslow
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
 
     test_module.py ..
 
-    ========================= 2 passed in 0.02 seconds =========================
+    ========================= 2 passed in 0.62 seconds =========================
 
 Writing well integrated assertion helpers
 --------------------------------------------------
@@ -261,7 +261,7 @@ which will add the string to the test header accordingly::
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     project deps: mylib-1.1
     collecting ... collected 0 items
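The hook producing the extra header line is ``pytest_report_header``; a minimal sketch consistent with the output above::

    # hypothetical conftest.py
    def pytest_report_header(config):
        return "project deps: mylib-1.1"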
@@ -284,7 +284,7 @@ which will add info only when run with "--v"::
 
     $ py.test -v
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
     info1: did you know that ...
     did you?
     collecting ... collected 0 items
@@ -295,7 +295,7 @@ and nothing when run plainly::
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 0 items
 
     ============================= in 0.00 seconds =============================
@@ -326,3 +326,14 @@ out which tests are slowest. Let's make an artificial test suite::
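The suite itself sits outside this hunk; a minimal sketch consistent with the durations reported below (the function names match the output, the sleep values are assumptions)::

    # content of test_some_are_slow.py -- illustrative sketch
    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)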
Now we can profile which test functions execute slowest::

$ py.test --durations=3
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 3 items

test_some_are_slow.py ...

========================= slowest 3 test durations =========================
0.20s call test_some_are_slow.py::test_funcslow2
0.10s call test_some_are_slow.py::test_funcslow1
0.00s setup test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.32 seconds =========================
@@ -61,14 +61,14 @@ py.test will discover and call the factory named
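The factory and test function sit above this hunk; a sketch consistent with the failure below, using the ``pytest_funcarg__`` naming convention of this release::

    # content of test_simplefactory.py -- illustrative sketch
    def pytest_funcarg__myfuncarg(request):
        return 42                  # the value injected into the test below

    def test_function(myfuncarg):
        assert myfuncarg == 17     # fails: myfuncarg is 42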
Running the test looks like this::

$ py.test test_simplefactory.py
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 1 items

test_simplefactory.py F

=================================== FAILURES ===================================
________________________________ test_function _________________________________
================================= FAILURES =================================
______________________________ test_function _______________________________

myfuncarg = 42

@@ -77,7 +77,7 @@ Running the test looks like this::
E   assert 42 == 17

test_simplefactory.py:5: AssertionError
=========================== 1 failed in 0.02 seconds ===========================
========================= 1 failed in 0.03 seconds =========================

This means that indeed the test function was called with a ``myfuncarg``
argument value of ``42`` and the assert fails. Here is how py.test
@@ -166,14 +166,14 @@ hook to generate several calls to the same test function::
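The hook itself sits above this hunk; a sketch consistent with the run below, parametrizing ``numiter`` over ``range(10)`` with the new ``metafunc.parametrize()`` API announced in this release::

    # content of test_example.py -- illustrative sketch
    def pytest_generate_tests(metafunc):
        if "numiter" in metafunc.funcargnames:
            metafunc.parametrize("numiter", range(10))

    def test_func(numiter):
        assert numiter < 9         # fails only for the numiter=9 invocation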
Running this will generate ten invocations of ``test_func`` passing in each of the items in the list of ``range(10)``::

$ py.test test_example.py
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 10 items

test_example.py .........F

=================================== FAILURES ===================================
_________________________________ test_func[9] _________________________________
================================= FAILURES =================================
_______________________________ test_func[9] _______________________________

numiter = 9

@@ -182,15 +182,15 @@ Running this will generate ten invocations of ``test_func`` passing in each of t
E   assert 9 < 9

test_example.py:6: AssertionError
====================== 1 failed, 9 passed in 0.07 seconds ======================
==================== 1 failed, 9 passed in 0.05 seconds ====================

Obviously, only when ``numiter`` has the value of ``9`` does the test fail. Note that the ``pytest_generate_tests(metafunc)`` hook is called during
the test collection phase, which is separate from the actual test running.
Let's just look at what is collected::

$ py.test --collectonly test_example.py
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 10 items
<Module 'test_example.py'>
  <Function 'test_func[0]'>

@@ -204,19 +204,19 @@ Let's just look at what is collected::
  <Function 'test_func[8]'>
  <Function 'test_func[9]'>

=============================== in 0.01 seconds ===============================
============================= in 0.01 seconds =============================

If you want to select only the run with the value ``7`` you could do::

$ py.test -v -k 7 test_example.py  # or -k test_func[7]
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8 -- /Users/hpk/venv/1/bin/python
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
collecting ... collected 10 items

test_example.py:5: test_func[7] PASSED

========================= 9 tests deselected by '-k7' ==========================
==================== 1 passed, 9 deselected in 0.01 seconds ====================
======================= 9 tests deselected by '-k7' ========================
================== 1 passed, 9 deselected in 0.02 seconds ==================

You might want to look at :ref:`more parametrization examples <paramexamples>`.
@@ -22,10 +22,9 @@ Installation options::

To check that you have installed the correct version::

$ py.test --version
This is py.test version 2.1.3, imported from /Users/hpk/p/pytest/pytest.pyc
This is py.test version 2.2.0, imported from /Users/hpk/p/pytest/pytest.pyc
setuptools registered plugins:
pytest-cov-1.4 at /Users/hpk/venv/0/lib/python2.7/site-packages/pytest_cov.pyc
pytest-xdist-1.6 at /Users/hpk/venv/0/lib/python2.7/site-packages/xdist/plugin.pyc
pytest-xdist-1.7.dev1 at /Users/hpk/p/pytest-xdist/xdist/plugin.pyc

If you get an error, check out :ref:`installation issues`.
@@ -47,7 +46,7 @@ That's it. You can execute the test function now::
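The sample file sits above this hunk; as a sketch, the getting-started example matching the failure output below::

    # content of test_sample.py -- illustrative sketch
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5        # fails: func(3) returns 4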
$ py.test
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.1.3
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 1 items

test_sample.py F

@@ -61,7 +60,7 @@ That's it. You can execute the test function now::
E    + where 4 = func(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.04 seconds =========================

py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``.
@@ -127,7 +126,7 @@ run the module by passing its filename::
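The class itself sits above this hunk; a sketch consistent with the failure below (``test_one`` is an assumption, only ``test_two`` appears in the output)::

    # content of test_class.py -- illustrative sketch
    class TestClass:
        def test_one(self):
            x = "this"
            assert "h" in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, "check")   # fails: str has no attribute "check"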
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass instance at 0x1013167a0>
self = <test_class.TestClass instance at 0x10150a170>

    def test_two(self):
        x = "hello"
@@ -164,7 +163,7 @@ before performing the test function call. Let's just run it::
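The test file sits above this hunk; the traceback below implies something very close to::

    # content of test_tmpdir.py -- reconstructed sketch
    def test_needsfiles(tmpdir):
        print tmpdir               # Python 2 print, as in the captured stdout below
        assert 0                   # deliberate failure so the tmpdir path is shown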
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('/Users/hpk/tmp/pytest-93/test_needsfiles0')
tmpdir = local('/Users/hpk/tmp/pytest-1595/test_needsfiles0')

    def test_needsfiles(tmpdir):
        print tmpdir

@@ -173,8 +172,8 @@ before performing the test function call. Let's just run it::

test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/Users/hpk/tmp/pytest-93/test_needsfiles0
1 failed in 0.04 seconds
/Users/hpk/tmp/pytest-1595/test_needsfiles0
1 failed in 0.15 seconds

Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.
@@ -27,13 +27,13 @@ Welcome to pytest!

- (new in 2.2) :ref:`durations`
- (much improved in 2.2) :ref:`marking and test selection <mark>`
- (improved in 2.2) :ref:`parametrized test functions <parametrized test functions>`
- advanced :ref:`skip and xfail`
- unique :ref:`dependency injection through funcargs <funcargs>`
- can :ref:`distribute tests to multiple CPUs <xdistcpu>` through :ref:`xdist plugin <xdist>`
- can :ref:`continuously re-run failing tests <looponfailing>`
- many :ref:`builtin helpers <pytest helpers>`
- flexible :ref:`Python test discovery`
- unique :ref:`dependency injection through funcargs <funcargs>`
- :ref:`parametrized test functions <parametrized test functions>`

- **integrates many common testing methods**
doc/mark.txt
@@ -7,7 +7,7 @@ Marking test functions with attributes

.. currentmodule:: _pytest.mark

By using the ``pytest.mark`` helper you can easily set
metadata on your test functions. To begin with, there are
metadata on your test functions. There are
some builtin markers, for example:

* :ref:`skipif <skipif>` - skip a test function if a certain condition is met

@@ -16,174 +16,10 @@ some builtin markers, for example:
* :ref:`parametrize <parametrizemark>` to perform multiple calls
  to the same test function.

It's also easy to create custom markers or to apply markers
to whole test classes or modules.
It's easy to create custom markers or to apply markers
to whole test classes or modules. See :ref:`mark examples` for examples
which also serve as documentation.
marking test functions and selecting them for a run
----------------------------------------------------

You can "mark" a test function with custom metadata like this::

    # content of test_server.py

    import pytest
    @pytest.mark.webtest
    def test_send_http():
        pass # perform some webtest test for your app
    def test_something_quick():
        pass

.. versionadded:: 2.2

You can then restrict a test run to only run tests marked with ``webtest``::

$ py.test -v -m webtest
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6 -- /Users/hpk/venv/0/bin/python
collecting ... collected 2 items

test_server.py:3: test_send_http PASSED

===================== 1 tests deselected by "-m 'webtest'" =====================
==================== 1 passed, 1 deselected in 0.01 seconds ====================

Or the inverse, running all tests except the webtest ones::

$ py.test -v -m "not webtest"
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6 -- /Users/hpk/venv/0/bin/python
collecting ... collected 2 items

test_server.py:6: test_something_quick PASSED

=================== 1 tests deselected by "-m 'not webtest'" ===================
==================== 1 passed, 1 deselected in 0.01 seconds ====================
Registering markers
-------------------------------------

.. versionadded:: 2.2

.. ini-syntax for custom markers:

Registering markers for your test suite is simple::

    # content of pytest.ini
    [pytest]
    markers =
        webtest: mark a test as a webtest.

You can ask which markers exist for your test suite - the list includes our just-defined ``webtest`` marker::
$ py.test --markers
@pytest.mark.webtest: mark a test as a webtest.

@pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform.

@pytest.mark.xfail(*conditions, reason=None, run=True): mark the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.

For an example on how to add and work with markers from a plugin, see
:ref:`adding a custom marker from a plugin`.

.. note::

    It is recommended to explicitly register markers so that:

    * there is one place in your test suite defining your markers

    * asking for existing markers via ``py.test --markers`` gives good output

    * typos in function markers are treated as an error if you use
      the ``--strict`` option. Later versions of py.test are probably
      going to treat non-registered markers as an error.
.. _`scoped-marking`:

Marking whole classes or modules
----------------------------------------------------

If you are programming with Python 2.6 or later you may use ``pytest.mark``
decorators with classes to apply markers to all of their test methods::

    # content of test_mark_classlevel.py
    import pytest
    @pytest.mark.webtest
    class TestClass:
        def test_startup(self):
            pass
        def test_startup_and_more(self):
            pass

This is equivalent to directly applying the decorator to the
two test functions.

To remain backward-compatible with Python 2.4 you can also set a
``pytestmark`` attribute on a TestClass like this::

    import pytest

    class TestClass:
        pytestmark = pytest.mark.webtest

or if you need to use multiple markers you can use a list::

    import pytest

    class TestClass:
        pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]

You can also set a module level marker::

    import pytest
    pytestmark = pytest.mark.webtest

in which case it will be applied to all functions and
methods defined in the module.

Using ``-k TEXT`` to select tests
----------------------------------------------------
You can use the ``-k`` command line option to only run tests with names that match the given argument::

$ py.test -k send_http  # running with the above defined examples
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
collecting ... collected 4 items

test_server.py .

===================== 3 tests deselected by '-ksend_http' ======================
==================== 1 passed, 3 deselected in 0.02 seconds ====================

And you can also run all tests except the ones that match the keyword::

$ py.test -k-send_http
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
collecting ... collected 4 items

test_mark_classlevel.py ..
test_server.py .

===================== 1 tests deselected by '-k-send_http' =====================
==================== 3 passed, 1 deselected in 0.03 seconds ====================

Or to only select the class::

$ py.test -kTestClass
============================= test session starts ==============================
platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
collecting ... collected 4 items

test_mark_classlevel.py ..

===================== 2 tests deselected by '-kTestClass' ======================
==================== 2 passed, 2 deselected in 0.02 seconds ====================

API reference for mark related objects
------------------------------------------------
@@ -39,10 +39,10 @@ will be undone.

.. background check:
    $ py.test
    =========================== test session starts ============================
    platform darwin -- Python 2.7.1 -- pytest-2.1.3
    platform darwin -- Python 2.7.1 -- pytest-2.2.0
    collecting ... collected 0 items

    ============================= in 0.00 seconds =============================
    ============================= in 0.20 seconds =============================
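The prose around this hunk (not shown) explains that every modification made through ``monkeypatch`` will be undone after the test finishes; as an illustrative sketch::

    import os

    def test_patched_cwd(monkeypatch):
        monkeypatch.setattr(os, "getcwd", lambda: "/tmp/sandbox")
        assert os.getcwd() == "/tmp/sandbox"
        # the patch is reverted automatically once the test finishes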
Method reference of the monkeypatch function argument
-----------------------------------------------------
@@ -130,7 +130,7 @@ Running it with the report-on-xfail option gives this output::
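``xfail_demo.py`` itself sits above this hunk; a sketch of two of its six expected failures, consistent with the report below::

    # excerpt sketch of xfail_demo.py
    import pytest
    xfail = pytest.mark.xfail

    @xfail
    def test_hello():
        assert 0

    @pytest.mark.xfail(reason="reason")
    def test_hello6():
        assert 0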
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.1.3
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 6 items

xfail_demo.py xxxxxx

@@ -147,7 +147,7 @@ Running it with the report-on-xfail option gives this output::
XFAIL xfail_demo.py::test_hello6
  reason: reason

======================== 6 xfailed in 0.11 seconds =========================
======================== 6 xfailed in 0.08 seconds =========================

.. _`evaluation of skipif/xfail conditions`:
@@ -23,8 +23,7 @@ Function arguments:

Test parametrization:

- `generating parametrized tests with funcargs`_ (uses deprecated
  ``addcall()`` API.
- `generating parametrized tests with funcargs`_ (uses the deprecated ``addcall()`` API).
- `test generators and cached setup`_
- `parametrizing tests, generalized`_ (blog post)
- `putting test-hooks into local or global plugins`_ (blog post)
@@ -28,7 +28,7 @@ Running this would result in a passed test except for the last
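The test file sits above this hunk; a sketch consistent with the traceback below (the intermediate assertions are assumptions)::

    # content of test_tmpdir.py -- illustrative sketch
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0                   # the deliberate failure mentioned above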
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.1.3
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 1 items

test_tmpdir.py F

@@ -36,7 +36,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________

tmpdir = local('/Users/hpk/tmp/pytest-94/test_create_file0')
tmpdir = local('/Users/hpk/tmp/pytest-1596/test_create_file0')

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")

@@ -47,7 +47,7 @@ Running this would result in a passed test except for the last
E   assert 0

test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.05 seconds =========================
========================= 1 failed in 0.20 seconds =========================

.. _`base temporary directory`:
@@ -24,7 +24,7 @@ Running it yields::
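The unittest-style file sits above this hunk; a sketch consistent with the output below (only the captured ``hello`` and the failing line are visible)::

    # content of test_unittest.py -- illustrative sketch
    import unittest

    class MyTest(unittest.TestCase):
        def setUp(self):
            print "hello"          # captured stdout shown in the report

        def test_method(self):
            x = 1
            self.assertEqual(x, 3) # fails with an AssertionError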
$ py.test test_unittest.py
=========================== test session starts ============================
platform darwin -- Python 2.7.1 -- pytest-2.1.3
platform darwin -- Python 2.7.1 -- pytest-2.2.0
collecting ... collected 1 items

test_unittest.py F

@@ -42,7 +42,7 @@ Running it yields::

test_unittest.py:8: AssertionError
----------------------------- Captured stdout ------------------------------
hello
========================= 1 failed in 0.04 seconds =========================
========================= 1 failed in 0.23 seconds =========================

.. _`unittest.py style`: http://docs.python.org/library/unittest.html
setup.py
@@ -24,7 +24,7 @@ def main():

        name='pytest',
        description='py.test: simple powerful testing with Python',
        long_description = long_description,
        version='2.2.0.dev11',
        version='2.2.0',
        url='http://pytest.org',
        license='MIT license',
        platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],