regenerating examples

holger krekel 2010-11-26 13:26:56 +01:00
parent ca72c162c8
commit f1fc6e5eb6
17 changed files with 190 additions and 170 deletions


@ -15,7 +15,7 @@ ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
regen:
COLUMNS=76 regendoc --update *.txt */*.txt
PYTHONDONTWRITEBYTECODE=1 COLUMNS=76 regendoc --update *.txt */*.txt
help:
@echo "Please use \`make <target>' where <target> is one of"


@ -23,21 +23,21 @@ assertion fails you will see the value of ``x``::
$ py.test test_assert1.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_assert1.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_assert1.py F
================================= FAILURES =================================
______________________________ test_function _______________________________
def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()
test_assert1.py:5: AssertionError
========================= 1 failed in 0.03 seconds =========================
========================= 1 failed in 0.02 seconds =========================
Reporting details about the failing assertion is achieved by re-evaluating
the assert expression and recording intermediate values.
@ -105,14 +105,14 @@ if you run this module::
$ py.test test_assert2.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_assert2.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_assert2.py F
================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@ -122,7 +122,7 @@ if you run this module::
E '1'
E Extra items in the right set:
E '5'
test_assert2.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================


@ -30,25 +30,25 @@ You can ask for available builtin or project-custom
captures writes to sys.stdout/sys.stderr and makes
them available successively via a ``capsys.readouterr()`` method
which returns a ``(out, err)`` tuple of captured snapshot strings.
capfd
captures writes to file descriptors 1 and 2 and makes
snapshotted ``(out, err)`` string tuples available
via the ``capfd.readouterr()`` method. If the underlying
platform does not have ``os.dup`` (e.g. Jython) tests using
this funcarg will automatically skip.
tmpdir
return a temporary directory path object
unique to each test function invocation,
created as a subdirectory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
monkeypatch
The returned ``monkeypatch`` funcarg provides these
helper methods to modify objects, dictionaries or os.environ::
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
@ -56,15 +56,15 @@ You can ask for available builtin or project-custom
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
All modifications will be undone when the requesting
test function has finished its execution. The ``raising``
parameter determines whether a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
recwarn
Return a WarningsRecorder instance that provides these methods:
* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
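The undo behaviour described for ``monkeypatch`` above can be illustrated with a minimal stand-in (``MiniMonkeyPatch`` is an invented name; this sketches the mechanism, not pytest's actual implementation):

```python
class MiniMonkeyPatch:
    """Sketch of an undo stack like the one monkeypatch keeps."""
    _missing = object()  # sentinel for "key was not present"

    def __init__(self):
        self._undo = []

    def setitem(self, mapping, name, value):
        # remember the previous state before modifying
        self._undo.append((mapping, name, mapping.get(name, self._missing)))
        mapping[name] = value

    def undo(self):
        # py.test runs the equivalent of this after the test finishes
        while self._undo:
            mapping, name, old = self._undo.pop()
            if old is self._missing:
                del mapping[name]
            else:
                mapping[name] = old
```

``setattr``, ``setenv`` and the other helpers work the same way, each pushing an entry onto the undo stack.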


@ -15,11 +15,11 @@ python test modules)::
py.test --doctest-modules
You can make these changes permanent in your project by
putting them into a conftest.py file like this::
putting them into a pytest.ini file like this::
# content of conftest.py
option_doctestmodules = True
option_doctestglob = "*.rst"
# content of pytest.ini
[pytest]
addopts = --doctest-modules
If you then have a text file like this::
@ -35,7 +35,7 @@ and another like this::
# content of mymodule.py
def something():
""" a doctest in a docstring
>>> something()
>>> something()
42
"""
return 42
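Since the docstring above is a regular doctest, it can also be checked with the stdlib ``doctest`` module, independently of py.test (a sketch):

```python
import doctest

def something():
    """ a doctest in a docstring
    >>> something()
    42
    """
    return 42

# parse and run the docstring example programmatically
parser = doctest.DocTestParser()
test = parser.get_doctest(something.__doc__, {"something": something},
                          "something", None, 0)
runner = doctest.DocTestRunner()
runner.run(test)
```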
@ -44,7 +44,9 @@ then you can just invoke ``py.test`` without command line options::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-66
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
============================= in 0.00 seconds =============================
mymodule.py .
========================= 1 passed in 0.03 seconds =========================


@ -49,15 +49,15 @@ You can now run the test::
$ py.test test_sample.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_sample.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_sample.py F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
mysetup = <conftest.MySetup instance at 0x16f5998>
mysetup = <conftest.MySetup instance at 0x2c88128>
def test_answer(mysetup):
app = mysetup.myapp()
@ -122,12 +122,12 @@ Running it yields::
$ py.test test_ssh.py -rs
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_ssh.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_ssh.py s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-107/conftest.py:22: specify ssh host with --ssh
SKIP [1] /tmp/doc-exec-474/conftest.py:22: specify ssh host with --ssh
======================== 1 skipped in 0.02 seconds =========================


@ -27,8 +27,8 @@ now execute the test specification::
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_simple.yml
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 2 items
test_simple.yml .F
@ -37,9 +37,7 @@ now execute the test specification::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= short test summary info ==========================
FAIL test_simple.yml::hello
==================== 1 failed, 1 passed in 0.06 seconds ====================
==================== 1 failed, 1 passed in 0.03 seconds ====================
You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@ -58,8 +56,8 @@ reporting in ``verbose`` mode::
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30 -- /home/hpk/venv/0/bin/python
test path 1: /home/hpk/p/pytest/doc/example/nonpython
platform linux2 -- Python 2.6.5 -- pytest-2.0.0 -- /home/hpk/venv/0/bin/python
collecting ... collected 2 items
test_simple.yml:1: usecase: ok PASSED
test_simple.yml:1: usecase: hello FAILED
@ -69,9 +67,7 @@ reporting in ``verbose`` mode::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= short test summary info ==========================
FAIL test_simple.yml::hello
==================== 1 failed, 1 passed in 0.06 seconds ====================
==================== 1 failed, 1 passed in 0.03 seconds ====================
While developing your custom test collection and execution it's also
interesting to just look at the collection tree::


@ -41,11 +41,12 @@ Running it means we run two tests for each test function, using
the respective settings::
$ py.test -q
collecting ... collected 4 items
F..F
================================= FAILURES =================================
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize.TestClass instance at 0x128a638>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x26ef2d8>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
@ -54,7 +55,7 @@ the respective settings::
test_parametrize.py:17: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize.TestClass instance at 0x1296440>, a = 3, b = 2
self = <test_parametrize.TestClass instance at 0x26fa758>, a = 3, b = 2
def test_zerodivision(self, a, b):
> pytest.raises(ZeroDivisionError, "a/b")
@ -97,11 +98,12 @@ for parametrizing test methods::
Running it gives similar results as before::
$ py.test -q test_parametrize2.py
collecting ... collected 4 items
F..F
================================= FAILURES =================================
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize2.TestClass instance at 0x1dbcc68>, a = 1, b = 2
self = <test_parametrize2.TestClass instance at 0x1e5e638>, a = 1, b = 2
@params([dict(a=1, b=2), dict(a=3, b=3), ])
def test_equals(self, a, b):
@ -111,7 +113,7 @@ Running it gives similar results as before::
test_parametrize2.py:19: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize2.TestClass instance at 0x1dd0488>, a = 3, b = 2
self = <test_parametrize2.TestClass instance at 0x1e6f560>, a = 3, b = 2
@params([dict(a=1, b=0), dict(a=3, b=2)])
def test_zerodivision(self, a, b):
@ -138,5 +140,6 @@ with different sets of arguments for its three arguments::
Running it (with Python 2.4 through Python 2.7 installed)::
. $ py.test -q multipython.py
collecting ... collected 75 items
....s....s....s....ssssss....s....s....s....ssssss....s....s....s....ssssss
48 passed, 27 skipped in 2.55 seconds
48 passed, 27 skipped in 2.74 seconds


@ -13,9 +13,8 @@ get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev38
collecting ...
collected 35 items
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 35 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@ -31,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x2c9da90>
self = <failure_demo.TestFailing object at 0x17d8750>
def test_simple(self):
def f():
@ -41,21 +40,21 @@ get on the terminal - we are working on that):
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x2c447d0>()
E + and 43 = <function g at 0x2c44cf8>()
E + where 42 = <function f at 0x17e2488>()
E + and 43 = <function g at 0x17e2140>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x2c9dc90>
self = <failure_demo.TestFailing object at 0x17d4390>
def test_simple_multiline(self):
otherfunc_multi(
42,
> 6*9)
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
@ -67,19 +66,19 @@ get on the terminal - we are working on that):
failure_demo.py:12: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x2c93f10>
self = <failure_demo.TestFailing object at 0x17d8cd0>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x2ca1050>()
E + where 42 = <function f at 0x17e25f0>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9d9d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x17e33d0>
def test_eq_text(self):
> assert 'spam' == 'eggs'
@ -90,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x2a04e90>
self = <failure_demo.TestSpecialisedExplanations object at 0x17ee990>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@ -103,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9d710>
self = <failure_demo.TestSpecialisedExplanations object at 0x17d8d10>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -116,7 +115,7 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9db10>
self = <failure_demo.TestSpecialisedExplanations object at 0x17e3990>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
@ -133,7 +132,7 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x2caf950>
self = <failure_demo.TestSpecialisedExplanations object at 0x17eee10>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
@ -157,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2caf590>
self = <failure_demo.TestSpecialisedExplanations object at 0x17e3490>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@ -167,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9e310>
self = <failure_demo.TestSpecialisedExplanations object at 0x17e35d0>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@ -179,7 +178,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9dc50>
self = <failure_demo.TestSpecialisedExplanations object at 0x17eef10>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
@ -192,7 +191,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2cafc10>
self = <failure_demo.TestSpecialisedExplanations object at 0x17f4d10>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@ -208,7 +207,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x2cba890>
self = <failure_demo.TestSpecialisedExplanations object at 0x17f4e10>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@ -218,7 +217,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2cba6d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1801690>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@ -233,7 +232,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x2c9d750>.b
E + where 1 = <failure_demo.Foo object at 0x17f48d0>.b
failure_demo.py:85: AssertionError
_________________________ test_attribute_instance __________________________
@ -243,8 +242,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x2cafdd0>.b
E + where <failure_demo.Foo object at 0x2cafdd0> = <class 'failure_demo.Foo'>()
E + where 1 = <failure_demo.Foo object at 0x17f4390>.b
E + where <failure_demo.Foo object at 0x17f4390> = <class 'failure_demo.Foo'>()
failure_demo.py:91: AssertionError
__________________________ test_attribute_failure __________________________
@ -257,10 +256,10 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
failure_demo.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x2cba790>
self = <failure_demo.Foo object at 0x17ee790>
def _get_b(self):
> raise Exception('Failed to get attrib')
@ -276,22 +275,22 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x2cba210>.b
E + where <failure_demo.Foo object at 0x2cba210> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x2cba850>.b
E + where <failure_demo.Bar object at 0x2cba850> = <class 'failure_demo.Bar'>()
E + where 1 = <failure_demo.Foo object at 0x17eed10>.b
E + where <failure_demo.Foo object at 0x17eed10> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x17eead0>.b
E + where <failure_demo.Bar object at 0x17eead0> = <class 'failure_demo.Bar'>()
failure_demo.py:108: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x2cc2560>
self = <failure_demo.TestRaises instance at 0x1808170>
def test_raises(self):
s = 'qwe'
> raises(TypeError, "int(s)")
failure_demo.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
@ -299,7 +298,7 @@ get on the terminal - we are working on that):
<0-codegen /home/hpk/p/pytest/_pytest/python.py:819>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x2cb6bd8>
self = <failure_demo.TestRaises instance at 0x17fd908>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@ -308,7 +307,7 @@ get on the terminal - we are working on that):
failure_demo.py:120: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x2cc4830>
self = <failure_demo.TestRaises instance at 0x1809440>
def test_raise(self):
> raise ValueError("demo error")
@ -317,7 +316,7 @@ get on the terminal - we are working on that):
failure_demo.py:123: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x2cc5560>
self = <failure_demo.TestRaises instance at 0x1806170>
def test_tupleerror(self):
> a,b = [1]
@ -326,7 +325,7 @@ get on the terminal - we are working on that):
failure_demo.py:126: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x2cc6248>
self = <failure_demo.TestRaises instance at 0x1806dd0>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@ -339,7 +338,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x2cc6f38>
self = <failure_demo.TestRaises instance at 0x180ab48>
def test_some_error(self):
> if namenotexi:
@ -357,8 +356,8 @@ get on the terminal - we are working on that):
py.std.sys.modules[name] = module
> module.foo()
failure_demo.py:149:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:149:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
@ -367,7 +366,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:146>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x2cc4050>
self = <failure_demo.TestMoreErrors instance at 0x18081b8>
def test_complex_error(self):
def f():
@ -376,16 +375,16 @@ get on the terminal - we are working on that):
return 43
> somefunc(f(), g())
failure_demo.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
x = 44, y = 43
def somefunc(x,y):
> otherfunc(x,y)
failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
@ -396,7 +395,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x2cc7ab8>
self = <failure_demo.TestMoreErrors instance at 0x180d6c8>
def test_z1_unpack_error(self):
l = []
@ -406,7 +405,7 @@ get on the terminal - we are working on that):
failure_demo.py:163: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x2ccb8c0>
self = <failure_demo.TestMoreErrors instance at 0x18114d0>
def test_z2_type_error(self):
l = 3
@ -416,19 +415,19 @@ get on the terminal - we are working on that):
failure_demo.py:167: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x2ccd5f0>
self = <failure_demo.TestMoreErrors instance at 0x18d1200>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x2c321b0>('456')
E + where <built-in method startswith of str object at 0x2c321b0> = '123'.startswith
E assert <built-in method startswith of str object at 0x177e240>('456')
E + where <built-in method startswith of str object at 0x177e240> = '123'.startswith
failure_demo.py:172: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x2ccbc20>
self = <failure_demo.TestMoreErrors instance at 0x1811908>
def test_startswith_nested(self):
def f():
@ -436,15 +435,15 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x2c321b0>('456')
E + where <built-in method startswith of str object at 0x2c321b0> = '123'.startswith
E + where '123' = <function f at 0x2c2d140>()
E + and '456' = <function g at 0x2cb00c8>()
E assert <built-in method startswith of str object at 0x177e240>('456')
E + where <built-in method startswith of str object at 0x177e240> = '123'.startswith
E + where '123' = <function f at 0x176f668>()
E + and '456' = <function g at 0x1800578>()
failure_demo.py:179: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x2cb69e0>
self = <failure_demo.TestMoreErrors instance at 0x17fd9e0>
def test_global_func(self):
> assert isinstance(globf(42), float)
@ -454,19 +453,19 @@ get on the terminal - we are working on that):
failure_demo.py:182: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x2cc6440>
self = <failure_demo.TestMoreErrors instance at 0x1806b90>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x2cc6440>.x
E + where 42 = <failure_demo.TestMoreErrors instance at 0x1806b90>.x
failure_demo.py:186: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x2dcc200>
self = <failure_demo.TestMoreErrors instance at 0x18d1f38>
def test_compare(self):
> assert globf(10) < 5
@ -476,7 +475,7 @@ get on the terminal - we are working on that):
failure_demo.py:189: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x2dce0e0>
self = <failure_demo.TestMoreErrors instance at 0x18d4cf8>
def test_try_finally(self):
x = 1


@ -7,6 +7,8 @@ basic patterns and examples
pass different values to a test function, depending on command line options
----------------------------------------------------------------------------
.. regendoc:wipe
Suppose we want to write a test that depends on a command line option.
Here is a basic pattern for achieving this::
@ -32,7 +34,8 @@ provide the ``cmdopt`` through a :ref:`function argument <funcarg>` factory::
Let's run this without supplying our new command line option::
$ py.test -q
$ py.test -q test_sample.py
collecting ... collected 1 items
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
@ -55,6 +58,7 @@ Let's run this without supplying our new command line option::
And now with supplying a command line option::
$ py.test -q --cmdopt=type2
collecting ... collected 1 items
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
@ -83,6 +87,8 @@ on real-life examples.
generating parameter combinations, depending on command line
----------------------------------------------------------------------------
.. regendoc:wipe
Let's say we want to execute a test with different parameters,
where the parameter range is determined by a command
line argument. Let's first write a simple computation test::
@ -112,13 +118,15 @@ Now we add a test configuration like this::
This means that we only run 2 tests if we do not pass ``--all``::
$ py.test -q test_compute.py
collecting ... collected 2 items
..
2 passed in 0.01 seconds
We run only two computations, so we see two dots.
Let's run the full monty::
$ py.test -q --all test_compute.py
$ py.test -q --all
collecting ... collected 5 items
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
@ -141,6 +149,8 @@ we'll get an error on the last one.
control skipping of tests according to command line option
--------------------------------------------------------------
.. regendoc:wipe
Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``slow`` marked tests::
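The ``conftest.py`` itself is elided here; a sketch consistent with the skip reason shown below ("need --runslow option to run"):

```python
# content of conftest.py -- sketch of the --runslow skipping hook
def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true",
                     help="run tests marked as slow")

def pytest_runtest_setup(item):
    if "slow" in item.keywords and not item.config.getvalue("runslow"):
        import pytest  # a real conftest would import this at module level
        pytest.skip("need --runslow option to run")
```

A test module then marks its slow tests with a ``slow`` keyword/marker, which is what ``item.keywords`` matches against.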
@ -171,32 +181,33 @@ We can now write a test module like this::
and when running it will see a skipped "slow" test::
$ py.test test_module.py -rs # "-rs" means report details on the little 's'
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_module.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-104/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-479/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.02 seconds ====================
Or run it including the ``slow`` marked test::
$ py.test test_module.py --runslow
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_module.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 2 items
test_module.py ..
========================= 2 passed in 0.01 seconds =========================
writing well integrated assertion helpers
--------------------------------------------------
.. regendoc:wipe
If you have a test helper function called from a test you can
use the ``pytest.fail`` helper to fail a test with a certain message.
The test support function will not show up in the traceback if you
@ -218,7 +229,8 @@ of tracebacks: the ``checkconfig`` function will not be shown
unless the ``--fulltrace`` command line option is specified.
Let's run our little function::
$ py.test -q
$ py.test -q test_checkconfig.py
collecting ... collected 1 items
F
================================= FAILURES =================================
______________________________ test_something ______________________________
@ -230,16 +242,17 @@ Let's run our little function::
test_checkconfig.py:8: Failed
1 failed in 0.02 seconds
Detect if running from within a py.test run
--------------------------------------------------------------
.. regendoc:wipe
Usually it is a bad idea to make application code
behave differently when called from a test. But if you
absolutely must find out whether your application code is
running from a test you can do something like this::
# content of conftest.py in your testing directory
# content of conftest.py
def pytest_configure(config):
import sys
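The snippet above is cut off by the diff; the usual completion sets a flag on ``sys`` (a sketch -- the flag name is a convention, not an API):

```python
# content of conftest.py (sketch of the complete pattern)
def pytest_configure(config):
    import sys
    sys._called_from_test = True

def pytest_unconfigure(config):
    import sys
    del sys._called_from_test

# application code can then check for the flag:
def in_test_run():
    import sys
    return hasattr(sys, "_called_from_test")
```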


@ -45,8 +45,8 @@ Running the test looks like this::
$ py.test test_simplefactory.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_simplefactory.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_simplefactory.py F
@ -150,8 +150,8 @@ Running this::
$ py.test test_example.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_example.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 10 items
test_example.py .........F
@ -188,8 +188,8 @@ If you want to select only the run with the value ``7`` you could do::
$ py.test -v -k 7 test_example.py # or -k test_func[7]
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30 -- /home/hpk/venv/0/bin/python
test path 1: test_example.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0 -- /home/hpk/venv/0/bin/python
collecting ... collected 10 items
test_example.py:6: test_func[7] PASSED


@ -16,7 +16,7 @@ Installation options::
To check that the correct version is installed::
$ py.test --version
This is py.test version 2.0.0.dev30, imported from /home/hpk/p/pytest/pytest.py
This is py.test version 2.0.0, imported from /home/hpk/p/pytest/pytest.pyc
If you get an error, check out :ref:`installation issues`.
@ -38,19 +38,19 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-70
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_sample.py F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================
@@ -58,9 +58,10 @@ py.test found the ``test_answer`` function by following :ref:`standard test disc
.. note::
You can simply use the ``assert`` statement for coding expectations because
intermediate values will be presented to you. This is arguably easier than
learning all `the JUnit legacy methods`_.
You can simply use the ``assert`` statement for asserting
expectations because intermediate values will be presented to you.
This is arguably easier than learning all `the JUnit legacy
methods`_.
However, there remains one caveat to using simple asserts: your
assertion expression should be side-effect free. Because
@@ -94,6 +95,7 @@ use the ``raises`` helper::
Running it, this time in "quiet" reporting mode::
$ py.test -q test_sysexit.py
collecting ... collected 1 items
.
1 passed in 0.00 seconds
@@ -121,17 +123,18 @@ There is no need to subclass anything. We can simply
run the module by passing its filename::
$ py.test -q test_class.py
collecting ... collected 2 items
.F
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x288fc20>
self = <test_class.TestClass instance at 0x11fa320>
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.02 seconds
@@ -157,21 +160,22 @@ py.test will look up and call a factory to create the resource
before performing the test function call. Let's just run it::
$ py.test -q test_tmpdir.py
collecting ... collected 1 items
F
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-122/test_needsfiles0')
tmpdir = local('/tmp/pytest-7/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-122/test_needsfiles0
1 failed in 0.05 seconds
/tmp/pytest-7/test_needsfiles0
1 failed in 0.04 seconds
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.
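For illustration, the create-a-file-in-a-fresh-directory pattern that ``tmpdir`` supports can be sketched with the standard library alone (no pytest involved); the helper ``create_hello_file`` and its paths are made up for this example:

```python
import os
import tempfile

# stdlib-only sketch of what a test does with its tmpdir:
# make a subdirectory and write a file into it
def create_hello_file(base=None):
    base = base or tempfile.mkdtemp(prefix="pytest-sketch-")
    sub = os.path.join(base, "sub")
    os.mkdir(sub)
    path = os.path.join(sub, "hello.txt")
    with open(path, "w") as f:
        f.write("content")
    return path
```

The real ``tmpdir`` funcarg hands the test a ``py.path.local`` object whose ``mkdir``/``join``/``write`` methods wrap the same operations.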

@@ -88,8 +88,8 @@ You can use the ``-k`` command line option to select tests::
$ py.test -k webtest # running with the above defined examples yields
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-74
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 4 items
test_mark.py ..
test_mark_classlevel.py ..
@@ -100,8 +100,8 @@ And you can also run all tests except the ones that match the keyword::
$ py.test -k-webtest
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-74
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 4 items
===================== 4 tests deselected by '-webtest' =====================
======================= 4 deselected in 0.01 seconds =======================
@@ -110,8 +110,8 @@ Or to only select the class::
$ py.test -kTestClass
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-74
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 4 items
test_mark_classlevel.py ..

@@ -39,8 +39,8 @@ will be undone.
.. background check:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-75
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 0 items
============================= in 0.00 seconds =============================

@@ -15,9 +15,12 @@ Here are some examples of projects using py.test:
* `mwlib <http://pypi.python.org/pypi/mwlib>`_ mediawiki parser and utility library
* `The Translate Toolkit <http://translate.sourceforge.net/wiki/toolkit/index>`_ for localization and conversion
* `execnet <http://codespeak.net/execnet>`_ rapid multi-Python deployment
* `Pacha <http://pacha.cafepais.com/>`_ configuration management in five minutes
* `bbfreeze <http://pypi.python.org/pypi/bbfreeze>`_ create standalone executables from Python scripts
* `pdb++ <http://bitbucket.org/antocuni/pdb>`_ a fancier version of PDB
* `py-s3fuse <http://code.google.com/p/py-s3fuse/>`_ Amazon S3 FUSE based filesystem
* `waskr <http://pacha.cafepais.com/>`_ WSGI Stats Middleware
* `guachi <http://code.google.com/p/guachi/>`_ global persistent configs for Python modules
* `Circuits <http://pypi.python.org/pypi/circuits>`_ lightweight Event Driven Framework
* `pygtk-helpers <http://bitbucket.org/aafshar/pygtkhelpers-main/>`_ easy interaction with PyGTK
* `QuantumCore <http://quantumcore.org/>`_ statusmessage and repoze openid plugin

@@ -121,14 +121,14 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev31
test path 1: xfail_demo.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 5 items
xfail_demo.py xxxxx
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4

@@ -28,15 +28,15 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_tmpdir.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_tmpdir.py F
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('/tmp/pytest-123/test_create_file0')
tmpdir = local('/tmp/pytest-8/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
@@ -47,7 +47,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.04 seconds =========================
.. _`base temporary directory`:

@@ -24,8 +24,8 @@ Running it yields::
$ py.test test_unittest.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_unittest.py
platform linux2 -- Python 2.6.5 -- pytest-2.0.0
collecting ... collected 1 items
test_unittest.py F
@@ -56,7 +56,7 @@ Running it yields::
/usr/lib/python2.6/unittest.py:350: AssertionError
----------------------------- Captured stdout ------------------------------
hello
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.03 seconds =========================
.. _`unittest.py style`: http://docs.python.org/library/unittest.html