bump to 2.1.1, regen examples, add release announcement

This commit is contained in:
holger krekel 2011-08-20 18:37:00 +02:00
parent fb1b1d9aae
commit 09933b8b04
22 changed files with 193 additions and 154 deletions

View File

@ -1,2 +1,2 @@
#
__version__ = '2.1.1.dev5'
__version__ = '2.1.1'

View File

@ -5,6 +5,7 @@ Release announcements
.. toctree::
:maxdepth: 2
release-2.1.1
release-2.1.0
release-2.0.3
release-2.0.2

View File

@ -0,0 +1,37 @@
py.test 2.1.1: assertion fixes and improved junitxml output
===========================================================================
pytest-2.1.1 is a backward compatible maintenance release of the
popular py.test testing tool. See extensive docs with examples here:
http://pytest.org/
Most of the bug fixes address remaining issues with the perfected assertions
introduced in 2.1.0 - thanks to the bug reporters and to Benjamin Peterson for
helping to resolve them. Also, junitxml output now produces system-out/err
tags, which are displayed more nicely within Jenkins environments.
NOTE, particularly for package maintainers and other interested parties: there
is now a "pytest" man page which can be generated with "make man" in doc/.
If you want to install or upgrade pytest, just type one of::
pip install -U pytest # or
easy_install -U pytest
best,
holger krekel / http://merlinux.eu
Changes between 2.1.0 and 2.1.1
----------------------------------------------
- fix issue64 / pytest.set_trace now works within pytest_generate_tests hooks
- fix issue60 / fix error conditions involving the creation of __pycache__
- fix issue63 / assertion rewriting on inserts involving strings containing '%'
- fix assertion rewriting on calls with a ** arg
- don't cache rewritten modules if bytecode generation is disabled
- fix assertion rewriting in read-only directories
- fix issue59: provide system-out/err tags for junitxml output
- fix issue61: assertion rewriting on boolean operations with 3 or more operands
- you can now build a man page with "cd doc ; make man"
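For illustration, a tiny module along the following lines (hypothetical file and
test names) exercises two of the fixed assertion-rewriting cases, strings
containing '%' and boolean operations with three operands::

    # content of test_rewrite_cases.py - a hypothetical sketch
    def test_percent_string():
        fmt = "%s-%d"                 # string containing '%' (issue63)
        assert fmt % ("a", 1) == "a-1"

    def test_three_operands():
        a, b, c = 1, 2, 3             # boolean op with 3 operands (issue61)
        assert a and b and c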

View File

@ -22,14 +22,14 @@ to assert that your function returns a certain value. If this assertion fails
you will see the return value of the function call::
$ py.test test_assert1.py
============================= test session starts ==============================
platform linux2 -- Python 2.6.6 -- pytest-2.1.0.dev6
=========================== test session starts ============================
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_assert1.py F
=================================== FAILURES ===================================
________________________________ test_function _________________________________
================================= FAILURES =================================
______________________________ test_function _______________________________
def test_function():
> assert f() == 4
@ -37,7 +37,7 @@ you will see the return value of the function call::
E + where 3 = f()
test_assert1.py:5: AssertionError
=========================== 1 failed in 0.01 seconds ===========================
========================= 1 failed in 0.01 seconds =========================
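The failing module behind this output is presumably just this (a sketch
reconstructed from the traceback above)::

    # content of test_assert1.py - a sketch
    def f():
        return 3

    def test_function():
        assert f() == 4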
py.test has support for showing the values of the most common subexpressions
including calls, attributes, comparisons, and binary and unary
@ -104,14 +104,14 @@ when it encounters comparisons. For example::
if you run this module::
$ py.test test_assert2.py
============================= test session starts ==============================
platform linux2 -- Python 2.6.6 -- pytest-2.1.0.dev6
=========================== test session starts ============================
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_assert2.py F
=================================== FAILURES ===================================
_____________________________ test_set_comparison ______________________________
================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
def test_set_comparison():
set1 = set("1308")
@ -124,7 +124,7 @@ if you run this module::
E '5'
test_assert2.py:5: AssertionError
=========================== 1 failed in 0.01 seconds ===========================
========================= 1 failed in 0.01 seconds =========================
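The module presumably compares two sets roughly like this; the second set is an
assumption inferred from the reported differences::

    # content of test_assert2.py - a sketch
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        assert set1 == set2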
Special comparisons are done for a number of cases:
@ -170,8 +170,8 @@ the conftest file::
$ py.test -q test_foocompare.py
collecting ... collected 1 items
F
=================================== FAILURES ===================================
_________________________________ test_compare _________________________________
================================= FAILURES =================================
_______________________________ test_compare _______________________________
def test_compare():
f1 = Foo(1)

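The ``conftest.py`` providing the custom ``Foo`` comparison presumably implements
the ``pytest_assertrepr_compare`` hook along these lines (the ``Foo`` class and
the message wording are assumptions)::

    # content of conftest.py - a sketch
    class Foo:
        def __init__(self, val):
            self.val = val
        def __eq__(self, other):
            return self.val == other.val

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return ["Comparing Foo instances:",
                    "   vals: %s != %s" % (left.val, right.val)]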
View File

@ -24,6 +24,9 @@ You can ask for available builtin or project-custom
:ref:`function arguments <funcargs>` by typing::
$ py.test --funcargs
=========================== test session starts ============================
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collected 0 items
pytestconfig
the pytest config object with access to command line opts.
capsys
@ -69,3 +72,5 @@ You can ask for available builtin or project-custom
See http://docs.python.org/library/warnings.html for information
on warning categories.
============================= in 0.00 seconds =============================
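A project-custom function argument that would appear in this listing is defined
with the 2.1-era factory naming convention; a minimal sketch with a hypothetical
name::

    # content of conftest.py - a sketch
    def pytest_funcarg__myfuncarg(request):
        """myfuncarg -- shows up in --funcargs output with this docstring."""
        return 42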

View File

@ -64,7 +64,7 @@ of the failing function and hide the other one::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
test_module.py .F
@ -78,8 +78,8 @@ of the failing function and hide the other one::
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x238c410>
==================== 1 failed, 1 passed in 0.02 seconds ====================
setting up <function test_func2 at 0x24fa320>
==================== 1 failed, 1 passed in 0.01 seconds ====================
Accessing captured output from a test function
---------------------------------------------------
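This section presumably demonstrates the ``capsys`` function argument listed
above; a minimal sketch::

    def test_myoutput(capsys):
        print "hello"
        out, err = capsys.readouterr()
        assert out == "hello\n"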

View File

@ -44,9 +44,9 @@ then you can just invoke ``py.test`` without command line options::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
mymodule.py .
========================= 1 passed in 0.40 seconds =========================
========================= 1 passed in 0.02 seconds =========================
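The ``mymodule.py`` collected above presumably carries a doctest in a docstring,
roughly::

    # content of mymodule.py - a sketch
    def something():
        """ a doctest in a docstring
        >>> something()
        42
        """
        return 42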

View File

@ -49,7 +49,7 @@ You can now run the test::
$ py.test test_sample.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_sample.py F
@ -57,7 +57,7 @@ You can now run the test::
================================= FAILURES =================================
_______________________________ test_answer ________________________________
mysetup = <conftest.MySetup instance at 0x2c1b128>
mysetup = <conftest.MySetup instance at 0x1d345f0>
def test_answer(mysetup):
app = mysetup.myapp()
@ -66,7 +66,7 @@ You can now run the test::
E assert 54 == 42
test_sample.py:4: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.01 seconds =========================
This means that our ``mysetup`` object was successfully instantiated
and ``mysetup.myapp()`` returned an initialized ``MyApp`` instance.
@ -122,12 +122,12 @@ Running it yields::
$ py.test test_ssh.py -rs
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_ssh.py s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-37/conftest.py:22: specify ssh host with --ssh
SKIP [1] /tmp/doc-exec-296/conftest.py:22: specify ssh host with --ssh
======================== 1 skipped in 0.01 seconds =========================
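The ``conftest.py`` behind both runs presumably combines a command line option,
a setup class and a funcarg factory; a shortened sketch (the ``MyApp`` import
location and the method bodies are assumptions)::

    # content of conftest.py - a sketch
    import pytest
    from myapp import MyApp

    def pytest_addoption(parser):
        parser.addoption("--ssh", action="store", default=None,
                         help="specify ssh host to run tests with")

    class MySetup:
        def __init__(self, request):
            self.config = request.config

        def myapp(self):
            return MyApp()

        def getsshconnection(self):
            host = self.config.option.ssh
            if host is None:
                pytest.skip("specify ssh host with --ssh")
            return host  # a real setup would open a connection here

    def pytest_funcarg__mysetup(request):
        return MySetup(request)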

View File

@ -27,7 +27,7 @@ now execute the test specification::
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
test_simple.yml .F
@ -37,7 +37,7 @@ now execute the test specification::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.24 seconds ====================
==================== 1 failed, 1 passed in 0.07 seconds ====================
You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@ -56,7 +56,7 @@ reporting in ``verbose`` mode::
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
platform linux2 -- Python 2.7.1 -- pytest-2.1.1 -- /home/hpk/venv/0/bin/python
collecting ... collected 2 items
test_simple.yml:1: usecase: ok PASSED
@ -67,17 +67,17 @@ reporting in ``verbose`` mode::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.07 seconds ====================
==================== 1 failed, 1 passed in 0.06 seconds ====================
While developing your custom test collection and execution it's also
interesting to just look at the collection tree::
nonpython $ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
<YamlFile 'test_simple.yml'>
<YamlItem 'ok'>
<YamlItem 'hello'>
============================= in 0.07 seconds =============================
============================= in 0.06 seconds =============================
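The ``conftest.py`` driving this collection presumably hooks file collection
roughly as follows (a shortened sketch; the yaml handling and the failure wording
are assumptions, and in 2.1 the base classes may live under ``pytest.collect``)::

    # content of conftest.py - a sketch
    import pytest

    def pytest_collect_file(path, parent):
        if path.ext == ".yml" and path.basename.startswith("test"):
            return YamlFile(path, parent)

    class YamlFile(pytest.File):          # possibly pytest.collect.File in 2.1
        def collect(self):
            import yaml                   # assumed to be installed
            raw = yaml.load(self.fspath.open())
            for name, spec in sorted(raw.items()):
                yield YamlItem(name, self, spec)

    class YamlItem(pytest.Item):          # possibly pytest.collect.Item in 2.1
        def __init__(self, name, parent, spec):
            super(YamlItem, self).__init__(name, parent)
            self.spec = spec

        def runtest(self):
            for key, value in sorted(self.spec.items()):
                if key != value:
                    raise AssertionError("usecase execution failed")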

View File

@ -62,7 +62,7 @@ let's run the full monty::
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.03 seconds
1 failed, 4 passed in 0.01 seconds
As expected, when running the full range of ``param1`` values
we'll get an error on the last one.
@ -114,13 +114,13 @@ Let's first see what it looks like at collection time::
$ py.test test_backends.py --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[0]'>
<Function 'test_db_initialized[1]'>
============================= in 0.01 seconds =============================
============================= in 0.00 seconds =============================
And then when we run the test::
@ -130,7 +130,7 @@ And then when we run the test::
================================= FAILURES =================================
__________________________ test_db_initialized[1] __________________________
db = <conftest.DB2 instance at 0x2bf7bd8>
db = <conftest.DB2 instance at 0x17829e0>
def test_db_initialized(db):
# a dummy test
@ -139,7 +139,7 @@ And then when we run the test::
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
1 failed, 1 passed in 0.03 seconds
1 failed, 1 passed in 0.01 seconds
Now you see that one invocation of the test passes and another fails,
as is to be expected.
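The ``conftest.py`` producing ``test_db_initialized[0]`` and ``[1]`` presumably
follows the 2.1-era pattern of ``metafunc.addcall`` plus a funcarg factory; a
sketch (the parameter ids are assumptions)::

    # content of conftest.py - a sketch
    def pytest_generate_tests(metafunc):
        if "db" in metafunc.funcargnames:
            metafunc.addcall(param="d1")
            metafunc.addcall(param="d2")

    class DB1:
        "one database object"

    class DB2:
        "alternative database object"

    def pytest_funcarg__db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()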
@ -184,7 +184,7 @@ the respective settings::
================================= FAILURES =================================
__________________________ test_db_initialized[1] __________________________
db = <conftest.DB2 instance at 0x19bcb90>
db = <conftest.DB2 instance at 0x2acf4d0>
def test_db_initialized(db):
# a dummy test
@ -195,7 +195,7 @@ the respective settings::
test_backends.py:6: Failed
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize.TestClass instance at 0x19ca8c0>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x2ad2830>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
@ -204,14 +204,14 @@ the respective settings::
test_parametrize.py:17: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize.TestClass instance at 0x19cd4d0>, a = 3, b = 2
self = <test_parametrize.TestClass instance at 0x2ad8830>, a = 3, b = 2
def test_zerodivision(self, a, b):
> pytest.raises(ZeroDivisionError, "a/b")
E Failed: DID NOT RAISE
test_parametrize.py:20: Failed
3 failed, 3 passed in 0.05 seconds
3 failed, 3 passed in 0.02 seconds
Parametrizing test methods through a decorator
--------------------------------------------------------------
@ -252,7 +252,7 @@ Running it gives similar results as before::
================================= FAILURES =================================
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize2.TestClass instance at 0x1cf1170>, a = 1, b = 2
self = <test_parametrize2.TestClass instance at 0x1ef2170>, a = 1, b = 2
@params([dict(a=1, b=2), dict(a=3, b=3), ])
def test_equals(self, a, b):
@ -262,7 +262,7 @@ Running it gives similar results as before::
test_parametrize2.py:19: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize2.TestClass instance at 0x1d02170>, a = 3, b = 2
self = <test_parametrize2.TestClass instance at 0x20e4248>, a = 3, b = 2
@params([dict(a=1, b=0), dict(a=3, b=2)])
def test_zerodivision(self, a, b):
@ -270,7 +270,7 @@ Running it gives similar results as before::
E Failed: DID NOT RAISE
test_parametrize2.py:23: Failed
2 failed, 2 passed in 0.03 seconds
2 failed, 2 passed in 0.02 seconds
checking serialization between Python interpreters
--------------------------------------------------------------
@ -291,4 +291,4 @@ Running it (with Python 2.4 through Python 2.7 installed)::
. $ py.test -q multipython.py
collecting ... collected 75 items
....s....s....s....ssssss....s....s....s....ssssss....s....s....s....ssssss
48 passed, 27 skipped in 2.04 seconds
48 passed, 27 skipped in 2.48 seconds

View File

@ -43,7 +43,7 @@ then the test collection looks like this::
$ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
<Module 'check_myapp.py'>
<Class 'CheckMyApp'>
@ -82,7 +82,7 @@ You can always peek at the collection tree without running tests like this::
. $ py.test --collectonly pythoncollection.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 3 items
<Module 'pythoncollection.py'>
<Function 'test_function'>

View File

@ -13,7 +13,7 @@ get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@ -30,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x14b9890>
self = <failure_demo.TestFailing object at 0x1b79310>
def test_simple(self):
def f():
@ -40,13 +40,13 @@ get on the terminal - we are working on that):
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x14a5e60>()
E + and 43 = <function g at 0x14bc1b8>()
E + where 42 = <function f at 0x1c57488>()
E + and 43 = <function g at 0x1c57500>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x14b9b50>
self = <failure_demo.TestFailing object at 0x1b79850>
def test_simple_multiline(self):
otherfunc_multi(
@ -59,26 +59,26 @@ get on the terminal - we are working on that):
a = 42, b = 54
def otherfunc_multi(a,b):
assert (a ==
> b)
> assert (a ==
b)
E assert 42 == 54
failure_demo.py:12: AssertionError
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x14b9790>
self = <failure_demo.TestFailing object at 0x1b79290>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x14bc398>()
E + where 42 = <function f at 0x1c57500>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x14aa810>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b79f90>
def test_eq_text(self):
> assert 'spam' == 'eggs'
@ -89,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1576190>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b7af50>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@ -102,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x14a7450>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b7af90>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -115,7 +115,7 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x14b9350>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b79d10>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
@ -132,7 +132,7 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x15764d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b7a490>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
@ -156,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1576350>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b7ac90>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@ -166,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1576f10>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b79cd0>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@ -178,7 +178,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1576390>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b75e90>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
@ -191,7 +191,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x14bd790>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b75c10>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@ -207,7 +207,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x157a7d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b79590>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@ -217,7 +217,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x157ab50>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b7a8d0>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@ -226,7 +226,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x157a090>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b75410>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@ -244,7 +244,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x14aaa50>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b75c90>
def test_not_in_text_single(self):
text = 'single foo line'
@ -257,7 +257,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x157ab90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b75dd0>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@ -270,7 +270,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x1576ed0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b751d0>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@ -289,7 +289,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x157a910>.b
E + where 1 = <failure_demo.Foo object at 0x1b75310>.b
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
@ -299,8 +299,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1584610>.b
E + where <failure_demo.Foo object at 0x1584610> = <class 'failure_demo.Foo'>()
E + where 1 = <failure_demo.Foo object at 0x1b75bd0>.b
E + where <failure_demo.Foo object at 0x1b75bd0> = <class 'failure_demo.Foo'>()
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
@ -316,7 +316,7 @@ get on the terminal - we are working on that):
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x157a3d0>
self = <failure_demo.Foo object at 0x1c6ee50>
def _get_b(self):
> raise Exception('Failed to get attrib')
@ -332,15 +332,15 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x157a1d0>.b
E + where <failure_demo.Foo object at 0x157a1d0> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x157a9d0>.b
E + where <failure_demo.Bar object at 0x157a9d0> = <class 'failure_demo.Bar'>()
E + where 1 = <failure_demo.Foo object at 0x1b7a750>.b
E + where <failure_demo.Foo object at 0x1b7a750> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1c6e310>.b
E + where <failure_demo.Bar object at 0x1c6e310> = <class 'failure_demo.Bar'>()
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x157d7e8>
self = <failure_demo.TestRaises instance at 0x1b92878>
def test_raises(self):
s = 'qwe'
@ -352,10 +352,10 @@ get on the terminal - we are working on that):
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/_pytest/python.py:831>:1: ValueError
<0-codegen /home/hpk/p/pytest/_pytest/python.py:833>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x158ae60>
self = <failure_demo.TestRaises instance at 0x1c63248>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@ -364,7 +364,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x158bb90>
self = <failure_demo.TestRaises instance at 0x1b97560>
def test_raise(self):
> raise ValueError("demo error")
@ -373,7 +373,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x157cd40>
self = <failure_demo.TestRaises instance at 0x1b8e0e0>
def test_tupleerror(self):
> a,b = [1]
@ -382,7 +382,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x157d488>
self = <failure_demo.TestRaises instance at 0x1b8edd0>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@ -395,7 +395,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x158a7e8>
self = <failure_demo.TestRaises instance at 0x1b88bd8>
def test_some_error(self):
> if namenotexi:
@ -423,7 +423,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x158f8c0>
self = <failure_demo.TestMoreErrors instance at 0x1b8e248>
def test_complex_error(self):
def f():
@ -452,7 +452,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x158c998>
self = <failure_demo.TestMoreErrors instance at 0x1b97050>
def test_z1_unpack_error(self):
l = []
@ -462,7 +462,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x15854d0>
self = <failure_demo.TestMoreErrors instance at 0x1b8bd88>
def test_z2_type_error(self):
l = 3
@ -472,20 +472,19 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x14b65a8>
self = <failure_demo.TestMoreErrors instance at 0x1b8ab90>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert False
E + where False = <built-in method startswith of str object at 0x14902a0>('456')
E + where <built-in method startswith of str object at 0x14902a0> = '123'.startswith
E assert <built-in method startswith of str object at 0x1b68508>('456')
E + where <built-in method startswith of str object at 0x1b68508> = '123'.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x158d518>
self = <failure_demo.TestMoreErrors instance at 0x1b878c0>
def test_startswith_nested(self):
def f():
@ -493,39 +492,36 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert False
E + where False = <built-in method startswith of str object at 0x14902a0>('456')
E + where <built-in method startswith of str object at 0x14902a0> = '123'.startswith
E + where '123' = <function f at 0x15806e0>()
E + and '456' = <function g at 0x1580aa0>()
E assert <built-in method startswith of str object at 0x1b68508>('456')
E + where <built-in method startswith of str object at 0x1b68508> = '123'.startswith
E + where '123' = <function f at 0x1b96848>()
E + and '456' = <function g at 0x1b968c0>()
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x1593440>
self = <failure_demo.TestMoreErrors instance at 0x1b8a320>
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert False
E + where False = isinstance(43, float)
E + where 43 = globf(42)
E assert isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x15952d8>
self = <failure_demo.TestMoreErrors instance at 0x1b8b0e0>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x15952d8>.x
E + where 42 = <failure_demo.TestMoreErrors instance at 0x1b8b0e0>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x1593758>
self = <failure_demo.TestMoreErrors instance at 0x1b97998>
def test_compare(self):
> assert globf(10) < 5
@ -535,7 +531,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x157cd88>
self = <failure_demo.TestMoreErrors instance at 0x1b807e8>
def test_try_finally(self):
x = 1
@ -544,4 +540,4 @@ get on the terminal - we are working on that):
E assert 1 == 0
failure_demo.py:210: AssertionError
======================== 39 failed in 0.23 seconds =========================
======================== 39 failed in 0.20 seconds =========================

View File

@ -53,7 +53,7 @@ Let's run this without supplying our new command line option::
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
first
1 failed in 0.03 seconds
1 failed in 0.01 seconds
And now with supplying a command line option::
@ -76,7 +76,7 @@ And now with supplying a command line option::
test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
second
1 failed in 0.02 seconds
1 failed in 0.01 seconds
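The pieces behind this output presumably look as follows; a sketch of the option
declaration, the funcarg and the test (the exact default values are assumptions)::

    # content of conftest.py - a sketch
    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
                         help="my option: type1 or type2")

    def pytest_funcarg__cmdopt(request):
        return request.config.option.cmdopt

    # content of test_sample.py - a sketch
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print "first"
        elif cmdopt == "type2":
            print "second"
        assert 0  # to see what was printed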
Ok, this completes the basic pattern. However, one often rather
wants to process command line options outside of the test and
@ -109,13 +109,13 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
gw0 I / gw1 I / gw2 I / gw3 I
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
gw0 I / gw1 I
gw0 [0] / gw1 [0]
scheduling tests via LoadScheduling
============================= in 0.52 seconds =============================
============================= in 0.26 seconds =============================
.. _`excontrolskip`:
@ -156,12 +156,12 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-42/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-301/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ====================
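The skip is presumably produced by a ``conftest.py`` combining an option with a
setup hook; a sketch reusing the reason shown above::

    # content of conftest.py - a sketch
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run slow tests")

    def pytest_runtest_setup(item):
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")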
@ -169,7 +169,7 @@ Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 2 items
test_module.py ..
@ -213,7 +213,7 @@ Let's run our little function::
E Failed: not configured: 42
test_checkconfig.py:8: Failed
1 failed in 0.02 seconds
1 failed in 0.01 seconds
Detect if running from within a py.test run
--------------------------------------------------------------
@ -261,7 +261,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
project deps: mylib-1.1
collecting ... collected 0 items
@ -284,7 +284,7 @@ which will add info only when run with "-v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
platform linux2 -- Python 2.7.1 -- pytest-2.1.1 -- /home/hpk/venv/0/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
@ -295,7 +295,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 0 items
============================= in 0.00 seconds =============================
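The verbose-only header above presumably comes from a hook implementation like
this; a sketch reusing the strings from the output::

    # content of conftest.py - a sketch
    def pytest_report_header(config):
        if config.option.verbose > 0:
            return ["info1: did you know that ...", "did you?"]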

View File

@ -61,7 +61,7 @@ Running the test looks like this::
$ py.test test_simplefactory.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_simplefactory.py F
@ -76,7 +76,7 @@ Running the test looks like this::
E assert 42 == 17
test_simplefactory.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.01 seconds =========================
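The module under test is presumably just a factory plus a test function; a sketch
reconstructed from the failure above::

    # content of test_simplefactory.py - a sketch
    def pytest_funcarg__myfuncarg(request):
        return 42

    def test_function(myfuncarg):
        assert myfuncarg == 17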
This means that the test function was indeed called with a ``myfuncarg``
argument value of ``42``, and that the assertion failed. Here is how py.test
@ -167,7 +167,7 @@ Running this::
$ py.test test_example.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 10 items
test_example.py .........F
@ -182,7 +182,7 @@ Running this::
E assert 9 < 9
test_example.py:7: AssertionError
==================== 1 failed, 9 passed in 0.03 seconds ====================
==================== 1 failed, 9 passed in 0.02 seconds ====================
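The generating module presumably uses the 2.1-era ``metafunc.addcall`` API
roughly like this (the funcarg name is an assumption)::

    # content of test_example.py - a sketch
    def pytest_generate_tests(metafunc):
        if "numiter" in metafunc.funcargnames:
            for i in range(10):
                metafunc.addcall(funcargs=dict(numiter=i))

    def test_func(numiter):
        assert numiter < 9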
Note that the ``pytest_generate_tests(metafunc)`` hook is called during
the test collection phase which is separate from the actual test running.
@ -190,7 +190,7 @@ Let's just look at what is collected::
$ py.test --collectonly test_example.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 10 items
<Module 'test_example.py'>
<Function 'test_func[0]'>
@ -210,7 +210,7 @@ If you want to select only the run with the value ``7`` you could do::
$ py.test -v -k 7 test_example.py # or -k test_func[7]
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
platform linux2 -- Python 2.7.1 -- pytest-2.1.1 -- /home/hpk/venv/0/bin/python
collecting ... collected 10 items
test_example.py:6: test_func[7] PASSED

View File

@ -22,10 +22,9 @@ Installation options::
To check that you have installed the correct version::
$ py.test --version
This is py.test version 2.0.3, imported from /home/hpk/p/pytest/pytest.pyc
This is py.test version 2.1.1, imported from /home/hpk/p/pytest/pytest.py
setuptools registered plugins:
pytest-xdist-1.6.dev3 at /home/hpk/p/pytest-xdist/xdist/plugin.pyc
pytest-incremental-0.1.0 at /home/hpk/venv/0/lib/python2.6/site-packages/pytest_incremental.pyc
pytest-xdist-1.6 at /home/hpk/p/pytest-xdist/xdist/plugin.pyc
If you get an error, check out :ref:`installation issues`.
@ -47,7 +46,7 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_sample.py F
@ -61,7 +60,7 @@ That's it. You can execute the test function now::
E + where 4 = func(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.01 seconds =========================
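The module being run is presumably the classic two-liner plus test; a sketch
reconstructed from the failure above::

    # content of test_sample.py - a sketch
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5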
py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``.
@ -96,7 +96,7 @@ Running it, this time in "quiet" reporting mode::
$ py.test -q test_sysexit.py
collecting ... collected 1 items
.
1 passed in 0.01 seconds
1 passed in 0.00 seconds
.. todo:: For further ways to assert exceptions see the `raises`
@ -127,16 +126,15 @@ run the module by passing its filename::
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x142c320>
self = <test_class.TestClass instance at 0x2037908>
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert False
E + where False = hasattr('hello', 'check')
E assert hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.03 seconds
1 failed, 1 passed in 0.01 seconds
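The class-based module presumably looks like this (the body of the passing first
test is an assumption)::

    # content of test_class.py - a sketch
    class TestClass:
        def test_one(self):
            x = "this"
            assert "h" in x

        def test_two(self):
            x = "hello"
            assert hasattr(x, "check")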
The first test passed, the second failed. Again we can easily see
the intermediate values used in the assertion, helping us to
@ -165,7 +163,7 @@ before performing the test function call. Let's just run it::
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-10/test_needsfiles0')
tmpdir = local('/tmp/pytest-60/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
@ -174,8 +172,8 @@ before performing the test function call. Let's just run it::
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-10/test_needsfiles0
1 failed in 0.13 seconds
/tmp/pytest-60/test_needsfiles0
1 failed in 0.02 seconds
Before the test runs, a unique-per-test-invocation temporary directory
is created. More info at :ref:`tmpdir handling`.

View File

@ -88,7 +88,7 @@ You can use the ``-k`` command line option to select tests::
$ py.test -k webtest # running with the above defined examples yields
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 4 items
test_mark.py ..
@ -100,7 +100,7 @@ And you can also run all tests except the ones that match the keyword::
$ py.test -k-webtest
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 4 items
===================== 4 tests deselected by '-webtest' =====================
@ -110,7 +110,7 @@ Or to only select the class::
$ py.test -kTestClass
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 4 items
test_mark_classlevel.py ..
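The marked test files presumably apply the ``webtest`` marker along these lines;
a shortened sketch (the test names and bodies are assumptions)::

    # content of test_mark.py - a sketch
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass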

View File

@ -39,7 +39,7 @@ will be undone.
.. background check:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 0 items
============================= in 0.00 seconds =============================

View File

@ -130,7 +130,7 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 6 items
xfail_demo.py xxxxxx
@ -147,7 +147,7 @@ Running it with the report-on-xfail option gives this output::
XFAIL xfail_demo.py::test_hello6
reason: reason
======================== 6 xfailed in 0.05 seconds =========================
======================== 6 xfailed in 0.03 seconds =========================
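The demo file presumably marks each test with ``xfail``; a shortened sketch
showing only the first and last entries (the test bodies are assumptions)::

    # content of xfail_demo.py - a shortened sketch
    import pytest
    xfail = pytest.mark.xfail

    @xfail
    def test_hello():
        assert 0

    @xfail(reason="reason")
    def test_hello6():
        assert 0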
.. _`evaluation of skipif/xfail conditions`:

View File

@ -28,7 +28,7 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_tmpdir.py F
@ -36,7 +36,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('/tmp/pytest-11/test_create_file0')
tmpdir = local('/tmp/pytest-61/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
@ -47,7 +47,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.06 seconds =========================
========================= 1 failed in 0.03 seconds =========================
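The test presumably creates a file inside the fresh ``tmpdir`` and ends with a
deliberate failure; a sketch reconstructed from the traceback (the middle
assertions are assumptions)::

    # content of test_tmpdir.py - a sketch
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0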
.. _`base temporary directory`:

View File

@ -24,7 +24,7 @@ Running it yields::
$ py.test test_unittest.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
platform linux2 -- Python 2.7.1 -- pytest-2.1.1
collecting ... collected 1 items
test_unittest.py F

View File

@ -24,7 +24,7 @@ def main():
name='pytest',
description='py.test: simple powerful testing with Python',
long_description = long_description,
version='2.1.1.dev5',
version='2.1.1',
url='http://pytest.org',
license='MIT license',
platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],
@ -32,8 +32,8 @@ def main():
author_email='holger at merlinux.eu',
entry_points= make_entry_points(),
# the following should be enabled for release
install_requires=['py>=1.4.5.dev1'],
classifiers=['Development Status :: 5 - Production/Stable',
install_requires=['py>=1.4.5'],
classifiers=['Development Status :: 6 - Mature',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: POSIX',

View File

@ -30,9 +30,11 @@ commands=
changedir=.
basepython=python2.6
deps=:pypi:twisted
:pypi:pexpect
py>=1.4.5.dev1
commands=
py.test -rsxf \
--junitxml={envlogdir}/junit-{envname}.xml [testing/test_unittest.py]
--junitxml={envlogdir}/junit-{envname}.xml {posargs:testing/test_unittest.py}
[testenv:doctest]
changedir=.
commands=py.test --doctest-modules _pytest
@ -70,7 +72,7 @@ commands=
[pytest]
minversion=2.0
plugins=pytester
#addopts= -rxf --pyargs --doctest-modules --ignore=.tox
addopts= -rxs #--pyargs --doctest-modules --ignore=.tox
rsyncdirs=tox.ini pytest.py _pytest testing
python_files=test_*.py *_test.py
python_classes=Test Acceptance