Merge pull request #1789 from nicoddemus/regendoc-take-2

Regendoc take 2
Bruno Oliveira, 2016-08-03 18:21:36 -03:00 (committed by GitHub)
commit 6759b042b5
7 changed files with 19 additions and 18 deletions


@@ -246,7 +246,8 @@ the conftest file::
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E AssertionError
E assert Comparing Foo instances:
E vals: 1 != 2
test_foocompare.py:11: AssertionError
1 failed in 0.12 seconds
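For context, this hunk shows output from pytest's assertion-comparison hook. A minimal sketch of the two files behind it, reconstructed from the surrounding docs (names and line positions are assumptions)::

    # test_foocompare.py -- sketch of the test the traceback refers to
    class Foo:
        def __init__(self, val):
            self.val = val

        def __eq__(self, other):
            return self.val == other.val

    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
        assert f1 == f2

    # conftest.py -- a hook returning custom explanation lines for Foo == Foo
    from test_foocompare import Foo

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return ["Comparing Foo instances:",
                    "   vals: %s != %s" % (left.val, right.val)]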


@@ -110,6 +110,7 @@ If you then run it with ``--lf``::
E Failed: bad luck
test_50.py:6: Failed
======= 48 tests deselected ========
======= 2 failed, 48 deselected in 0.12 seconds ========
You have run only the two failing tests from the last run, while 48 tests have
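A sketch of the kind of test file that produces this run, assuming 50 parametrized tests of which two fail (consistent with "2 failed, 48 deselected")::

    # test_50.py -- sketch: two of the 50 generated tests fail
    import pytest

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")

On the second run, ``--lf`` reruns only the two failures recorded by the cache plugin and deselects the other 48.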


@@ -97,7 +97,7 @@ that performs some output related checks:
out, err = capsys.readouterr()
assert out == "hello\n"
assert err == "world\n"
print "next"
print ("next")
out, err = capsys.readouterr()
assert out == "next\n"
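For reference, a runnable version of the whole ``capsys`` test this hunk belongs to; the lines outside the hunk are assumed from the visible context::

    import sys

    def test_myoutput(capsys):
        print("hello")
        sys.stderr.write("world\n")
        out, err = capsys.readouterr()
        assert out == "hello\n"
        assert err == "world\n"
        print("next")
        out, err = capsys.readouterr()
        assert out == "next\n"

``readouterr()`` returns everything captured so far and resets the buffers, which is why the second call only sees "next".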


@@ -38,7 +38,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::
test_server.py::test_send_http PASSED
======= 3 tests deselected by "-m 'webtest'" ========
======= 3 tests deselected ========
======= 1 passed, 3 deselected in 0.12 seconds ========
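A sketch of the ``test_server.py`` these runs assume; the four test names come from the IDs in the output, while the marker placement is an assumption consistent with the pass/deselect counts::

    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass:
        def test_method(self):
            pass

The same four tests also account for the ``-k`` keyword selections further down: ``-k http`` matches only ``test_send_http``, ``-k "http or quick"`` matches two tests, and so on.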
Or the inverse, running all tests except the webtest ones::
@@ -54,7 +54,7 @@ Or the inverse, running all tests except the webtest ones::
test_server.py::test_another PASSED
test_server.py::TestClass::test_method PASSED
======= 1 tests deselected by "-m 'not webtest'" ========
======= 1 tests deselected ========
======= 3 passed, 1 deselected in 0.12 seconds ========
Selecting tests based on their node ID
@@ -137,7 +137,7 @@ select tests based on their names::
test_server.py::test_send_http PASSED
======= 3 tests deselected by '-khttp' ========
======= 3 tests deselected ========
======= 1 passed, 3 deselected in 0.12 seconds ========
And you can also run all tests except the ones that match the keyword::
@@ -153,7 +153,7 @@ And you can also run all tests except the ones that match the keyword::
test_server.py::test_another PASSED
test_server.py::TestClass::test_method PASSED
======= 1 tests deselected by '-knot send_http' ========
======= 1 tests deselected ========
======= 3 passed, 1 deselected in 0.12 seconds ========
Or to select "http" and "quick" tests::
@@ -168,7 +168,7 @@ Or to select "http" and "quick" tests::
test_server.py::test_send_http PASSED
test_server.py::test_something_quick PASSED
======= 2 tests deselected by '-khttp or quick' ========
======= 2 tests deselected ========
======= 2 passed, 2 deselected in 0.12 seconds ========
.. note::
@@ -505,7 +505,7 @@ Note that if you specify a platform via the marker command line option like this
test_plat.py s
======= 3 tests deselected by "-m 'linux2'" ========
======= 3 tests deselected ========
======= 1 skipped, 3 deselected in 0.12 seconds ========
then the unmarked tests will not be run. It is thus a way to restrict the run to specific tests.
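A sketch of the setup this output implies: each test carries a platform marker, and a ``conftest.py`` hook skips marked tests on non-matching platforms. Names and hook logic are assumptions consistent with "1 skipped, 3 deselected"::

    # conftest.py -- skip platform-marked tests on other platforms
    import sys
    import pytest

    ALL = set("darwin linux2 win32".split())

    def pytest_runtest_setup(item):
        supported = ALL.intersection(item.keywords)
        if supported and sys.platform not in supported:
            pytest.skip("cannot run on platform %s" % sys.platform)

    # test_plat.py -- one marked test per platform plus an unmarked one
    import pytest

    @pytest.mark.darwin
    def test_if_apple_is_evil():
        pass

    @pytest.mark.linux2
    def test_if_linux_works():
        pass

    @pytest.mark.win32
    def test_if_win32_crashes():
        pass

    def test_runs_everywhere():
        pass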
@@ -566,7 +566,7 @@ We can now use the ``-m`` option to select one set::
test_module.py:6: in test_interface_complex
assert 0
E assert 0
======= 2 tests deselected by "-m 'interface'" ========
======= 2 tests deselected ========
======= 2 failed, 2 deselected in 0.12 seconds ========
or to select both "event" and "interface" tests::
@@ -592,5 +592,5 @@ or to select both "event" and "interface" tests::
test_module.py:9: in test_event_simple
assert 0
E assert 0
======= 1 tests deselected by "-m 'interface or event'" ========
======= 1 tests deselected ========
======= 3 failed, 1 deselected in 0.12 seconds ========
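The ``interface`` and ``event`` markers in these runs come from name-based auto-marking. A sketch of a ``conftest.py`` hook and test module consistent with the failure counts (2 failed/2 deselected, then 3 failed/1 deselected)::

    # conftest.py -- add markers derived from test names (sketch)
    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            if "interface" in item.nodeid:
                item.add_marker(pytest.mark.interface)
            elif "event" in item.nodeid:
                item.add_marker(pytest.mark.event)

    # test_module.py -- four deliberately failing tests
    def test_interface_simple():
        assert 0

    def test_interface_complex():
        assert 0

    def test_event_simple():
        assert 0

    def test_something_else():
        assert 0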


@@ -6,7 +6,7 @@ import py
import pytest
import _pytest._code
pythonlist = ['python2.6', 'python2.7', 'python3.3']
pythonlist = ['python2.6', 'python2.7', 'python3.4', 'python3.5']
@pytest.fixture(params=pythonlist)
def python1(request, tmpdir):
picklefile = tmpdir.join("data.pickle")
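For context, ``multipython.py`` pickles an object with one interpreter and loads it with another, so the two parametrized fixtures multiply into an interpreter-by-interpreter matrix. A sketch built around the visible lines; the ``Python`` helper class is assumed and its pickle round-trip methods are elided::

    import py
    import pytest

    pythonlist = ['python2.6', 'python2.7', 'python3.4', 'python3.5']

    @pytest.fixture(params=pythonlist)
    def python1(request, tmpdir):
        picklefile = tmpdir.join("data.pickle")
        return Python(request.param, picklefile)

    @pytest.fixture(params=pythonlist)
    def python2(request, python1):
        return Python(request.param, python1.picklefile)

    class Python:
        def __init__(self, version, picklefile):
            # this skip produces the "'python2.6' not found" lines below
            self.pythonpath = py.path.local.sysfind(version)
            if not self.pythonpath:
                pytest.skip("%r not found" % (version,))
            self.picklefile = picklefile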


@@ -346,7 +346,7 @@ parametrizer`_ but in a lot less code::
def pytest_generate_tests(metafunc):
# called once per test function
funcarglist = metafunc.cls.params[metafunc.function.__name__]
argnames = list(funcarglist[0])
argnames = sorted(funcarglist[0])
metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
for funcargs in funcarglist])
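``sorted()`` is the substance of this change: ``funcarglist[0]`` is a dict of argument names, and iterating a plain dict can yield its keys in a different order from run to run, which made the generated test IDs (and hence the regendoc output) unstable. A sketch of a test class this hook would drive, with values chosen to match the ``F..`` run below::

    import pytest

    class TestClass:
        # map of test-method name to a list of argument dicts (assumed layout)
        params = {
            'test_equals': [dict(a=1, b=2), dict(a=3, b=3)],
            'test_zerodivision': [dict(a=1, b=0)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b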
@@ -369,7 +369,7 @@ argument sets to use for each test function. Let's run it::
$ pytest -q
F..
======= FAILURES ========
_______ TestClass.test_equals[2-1] ________
_______ TestClass.test_equals[1-2] ________
self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2
@@ -397,11 +397,10 @@ is to be run with different sets of arguments for its three arguments:
Running it results in some skips if we don't have all the python interpreters installed, and otherwise runs all combinations (4 interpreters times 4 interpreters times 3 objects to serialize/deserialize)::
. $ pytest -rs -q multipython.py
ssssssssssss...ssssssssssss
sssssssssssssss.........sss.........sss.........
======= short test summary info ========
SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python3.3' not found
SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found
3 passed, 24 skipped in 0.12 seconds
SKIP [21] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found
27 passed, 21 skipped in 0.12 seconds
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------


@@ -501,7 +501,7 @@ We can run this::
file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out
E fixture 'db' not found
available fixtures: monkeypatch, capfd, recwarn, pytestconfig, tmpdir_factory, tmpdir, cache, capsys, record_xml_property, doctest_namespace
available fixtures: cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
use 'pytest --fixtures [testpath]' for help on them.
$REGENDOC_TMPDIR/b/test_error.py:1
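The error is the documented behaviour: a fixture defined in one directory's ``conftest.py`` is not visible to tests in a sibling directory. A sketch of the assumed layout::

    # a/conftest.py -- "db" is only visible to tests under a/
    import pytest

    class DB(object):
        pass

    @pytest.fixture(scope="session")
    def db():
        return DB()

    # b/test_error.py -- requests a fixture it cannot see
    def test_root(db):  # no db here, will error out
        pass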