Merge pull request #4466 from nicoddemus/merge-master-into-features

Merge master into features

commit 5cf69fae7d
@@ -22,7 +22,9 @@ following::
         assert f() == 4
 
 to assert that your function returns a certain value. If this assertion fails
-you will see the return value of the function call::
+you will see the return value of the function call:
+
+.. code-block:: pytest
 
     $ pytest test_assert1.py
     =========================== test session starts ============================

@@ -165,7 +167,9 @@ when it encounters comparisons. For example::
         set2 = set("8035")
         assert set1 == set2
 
-if you run this module::
+if you run this module:
+
+.. code-block:: pytest
 
     $ pytest test_assert2.py
     =========================== test session starts ============================

@@ -235,7 +239,9 @@ now, given this test module::
         assert f1 == f2
 
 you can run the test module and get the custom output defined in
-the conftest file::
+the conftest file:
+
+.. code-block:: pytest
 
     $ pytest -q test_foocompare.py
     F [100%]

@@ -12,7 +12,9 @@ For information on plugin hooks and objects, see :ref:`plugins`.
 
 For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
 
-For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type ::
+For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type :
+
+.. code-block:: pytest
 
     $ pytest -q --fixtures
     cache

@@ -43,7 +43,9 @@ First, let's create 50 test invocation of which only 2 fail::
         if i in (17, 25):
             pytest.fail("bad luck")
 
-If you run this for the first time you will see two failures::
+If you run this for the first time you will see two failures:
+
+.. code-block:: pytest
 
     $ pytest -q
     .................F.......F........................ [100%]

@@ -72,7 +74,9 @@ If you run this for the first time you will see two failures::
     test_50.py:6: Failed
     2 failed, 48 passed in 0.12 seconds
 
-If you then run it with ``--lf``::
+If you then run it with ``--lf``:
+
+.. code-block:: pytest
 
     $ pytest --lf
     =========================== test session starts ============================

@@ -113,7 +117,9 @@ not been run ("deselected").
 
 Now, if you run with the ``--ff`` option, all tests will be run but the first
 previous failures will be executed first (as can be seen from the series
-of ``FF`` and dots)::
+of ``FF`` and dots):
+
+.. code-block:: pytest
 
     $ pytest --ff
     =========================== test session starts ============================

@@ -192,7 +198,9 @@ across pytest invocations::
         assert mydata == 23
 
 If you run this command once, it will take a while because
-of the sleep::
+of the sleep:
+
+.. code-block:: pytest
 
     $ pytest -q
     F [100%]

@@ -209,7 +217,9 @@ of the sleep::
     1 failed in 0.12 seconds
 
 If you run it a second time the value will be retrieved from
-the cache and this will be quick::
+the cache and this will be quick:
+
+.. code-block:: pytest
 
     $ pytest -q
     F [100%]

@@ -232,7 +242,9 @@ Inspecting Cache content
 -------------------------------
 
 You can always peek at the content of the cache using the
-``--cache-show`` command line option::
+``--cache-show`` command line option:
+
+.. code-block:: pytest
 
     $ pytest --cache-show
     =========================== test session starts ============================

@@ -61,7 +61,9 @@ is that you can use print statements for debugging::
         assert False
 
 and running this module will show you precisely the output
-of the failing function and hide the other one::
+of the failing function and hide the other one:
+
+.. code-block:: pytest
 
     $ pytest
     =========================== test session starts ============================

@@ -40,6 +40,7 @@ todo_include_todos = 1
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
+    "pygments_pytest",
     "sphinx.ext.autodoc",
     "sphinx.ext.todo",
     "sphinx.ext.autosummary",

@@ -58,7 +58,9 @@ and another like this::
         """
         return 42
 
-then you can just invoke ``pytest`` without command line options::
+then you can just invoke ``pytest`` without command line options:
+
+.. code-block:: pytest
 
     $ pytest
     =========================== test session starts ============================

@@ -27,7 +27,9 @@ You can "mark" a test function with custom metadata like this::
 
 .. versionadded:: 2.2
 
-You can then restrict a test run to only run tests marked with ``webtest``::
+You can then restrict a test run to only run tests marked with ``webtest``:
+
+.. code-block:: pytest
 
     $ pytest -v -m webtest
     =========================== test session starts ============================

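The hunk above edits the marker-selection documentation. A sketch of the kind of module it assumes (the test names here are illustrative assumptions; only the ``webtest`` marker name appears in this diff):

```python
import pytest


# Hypothetical test module: one test carries the custom ``webtest``
# marker, so ``pytest -m webtest`` selects only that test and reports
# the others as deselected.
@pytest.mark.webtest
def test_send_http():
    pass


def test_something_quick():
    pass
```

``pytest -v -m webtest`` runs only ``test_send_http``; ``pytest -v -m "not webtest"`` inverts the selection.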
@@ -40,7 +42,9 @@ You can then restrict a test run to only run tests marked with ``webtest``::
 
     ================== 1 passed, 3 deselected in 0.12 seconds ==================
 
-Or the inverse, running all tests except the webtest ones::
+Or the inverse, running all tests except the webtest ones:
+
+.. code-block:: pytest
 
     $ pytest -v -m "not webtest"
     =========================== test session starts ============================

@@ -60,7 +64,9 @@ Selecting tests based on their node ID
 
 You can provide one or more :ref:`node IDs <node-id>` as positional
 arguments to select only specified tests. This makes it easy to select
-tests based on their module, class, method, or function name::
+tests based on their module, class, method, or function name:
+
+.. code-block:: pytest
 
     $ pytest -v test_server.py::TestClass::test_method
     =========================== test session starts ============================

@@ -73,7 +79,9 @@ tests based on their module, class, method, or function name::
 
     ========================= 1 passed in 0.12 seconds =========================
 
-You can also select on the class::
+You can also select on the class:
+
+.. code-block:: pytest
 
     $ pytest -v test_server.py::TestClass
     =========================== test session starts ============================

@@ -86,7 +94,9 @@ You can also select on the class::
 
     ========================= 1 passed in 0.12 seconds =========================
 
-Or select multiple nodes::
+Or select multiple nodes:
+
+.. code-block:: pytest
 
     $ pytest -v test_server.py::TestClass test_server.py::test_send_http
     =========================== test session starts ============================

@@ -124,7 +134,9 @@ Using ``-k expr`` to select tests based on their name
 
 You can use the ``-k`` command line option to specify an expression
 which implements a substring match on the test names instead of the
 exact match on markers that ``-m`` provides. This makes it easy to
-select tests based on their names::
+select tests based on their names:
+
+.. code-block:: pytest
 
     $ pytest -v -k http # running with the above defined example module
     =========================== test session starts ============================

@@ -137,7 +149,9 @@ select tests based on their names::
 
     ================== 1 passed, 3 deselected in 0.12 seconds ==================
 
-And you can also run all tests except the ones that match the keyword::
+And you can also run all tests except the ones that match the keyword:
+
+.. code-block:: pytest
 
     $ pytest -k "not send_http" -v
     =========================== test session starts ============================

@@ -152,7 +166,9 @@ And you can also run all tests except the ones that match the keyword::
 
     ================== 3 passed, 1 deselected in 0.12 seconds ==================
 
-Or to select "http" and "quick" tests::
+Or to select "http" and "quick" tests:
+
+.. code-block:: pytest
 
     $ pytest -k "http or quick" -v
     =========================== test session starts ============================

@@ -351,7 +367,9 @@ A test file using this local plugin::
         pass
 
 and an example invocations specifying a different environment than what
-the test needs::
+the test needs:
+
+.. code-block:: pytest
 
     $ pytest -E stage2
     =========================== test session starts ============================

@@ -363,7 +381,9 @@ the test needs::
 
     ======================== 1 skipped in 0.12 seconds =========================
 
-and here is one that specifies exactly the environment needed::
+and here is one that specifies exactly the environment needed:
+
+.. code-block:: pytest
 
     $ pytest -E stage1
     =========================== test session starts ============================

@@ -428,7 +448,9 @@ However, if there is a callable as the single positional argument with no keywor
     def test_with_args():
         pass
 
-The output is as follows::
+The output is as follows:
+
+.. code-block:: pytest
 
     $ pytest -q -s
     Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})

@@ -469,7 +491,9 @@ test function. From a conftest file we can read it like this::
         print("glob args=%s kwargs=%s" % (mark.args, mark.kwargs))
         sys.stdout.flush()
 
-Let's run this without capturing output and see what we get::
+Let's run this without capturing output and see what we get:
+
+.. code-block:: pytest
 
     $ pytest -q -s
     glob args=('function',) kwargs={'x': 3}

@@ -524,7 +548,9 @@ Let's do a little test file to show how this looks like::
     def test_runs_everywhere():
         pass
 
-then you will see two tests skipped and two executed tests as expected::
+then you will see two tests skipped and two executed tests as expected:
+
+.. code-block:: pytest
 
     $ pytest -rs # this option reports skip reasons
     =========================== test session starts ============================

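The hunk above edits the platform-marker example. A sketch of the ``conftest.py`` hook that example relies on (the exact body is an assumption reconstructed from the documented behavior, not shown in this diff):

```python
import sys

import pytest

# Assumed set of platform marker names recognized by this example.
ALL_PLATFORMS = {"darwin", "linux", "win32"}


# Hypothetical conftest.py hook: if a test carries one or more platform
# markers and none of them matches sys.platform, skip it.
def pytest_runtest_setup(item):
    supported = ALL_PLATFORMS.intersection(
        mark.name for mark in item.iter_markers()
    )
    if supported and sys.platform not in supported:
        pytest.skip("cannot run on platform %s" % sys.platform)
```

Tests with no platform marker run everywhere; tests marked for a different platform are reported as skipped under ``-rs``.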
@@ -538,7 +564,9 @@ then you will see two tests skipped and two executed tests as expected::
 
     =================== 2 passed, 2 skipped in 0.12 seconds ====================
 
-Note that if you specify a platform via the marker-command line option like this::
+Note that if you specify a platform via the marker-command line option like this:
+
+.. code-block:: pytest
 
     $ pytest -m linux
     =========================== test session starts ============================

@@ -589,7 +617,9 @@ We want to dynamically define two markers and can do it in a
     elif "event" in item.nodeid:
         item.add_marker(pytest.mark.event)
 
-We can now use the ``-m option`` to select one set::
+We can now use the ``-m option`` to select one set:
+
+.. code-block:: pytest
 
     $ pytest -m interface --tb=short
     =========================== test session starts ============================

@@ -610,7 +640,9 @@ We can now use the ``-m option`` to select one set::
     E   assert 0
     ================== 2 failed, 2 deselected in 0.12 seconds ==================
 
-or to select both "event" and "interface" tests::
+or to select both "event" and "interface" tests:
+
+.. code-block:: pytest
 
     $ pytest -m "interface or event" --tb=short
     =========================== test session starts ============================

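The two hunks above edit the "dynamically define markers" example, whose context lines show an ``elif "event" in item.nodeid`` branch. A sketch of the full hook that context implies (the ``interface`` branch is an assumption completing the visible ``elif``):

```python
import pytest


# Hypothetical conftest.py hook: attach an ``interface`` or ``event``
# marker to each collected test based on its node ID, so that
# ``pytest -m interface`` or ``pytest -m "interface or event"`` can
# select the corresponding subset.
def pytest_collection_modifyitems(items):
    for item in items:
        if "interface" in item.nodeid:
            item.add_marker(pytest.mark.interface)
        elif "event" in item.nodeid:
            item.add_marker(pytest.mark.event)
```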
@@ -23,7 +23,9 @@ You can create a simple example file:
    :literal:
 
 and if you installed `PyYAML`_ or a compatible YAML-parser you can
-now execute the test specification::
+now execute the test specification:
+
+.. code-block:: pytest
 
     nonpython $ pytest test_simple.yml
     =========================== test session starts ============================

@@ -55,7 +57,9 @@ your own domain specific testing language this way.
 will be reported as a (red) string.
 
 ``reportinfo()`` is used for representing the test location and is also
-consulted when reporting in ``verbose`` mode::
+consulted when reporting in ``verbose`` mode:
+
+.. code-block:: pytest
 
     nonpython $ pytest -v
     =========================== test session starts ============================

@@ -77,7 +81,9 @@ consulted when reporting in ``verbose`` mode::
 .. regendoc:wipe
 
 While developing your custom test collection and execution it's also
-interesting to just look at the collection tree::
+interesting to just look at the collection tree:
+
+.. code-block:: pytest
 
     nonpython $ pytest --collect-only
     =========================== test session starts ============================

@@ -42,14 +42,18 @@ Now we add a test configuration like this::
             end = 2
         metafunc.parametrize("param1", range(end))
 
-This means that we only run 2 tests if we do not pass ``--all``::
+This means that we only run 2 tests if we do not pass ``--all``:
+
+.. code-block:: pytest
 
     $ pytest -q test_compute.py
     .. [100%]
     2 passed in 0.12 seconds
 
 We run only two computations, so we see two dots.
-let's run the full monty::
+let's run the full monty:
+
+.. code-block:: pytest
 
     $ pytest -q --all
     ....F [100%]

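The hunk above edits the ``--all`` parametrization example; its context lines show ``end = 2`` and the ``metafunc.parametrize`` call. A sketch of the surrounding ``conftest.py`` that behavior implies (the option registration and the ``end = 5`` branch are assumptions completing the visible fragment):

```python
# Hypothetical conftest.py for test_compute.py: without --all only
# range(2) is parametrized for ``param1``; with --all, a larger range
# (assumed to be range(5), matching the "....F" output) is used.
def pytest_addoption(parser):
    parser.addoption("--all", action="store_true", help="run all combinations")


def pytest_generate_tests(metafunc):
    if "param1" in metafunc.fixturenames:
        if metafunc.config.getoption("all"):
            end = 5
        else:
            end = 2
        metafunc.parametrize("param1", range(end))
```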
@@ -134,8 +138,9 @@ used as the test IDs. These are succinct, but can be a pain to maintain.
 In ``test_timedistance_v2``, we specified ``ids`` as a function that can generate a
 string representation to make part of the test ID. So our ``datetime`` values use the
 label generated by ``idfn``, but because we didn't generate a label for ``timedelta``
-objects, they are still using the default pytest representation::
+objects, they are still using the default pytest representation:
 
+.. code-block:: pytest
 
     $ pytest test_time.py --collect-only
     =========================== test session starts ============================

@@ -191,7 +196,9 @@ only have to work a bit to construct the correct arguments for pytest's
     def test_demo2(self, attribute):
         assert isinstance(attribute, str)
 
-this is a fully self-contained example which you can run with::
+this is a fully self-contained example which you can run with:
+
+.. code-block:: pytest
 
     $ pytest test_scenarios.py
     =========================== test session starts ============================

@@ -203,8 +210,9 @@ this is a fully self-contained example which you can run with::
 
     ========================= 4 passed in 0.12 seconds =========================
 
-If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
+If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
 
+.. code-block:: pytest
 
     $ pytest --collect-only test_scenarios.py
     =========================== test session starts ============================

@@ -268,7 +276,9 @@ creates a database object for the actual test invocations::
         else:
             raise ValueError("invalid internal test config")
 
-Let's first see how it looks like at collection time::
+Let's first see how it looks like at collection time:
+
+.. code-block:: pytest
 
     $ pytest test_backends.py --collect-only
     =========================== test session starts ============================

@@ -281,7 +291,9 @@ Let's first see how it looks like at collection time::
 
     ======================= no tests ran in 0.12 seconds =======================
 
-And then when we run the test::
+And then when we run the test:
+
+.. code-block:: pytest
 
     $ pytest -q test_backends.py
     .F [100%]

@@ -329,7 +341,9 @@ will be passed to respective fixture function::
         assert x == 'aaa'
         assert y == 'b'
 
-The result of this test will be successful::
+The result of this test will be successful:
+
+.. code-block:: pytest
 
     $ pytest test_indirect_list.py --collect-only
     =========================== test session starts ============================

@@ -377,7 +391,9 @@ parametrizer`_ but in a lot less code::
             pytest.raises(ZeroDivisionError, "a/b")
 
 Our test generator looks up a class-level definition which specifies which
-argument sets to use for each test function. Let's run it::
+argument sets to use for each test function. Let's run it:
+
+.. code-block:: pytest
 
     $ pytest -q
     F.. [100%]

@@ -407,7 +423,9 @@ is to be run with different sets of arguments for its three arguments:
 
 .. literalinclude:: multipython.py
 
-Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
+Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):
+
+.. code-block:: pytest
 
     . $ pytest -rs -q multipython.py
     ...sss...sssssssss...sss... [100%]

@@ -456,7 +474,9 @@ And finally a little test module::
         assert round(basemod.func1(), 3) == round(optmod.func1(), 3)
 
 
-If you run this with reporting for skips enabled::
+If you run this with reporting for skips enabled:
+
+.. code-block:: pytest
 
     $ pytest -rs test_module.py
     =========================== test session starts ============================

@@ -511,21 +531,22 @@ we mark the rest three parametrized tests with the custom marker ``basic``,
 and for the fourth test we also use the built-in mark ``xfail`` to indicate this
 test is expected to fail. For explicitness, we set test ids for some tests.
 
-Then run ``pytest`` with verbose mode and with only the ``basic`` marker::
+Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
 
-    pytest -v -m basic
-    ============================================ test session starts =============================================
-    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+.. code-block:: pytest
+
+    $ pytest -v -m basic
+    =========================== test session starts ============================
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+    cachedir: .pytest_cache
     rootdir: $REGENDOC_TMPDIR, inifile:
-    collected 4 items
+    collecting ... collected 17 items / 14 deselected
 
-    test_pytest_param_example.py::test_eval[1+7-8] PASSED
-    test_pytest_param_example.py::test_eval[basic_2+4] PASSED
-    test_pytest_param_example.py::test_eval[basic_6*9] xfail
-    ========================================== short test summary info ===========================================
-    XFAIL test_pytest_param_example.py::test_eval[basic_6*9]
+    test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
+    test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
+    test_pytest_param_example.py::test_eval[basic_6*9] xfail [100%]
 
-    ============================================= 1 tests deselected =============================================
+    ============ 2 passed, 14 deselected, 1 xfailed in 0.12 seconds ============
 
 As the result:

@@ -24,20 +24,22 @@ by passing the ``--ignore=path`` option on the cli. ``pytest`` allows multiple
     '-- test_world_03.py
 
 Now if you invoke ``pytest`` with ``--ignore=tests/foobar/test_foobar_03.py --ignore=tests/hello/``,
-you will see that ``pytest`` only collects test-modules, which do not match the patterns specified::
+you will see that ``pytest`` only collects test-modules, which do not match the patterns specified:
 
-    ========= test session starts ==========
-    platform darwin -- Python 2.7.10, pytest-2.8.2, py-1.4.30, pluggy-0.3.1
+.. code-block:: pytest
+
+    =========================== test session starts ============================
+    platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 5 items
 
-    tests/example/test_example_01.py .
-    tests/example/test_example_02.py .
-    tests/example/test_example_03.py .
-    tests/foobar/test_foobar_01.py .
-    tests/foobar/test_foobar_02.py .
+    tests/example/test_example_01.py . [ 20%]
+    tests/example/test_example_02.py . [ 40%]
+    tests/example/test_example_03.py . [ 60%]
+    tests/foobar/test_foobar_01.py . [ 80%]
+    tests/foobar/test_foobar_02.py . [100%]
 
-    ======= 5 passed in 0.02 seconds =======
+    ========================= 5 passed in 0.02 seconds =========================
 
 Deselect tests during test collection
 -------------------------------------

@@ -123,7 +125,9 @@ that match ``*_check``. For example, if we have::
     def complex_check(self):
         pass
 
-The test collection would look like this::
+The test collection would look like this:
+
+.. code-block:: pytest
 
     $ pytest --collect-only
     =========================== test session starts ============================

@@ -176,7 +180,9 @@ treat it as a filesystem path.
 Finding out what is collected
 -----------------------------------------------

-You can always peek at the collection tree without running tests like this::
+You can always peek at the collection tree without running tests like this:
+
+.. code-block:: pytest

     . $ pytest --collect-only pythoncollection.py
     =========================== test session starts ============================

@@ -231,7 +237,9 @@ and a ``setup.py`` dummy file like this::
     0/0 # will raise exception if imported

 If you run with a Python 2 interpreter then you will find the one test and will
-leave out the ``setup.py`` file::
+leave out the ``setup.py`` file:
+
+.. code-block:: pytest

     #$ pytest --collect-only
     ====== test session starts ======

@@ -244,7 +252,9 @@ leave out the ``setup.py`` file::
     ====== no tests ran in 0.04 seconds ======

 If you run with a Python 3 interpreter both the one test and the ``setup.py``
-file will be left out::
+file will be left out:
+
+.. code-block:: pytest

     $ pytest --collect-only
     =========================== test session starts ============================

@@ -7,7 +7,9 @@ Demo of Python failure reports with pytest
 Here is a nice run of several tens of failures
 and how ``pytest`` presents things (unfortunately
 not showing the nice colors here in the HTML that you
-get on the terminal - we are working on that)::
+get on the terminal - we are working on that):
+
+.. code-block:: pytest

     assertion $ pytest failure_demo.py
     =========================== test session starts ============================

@@ -364,7 +366,7 @@ get on the terminal - we are working on that)::
     > int(s)
     E ValueError: invalid literal for int() with base 10: 'qwe'

-    <0-codegen $PYTHON_PREFIX/lib/python3.6/site-packages/_pytest/python_api.py:682>:1: ValueError
+    <0-codegen $REGENDOC_TMPDIR/assertion/failure_demo.py:145>:1: ValueError
     ______________________ TestRaises.test_raises_doesnt _______________________

     self = <failure_demo.TestRaises object at 0xdeadbeef>

@@ -43,7 +43,9 @@ provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`:
     def cmdopt(request):
         return request.config.getoption("--cmdopt")

-Let's run this without supplying our new option::
+Let's run this without supplying our new option:
+
+.. code-block:: pytest

     $ pytest -q test_sample.py
     F [100%]

@@ -65,7 +67,9 @@ Let's run this without supplying our new option::
     first
     1 failed in 0.12 seconds

-And now with supplying a command line option::
+And now with supplying a command line option:
+
+.. code-block:: pytest

     $ pytest -q --cmdopt=type2
     F [100%]

@@ -117,7 +121,9 @@ the command line arguments before they get processed:
 If you have the `xdist plugin <https://pypi.org/project/pytest-xdist/>`_ installed
 you will now always perform test runs using a number
 of subprocesses close to your CPU. Running in an empty
-directory with the above conftest.py::
+directory with the above conftest.py:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -175,7 +181,9 @@ We can now write a test module like this:
     def test_func_slow():
         pass

-and when running it will see a skipped "slow" test::
+and when running it will see a skipped "slow" test:
+
+.. code-block:: pytest

     $ pytest -rs # "-rs" means report details on the little 's'
     =========================== test session starts ============================

@@ -189,7 +197,9 @@ and when running it will see a skipped "slow" test::

     =================== 1 passed, 1 skipped in 0.12 seconds ====================

-Or run it including the ``slow`` marked test::
+Or run it including the ``slow`` marked test:
+
+.. code-block:: pytest

     $ pytest --runslow
     =========================== test session starts ============================

@@ -230,7 +240,9 @@ Example:
 The ``__tracebackhide__`` setting influences ``pytest`` showing
 of tracebacks: the ``checkconfig`` function will not be shown
 unless the ``--full-trace`` command line option is specified.
-Let's run our little function::
+Let's run our little function:
+
+.. code-block:: pytest

     $ pytest -q test_checkconfig.py
     F [100%]

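For reference, the kind of ``checkconfig`` helper this ``__tracebackhide__`` hunk talks about can be sketched as a plain function. This sketch uses ``AssertionError`` instead of ``pytest.fail`` so it runs standalone, and the attribute check is illustrative; pytest hides any frame whose locals contain ``__tracebackhide__ = True``:

```python
def checkconfig(x):
    __tracebackhide__ = True  # pytest omits this frame from tracebacks
    if not hasattr(x, "config"):
        raise AssertionError("not configured: %s" % x)


class Configured:
    """Hypothetical object that passes the check."""

    config = {}
```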
@@ -327,7 +339,9 @@ It's easy to present extra information in a ``pytest`` run:
     def pytest_report_header(config):
         return "project deps: mylib-1.1"

-which will add the string to the test header accordingly::
+which will add the string to the test header accordingly:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -353,7 +367,9 @@ display more information if applicable:
         if config.getoption("verbose") > 0:
             return ["info1: did you know that ...", "did you?"]

-which will add info only when run with "--v"::
+which will add info only when run with "--v":
+
+.. code-block:: pytest

     $ pytest -v
     =========================== test session starts ============================

@@ -366,7 +382,9 @@ which will add info only when run with "--v"::

     ======================= no tests ran in 0.12 seconds =======================

-and nothing when run plainly::
+and nothing when run plainly:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -403,7 +421,9 @@ out which tests are the slowest. Let's make an artificial test suite:
     def test_funcslow2():
         time.sleep(0.3)

-Now we can profile which test functions execute the slowest::
+Now we can profile which test functions execute the slowest:
+
+.. code-block:: pytest

     $ pytest --durations=3
     =========================== test session starts ============================

@@ -475,7 +495,9 @@ tests in a class. Here is a test module example:
     def test_normal():
         pass

-If we run this::
+If we run this:
+
+.. code-block:: pytest

     $ pytest -rx
     =========================== test session starts ============================

@@ -556,7 +578,9 @@ the ``db`` fixture:
     def test_root(db): # no db here, will error out
         pass

-We can run this::
+We can run this:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -667,7 +691,9 @@ if you then have failing tests:
     def test_fail2():
         assert 0

-and run them::
+and run them:
+
+.. code-block:: pytest

     $ pytest test_module.py
     =========================== test session starts ============================

@@ -766,7 +792,9 @@ if you then have failing tests:
     def test_fail2():
         assert 0

-and run it::
+and run it:
+
+.. code-block:: pytest

     $ pytest -s test_module.py
     =========================== test session starts ============================

@@ -57,7 +57,9 @@ will be called ahead of running any tests::
         def test_unit1(self):
             print("test_unit1 method called")

-If you run this without output capturing::
+If you run this without output capturing:
+
+.. code-block:: pytest

     $ pytest -q -s test_module.py
     callattr_ahead_of_alltests called

@@ -66,7 +66,9 @@ using it::

 Here, the ``test_ehlo`` needs the ``smtp_connection`` fixture value. pytest
 will discover and call the :py:func:`@pytest.fixture <_pytest.python.fixture>`
-marked ``smtp_connection`` fixture function. Running the test looks like this::
+marked ``smtp_connection`` fixture function. Running the test looks like this:
+
+.. code-block:: pytest

     $ pytest test_smtpsimple.py
     =========================== test session starts ============================

@@ -204,7 +206,9 @@ located)::
         assert 0 # for demo purposes

 We deliberately insert failing ``assert 0`` statements in order to
-inspect what is going on and can now run the tests::
+inspect what is going on and can now run the tests:
+
+.. code-block:: pytest

     $ pytest test_module.py
     =========================== test session starts ============================

@@ -482,7 +486,9 @@ server URL in its module namespace::
     def test_showhelo(smtp_connection):
         assert 0, smtp_connection.helo()

-Running it::
+Running it:
+
+.. code-block:: pytest

     $ pytest -qq --tb=short test_anothersmtp.py
     F [100%]

@@ -584,7 +590,9 @@ The main change is the declaration of ``params`` with
 :py:func:`@pytest.fixture <_pytest.python.fixture>`, a list of values
 for each of which the fixture function will execute and can access
 a value via ``request.param``. No test function code needs to change.
-So let's just do another run::
+So let's just do another run:
+
+.. code-block:: pytest

     $ pytest -q test_module.py
     FFFF [100%]

@@ -686,7 +694,9 @@ a function which will be called with the fixture value and then
 has to return a string to use. In the latter case if the function
 return ``None`` then pytest's auto-generated ID will be used.

-Running the above tests results in the following test IDs being used::
+Running the above tests results in the following test IDs being used:
+
+.. code-block:: pytest

     $ pytest --collect-only
     =========================== test session starts ============================

@@ -728,7 +738,9 @@ Example::
     def test_data(data_set):
         pass

-Running this test will *skip* the invocation of ``data_set`` with value ``2``::
+Running this test will *skip* the invocation of ``data_set`` with value ``2``:
+
+.. code-block:: pytest

     $ pytest test_fixture_marks.py -v
     =========================== test session starts ============================

@@ -771,7 +783,9 @@ and instantiate an object ``app`` where we stick the already defined
         assert app.smtp_connection

 Here we declare an ``app`` fixture which receives the previously defined
-``smtp_connection`` fixture and instantiates an ``App`` object with it. Let's run it::
+``smtp_connection`` fixture and instantiates an ``App`` object with it. Let's run it:
+
+.. code-block:: pytest

     $ pytest -v test_appsetup.py
     =========================== test session starts ============================

@@ -840,7 +854,9 @@ to show the setup/teardown flow::
         print(" RUN test2 with otherarg %s and modarg %s" % (otherarg, modarg))


-Let's run the tests in verbose mode and with looking at the print-output::
+Let's run the tests in verbose mode and with looking at the print-output:
+
+.. code-block:: pytest

     $ pytest -v -s test_module.py
     =========================== test session starts ============================

@@ -942,7 +958,9 @@ and declare its use in a test module via a ``usefixtures`` marker::
 Due to the ``usefixtures`` marker, the ``cleandir`` fixture
 will be required for the execution of each test method, just as if
 you specified a "cleandir" function argument to each of them. Let's run it
-to verify our fixture is activated and the tests pass::
+to verify our fixture is activated and the tests pass:
+
+.. code-block:: pytest

     $ pytest -q
     .. [100%]

@@ -1041,7 +1059,9 @@ which implies that all test methods in the class will use this fixture
 without a need to state it in the test function signature or with a
 class-level ``usefixtures`` decorator.

-If we run it, we get two passing tests::
+If we run it, we get two passing tests:
+
+.. code-block:: pytest

     $ pytest -q
     .. [100%]

@@ -43,7 +43,9 @@ Create a simple test function with just four lines of code::
     def test_answer():
         assert func(3) == 5

-That’s it. You can now execute the test function::
+That’s it. You can now execute the test function:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -90,7 +92,9 @@ Use the ``raises`` helper to assert that some code raises an exception::
     with pytest.raises(SystemExit):
         f()

-Execute the test function with “quiet” reporting mode::
+Execute the test function with “quiet” reporting mode:
+
+.. code-block:: pytest

     $ pytest -q test_sysexit.py
     . [100%]

@@ -111,7 +115,9 @@ Once you develop multiple tests, you may want to group them into a class. pytest
         x = "hello"
         assert hasattr(x, 'check')

-``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename::
+``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename:
+
+.. code-block:: pytest

     $ pytest -q test_class.py
     .F [100%]

@@ -141,7 +147,9 @@ Request a unique temporary directory for functional tests
         print(tmpdir)
         assert 0

-List the name ``tmpdir`` in the test function signature and ``pytest`` will lookup and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory::
+List the name ``tmpdir`` in the test function signature and ``pytest`` will lookup and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory:
+
+.. code-block:: pytest

     $ pytest -q test_tmpdir.py
     F [100%]

@@ -22,7 +22,9 @@ An example of a simple test:
         assert inc(3) == 5


-To execute it::
+To execute it:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -50,7 +50,9 @@ to an expected output::

 Here, the ``@parametrize`` decorator defines three different ``(test_input,expected)``
 tuples so that the ``test_eval`` function will run three times using
-them in turn::
+them in turn:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

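The three ``(test_input, expected)`` tuples referred to above are ``("3+5", 8)``, ``("2+4", 6)`` and ``("6*9", 42)``. Checked as plain expressions, only the deliberately wrong last pair fails, which is the one the docs then mark with ``mark.xfail``:

```python
# the parametrize example's three cases, evaluated as plain expressions
cases = [("3+5", 8), ("2+4", 6), ("6*9", 42)]
results = [eval(test_input) == expected for test_input, expected in cases]
```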
@@ -99,7 +101,9 @@ for example with the builtin ``mark.xfail``::
     def test_eval(test_input, expected):
         assert eval(test_input) == expected

-Let's run this::
+Let's run this:
+
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================

@@ -172,7 +176,9 @@ If we now pass two stringinput values, our test will run twice::
     .. [100%]
     2 passed in 0.12 seconds

-Let's also run with a stringinput that will lead to a failing test::
+Let's also run with a stringinput that will lead to a failing test:
+
+.. code-block:: pytest

     $ pytest -q --stringinput="!" test_strings.py
     F [100%]

@@ -194,7 +200,9 @@ As expected our test function fails.

 If you don't specify a stringinput it will be skipped because
 ``metafunc.parametrize()`` will be called with an empty parameter
-list::
+list:
+
+.. code-block:: pytest

     $ pytest -q -rs test_strings.py
     s [100%]

@@ -1,3 +1,4 @@
+pygments-pytest>=1.0.4
 # pinning sphinx to 1.4.* due to search issues with rtd:
 # https://github.com/rtfd/readthedocs-sphinx-ext/issues/25
 sphinx ==1.4.*

@@ -323,7 +323,9 @@ Here is a simple test file with the several usages:

 .. literalinclude:: example/xfail_demo.py

-Running it with the report-on-xfail option gives this output::
+Running it with the report-on-xfail option gives this output:
+
+.. code-block:: pytest

     example $ pytest -rx xfail_demo.py
     =========================== test session starts ============================

@@ -35,7 +35,9 @@ created in the `base temporary directory`_.
         assert 0

 Running this would result in a passed test except for the last
-``assert 0`` line which we use to look at values::
+``assert 0`` line which we use to look at values:
+
+.. code-block:: pytest

     $ pytest test_tmp_path.py
     =========================== test session starts ============================

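The ``tmp_path`` fixture documented in this hunk hands the test a ``pathlib.Path``; the body (minus the trailing ``assert 0``, which exists only to show values in the traceback) works with any directory, so it can be exercised with a plain temporary directory:

```python
import tempfile
from pathlib import Path

CONTENT = "content"


def check_create_file(tmp_path):
    # same steps as the documented test, without the trailing assert 0
    d = tmp_path / "sub"
    d.mkdir()
    p = d / "hello.txt"
    p.write_text(CONTENT)
    assert p.read_text() == CONTENT
    assert len(list(tmp_path.iterdir())) == 1  # only "sub" was created
```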
@@ -95,7 +97,9 @@ and more. Here is an example test usage::
         assert 0

 Running this would result in a passed test except for the last
-``assert 0`` line which we use to look at values::
+``assert 0`` line which we use to look at values:
+
+.. code-block:: pytest

     $ pytest test_tmpdir.py
     =========================== test session starts ============================

@@ -122,7 +122,9 @@ fixture definition::
 The ``@pytest.mark.usefixtures("db_class")`` class-decorator makes sure that
 the pytest fixture function ``db_class`` is called once per class.
 Due to the deliberately failing assert statements, we can take a look at
-the ``self.db`` values in the traceback::
+the ``self.db`` values in the traceback:
+
+.. code-block:: pytest

     $ pytest test_unittest_db.py
     =========================== test session starts ============================

@@ -199,7 +201,9 @@ used for all methods of the class where it is defined. This is a
 shortcut for using a ``@pytest.mark.usefixtures("initdir")`` marker
 on the class like in the previous example.

-Running this test module ...::
+Running this test module ...:
+
+.. code-block:: pytest

     $ pytest -q test_unittest_cleandir.py
     . [100%]

@@ -150,7 +150,9 @@ Detailed summary report
 The ``-r`` flag can be used to display test results summary at the end of the test session,
 making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.

-Example::
+Example:
+
+.. code-block:: pytest

     $ pytest -ra
     =========================== test session starts ============================

@@ -173,7 +175,9 @@ Here is the full list of available characters that can be used:
 - ``P`` - passed with output
 - ``a`` - all except ``pP``

-More than one character can be used, so for example to only see failed and skipped tests, you can execute::
+More than one character can be used, so for example to only see failed and skipped tests, you can execute:
+
+.. code-block:: pytest

     $ pytest -rfs
     =========================== test session starts ============================

@@ -18,7 +18,9 @@ and displays them at the end of the session::
     def test_one():
         assert api_v1() == 1

-Running pytest now produces this output::
+Running pytest now produces this output:
+
+.. code-block:: pytest

     $ pytest test_show_warnings.py
     =========================== test session starts ============================

@@ -37,7 +39,9 @@ Running pytest now produces this output::
     =================== 1 passed, 1 warnings in 0.12 seconds ===================

 The ``-W`` flag can be passed to control which warnings will be displayed or even turn
-them into errors::
+them into errors:
+
+.. code-block:: pytest

     $ pytest -q test_show_warnings.py -W error::UserWarning
     F [100%]

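``-W error::UserWarning`` promotes the documented ``api_v1`` warning to an error; roughly the same promotion can be reproduced with the stdlib ``warnings`` filters:

```python
import warnings


def api_v1():
    warnings.warn(UserWarning("api v1, should use functions from v2"))
    return 1


with warnings.catch_warnings():
    # roughly what -W error::UserWarning does for the run
    warnings.simplefilter("error", UserWarning)
    try:
        api_v1()
        raised = False
    except UserWarning:
        raised = True
```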
@@ -347,7 +351,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
         def test_foo(self):
             assert 1 == 1

-::
+.. code-block:: pytest

     $ pytest test_pytest_warnings.py -q

@@ -388,14 +388,14 @@ return a result object, with which we can assert the tests' outcomes.

 additionally it is possible to copy examples for an example folder before running pytest on it

-.. code:: ini
+.. code-block:: ini

     # content of pytest.ini
     [pytest]
     pytester_example_dir = .


-.. code:: python
+.. code-block:: python

     # content of test_example.py

@@ -404,10 +404,11 @@ additionally it is possible to copy examples for an example folder before running pytest on it
         testdir.copy_example("test_example.py")
         testdir.runpytest("-k", "test_example")


     def test_example():
         pass

-.. code::
+.. code-block:: pytest

     $ pytest
     =========================== test session starts ============================
@@ -0,0 +1,21 @@
+@echo off
+rem Source: https://github.com/appveyor/ci/blob/master/scripts/appveyor-retry.cmd
+rem initiate the retry number
+set retryNumber=0
+set maxRetries=3
+
+:RUN
+%*
+set LastErrorLevel=%ERRORLEVEL%
+IF %LastErrorLevel% == 0 GOTO :EOF
+set /a retryNumber=%retryNumber%+1
+IF %reTryNumber% == %maxRetries% (GOTO :FAILED)
+
+:RETRY
+set /a retryNumberDisp=%retryNumber%+1
+@echo Command "%*" failed with exit code %LastErrorLevel%. Retrying %retryNumberDisp% of %maxRetries%
+GOTO :RUN
+
+: FAILED
+@echo Sorry, we tried running command for %maxRetries% times and all attempts were unsuccessful!
+EXIT /B %LastErrorLevel%
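The batch file above is a bounded retry wrapper: it runs whatever command it is given (``%*``), and on a non-zero exit code tries again until ``maxRetries`` attempts have been made, finally propagating the last exit code. As an illustration only (not part of this commit), the same pattern in POSIX shell, using a hypothetical ``retry_cmd`` function name, might look like:

```shell
#!/bin/sh
# Illustrative sketch of the retry pattern used by appveyor-retry.cmd.
# retry_cmd is a hypothetical name, not part of the commit above.
retry_cmd() {
    max_retries=3
    attempt=0
    while true; do
        "$@" && return 0                # success: stop retrying
        status=$?
        attempt=$((attempt + 1))
        if [ "$attempt" -ge "$max_retries" ]; then
            echo "command \"$*\" failed $max_retries times, giving up" >&2
            return "$status"            # propagate the last exit code
        fi
        echo "command \"$*\" failed with exit code $status, retrying" >&2
    done
}

retry_cmd true    # succeeds on the first attempt
```

As in the batch version, a persistently failing command is attempted ``max_retries`` times in total before its exit code is passed through to the caller.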
@@ -5,7 +5,7 @@ if not defined PYTEST_NO_COVERAGE (
     C:\Python36\Scripts\coverage combine
     C:\Python36\Scripts\coverage xml --ignore-errors
     C:\Python36\Scripts\coverage report -m --ignore-errors
-    C:\Python36\Scripts\codecov --required -X gcov pycov search -f coverage.xml --flags %TOXENV:-= % windows
+    scripts\appveyor-retry C:\Python36\Scripts\codecov --required -X gcov pycov search -f coverage.xml --flags %TOXENV:-= % windows
 ) else (
     echo Skipping coverage upload, PYTEST_NO_COVERAGE is set
 )
tox.ini

@@ -124,10 +124,7 @@ setenv = {[testenv:py27-pluggymaster]setenv}
 skipsdist = True
 usedevelop = True
 changedir = doc/en
-deps =
-    PyYAML
-    sphinx
-    sphinxcontrib-trio
+deps = -r{toxinidir}/doc/en/requirements.txt

 commands =
     sphinx-build -W -b html . _build