blacken-docs more code samples in docs

Anthony Sottile 2019-04-12 04:50:26 -07:00
parent 1dafe969d1
commit 8449294e5d
8 changed files with 292 additions and 118 deletions

View File

@ -12,12 +12,15 @@ Asserting with the ``assert`` statement
``pytest`` allows you to use the standard python ``assert`` for verifying
expectations and values in Python tests. For example, you can write the
following::
following:
.. code-block:: python
# content of test_assert1.py
def f():
return 3
def test_function():
assert f() == 4
@ -52,7 +55,9 @@ operators. (See :ref:`tbreportdemo`). This allows you to use the
idiomatic python constructs without boilerplate code while not losing
introspection information.
However, if you specify a message with the assertion like this::
However, if you specify a message with the assertion like this:
.. code-block:: python
assert a % 2 == 0, "value was odd, should be even"
@ -67,22 +72,29 @@ Assertions about expected exceptions
------------------------------------------
In order to write assertions about raised exceptions, you can use
``pytest.raises`` as a context manager like this::
``pytest.raises`` as a context manager like this:
.. code-block:: python
import pytest
def test_zero_division():
with pytest.raises(ZeroDivisionError):
1 / 0
and if you need to have access to the actual exception info you may use::
and if you need to have access to the actual exception info you may use:
.. code-block:: python
def test_recursion_depth():
with pytest.raises(RuntimeError) as excinfo:
def f():
f()
f()
assert 'maximum recursion' in str(excinfo.value)
assert "maximum recursion" in str(excinfo.value)
``excinfo`` is an ``ExceptionInfo`` instance, which is a wrapper around
the actual exception raised. The main attributes of interest are
@ -90,15 +102,19 @@ the actual exception raised. The main attributes of interest are
You can pass a ``match`` keyword parameter to the context-manager to test
that a regular expression matches on the string representation of an exception
(similar to the ``TestCase.assertRaisesRegexp`` method from ``unittest``)::
(similar to the ``TestCase.assertRaisesRegexp`` method from ``unittest``):
.. code-block:: python
import pytest
def myfunc():
raise ValueError("Exception 123 raised")
def test_match():
with pytest.raises(ValueError, match=r'.* 123 .*'):
with pytest.raises(ValueError, match=r".* 123 .*"):
myfunc()
The regexp passed to the ``match`` parameter is matched with the ``re.search``
@ -107,7 +123,9 @@ well.
There's an alternate form of the ``pytest.raises`` function where you pass
a function that will be executed with the given ``*args`` and ``**kwargs`` and
assert that the given exception is raised::
assert that the given exception is raised:
.. code-block:: python
pytest.raises(ExpectedException, func, *args, **kwargs)
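For illustration (not from the original diff), a minimal hedged sketch of this calling form; ``operator.truediv`` is simply a convenient callable that divides its two arguments:

.. code-block:: python

    import operator

    import pytest


    def test_truediv_by_zero():
        # equivalent to: with pytest.raises(ZeroDivisionError): operator.truediv(1, 0)
        pytest.raises(ZeroDivisionError, operator.truediv, 1, 0)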
@ -116,7 +134,9 @@ exception* or *wrong exception*.
Note that it is also possible to specify a "raises" argument to
``pytest.mark.xfail``, which checks that the test is failing in a more
specific way than just having any exception raised::
specific way than just having any exception raised:
.. code-block:: python
@pytest.mark.xfail(raises=IndexError)
def test_f():
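The body of ``test_f`` is cut off by the hunk; a self-contained hedged sketch of the same pattern (popping from an empty list is just an easy way to raise ``IndexError``):

.. code-block:: python

    import pytest


    @pytest.mark.xfail(raises=IndexError)
    def test_f():
        [].pop()  # raises IndexError, so the test is reported as xfailed rather than failed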
@ -148,10 +168,13 @@ Making use of context-sensitive comparisons
.. versionadded:: 2.0
``pytest`` has rich support for providing context-sensitive information
when it encounters comparisons. For example::
when it encounters comparisons. For example:
.. code-block:: python
# content of test_assert2.py
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@ -205,16 +228,21 @@ the ``pytest_assertrepr_compare`` hook.
:noindex:
As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
file which provides an alternative explanation for ``Foo`` objects::
file which provides an alternative explanation for ``Foo`` objects:
.. code-block:: python
# content of conftest.py
from test_foocompare import Foo
def pytest_assertrepr_compare(op, left, right):
if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
return ['Comparing Foo instances:',
' vals: %s != %s' % (left.val, right.val)]
return ["Comparing Foo instances:", " vals: %s != %s" % (left.val, right.val)]
now, given this test module::
now, given this test module:
.. code-block:: python
# content of test_foocompare.py
class Foo(object):
@ -224,6 +252,7 @@ now, given this test module::
def __eq__(self, other):
return self.val == other.val
def test_compare():
f1 = Foo(1)
f2 = Foo(2)

View File

@ -9,18 +9,28 @@ Here are some examples using the :ref:`mark` mechanism.
Marking test functions and selecting them for a run
----------------------------------------------------
You can "mark" a test function with custom metadata like this::
You can "mark" a test function with custom metadata like this:
.. code-block:: python
# content of test_server.py
import pytest
@pytest.mark.webtest
def test_send_http():
pass # perform some webtest test for your app
def test_something_quick():
pass
def test_another():
pass
class TestClass(object):
def test_method(self):
pass
@ -257,14 +267,19 @@ Marking whole classes or modules
----------------------------------------------------
You may use ``pytest.mark`` decorators with classes to apply markers to all of
their test methods::
their test methods:
.. code-block:: python
# content of test_mark_classlevel.py
import pytest
@pytest.mark.webtest
class TestClass(object):
def test_startup(self):
pass
def test_startup_and_more(self):
pass
@ -272,17 +287,23 @@ This is equivalent to directly applying the decorator to the
two test functions.
To remain backward-compatible with Python 2.4 you can also set a
``pytestmark`` attribute on a TestClass like this::
``pytestmark`` attribute on a TestClass like this:
.. code-block:: python
import pytest
class TestClass(object):
pytestmark = pytest.mark.webtest
or if you need to use multiple markers you can use a list::
or if you need to use multiple markers you can use a list:
.. code-block:: python
import pytest
class TestClass(object):
pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
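Not shown in this hunk: the section heading above also covers modules. As a hedged sketch (the file name is illustrative), pytest likewise honors a module-level ``pytestmark`` variable:

.. code-block:: python

    # content of test_module_marks.py (illustrative)
    import pytest

    pytestmark = pytest.mark.webtest


    def test_one():
        pass


    def test_two():
        pass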
@ -305,16 +326,17 @@ Marking individual tests when using parametrize
When using parametrize, applying a mark will make it apply
to each individual test. However it is also possible to
apply a marker to an individual test instance::
apply a marker to an individual test instance:
.. code-block:: python
import pytest
@pytest.mark.foo
@pytest.mark.parametrize(("n", "expected"), [
(1, 2),
pytest.param((1, 3), marks=pytest.mark.bar),
(2, 3),
])
@pytest.mark.parametrize(
("n", "expected"), [(1, 2), pytest.param((1, 3), marks=pytest.mark.bar), (2, 3)]
)
def test_increment(n, expected):
assert n + 1 == expected
@ -332,31 +354,46 @@ Custom marker and command line option to control test runs
Plugins can provide custom markers and implement specific behaviour
based on it. This is a self-contained example which adds a command
line option and a parametrized test function marker to run tests
specified via named environments::
specified via named environments:
.. code-block:: python
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("-E", action="store", metavar="NAME",
help="only run tests matching the environment NAME.")
parser.addoption(
"-E",
action="store",
metavar="NAME",
help="only run tests matching the environment NAME.",
)
def pytest_configure(config):
# register an additional marker
config.addinivalue_line("markers",
"env(name): mark test to run only on named environment")
config.addinivalue_line(
"markers", "env(name): mark test to run only on named environment"
)
def pytest_runtest_setup(item):
envnames = [mark.args[0] for mark in item.iter_markers(name='env')]
envnames = [mark.args[0] for mark in item.iter_markers(name="env")]
if envnames:
if item.config.getoption("-E") not in envnames:
pytest.skip("test requires env in %r" % envnames)
A test file using this local plugin::
A test file using this local plugin:
.. code-block:: python
# content of test_someenv.py
import pytest
@pytest.mark.env("stage1")
def test_basic_db_operation():
pass
@ -423,25 +460,32 @@ Passing a callable to custom markers
.. regendoc:wipe
Below is the config file that will be used in the next examples::
Below is the config file that will be used in the next examples:
.. code-block:: python
# content of conftest.py
import sys
def pytest_runtest_setup(item):
for marker in item.iter_markers(name='my_marker'):
for marker in item.iter_markers(name="my_marker"):
print(marker)
sys.stdout.flush()
A custom marker can have its argument set, i.e. ``args`` and ``kwargs`` properties, defined by either invoking it as a callable or using ``pytest.mark.MARKER_NAME.with_args``. These two methods achieve the same effect most of the time.
However, if there is a callable as the single positional argument with no keyword arguments, using the ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue::
However, if there is a callable as the single positional argument with no keyword arguments, using the ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue:
.. code-block:: python
# content of test_custom_marker.py
import pytest
def hello_world(*args, **kwargs):
return 'Hello World'
return "Hello World"
@pytest.mark.my_marker.with_args(hello_world)
def test_with_args():
@ -467,12 +511,16 @@ Reading markers which were set from multiple places
.. regendoc:wipe
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin
code you can read over all such settings. Example::
code you can read over all such settings. Example:
.. code-block:: python
# content of test_mark_three_times.py
import pytest
pytestmark = pytest.mark.glob("module", x=1)
@pytest.mark.glob("class", x=2)
class TestClass(object):
@pytest.mark.glob("function", x=3)
@ -480,13 +528,16 @@ code you can read over all such settings. Example::
pass
Here we have the marker "glob" applied three times to the same
test function. From a conftest file we can read it like this::
test function. From a conftest file we can read it like this:
.. code-block:: python
# content of conftest.py
import sys
def pytest_runtest_setup(item):
for mark in item.iter_markers(name='glob'):
for mark in item.iter_markers(name="glob"):
print("glob args=%s kwargs=%s" % (mark.args, mark.kwargs))
sys.stdout.flush()
@ -510,7 +561,9 @@ Consider you have a test suite which marks tests for particular platforms,
namely ``pytest.mark.darwin``, ``pytest.mark.win32`` etc. and you
also have tests that run on all platforms and have no specific
marker. If you now want to have a way to only run the tests
for your particular platform, you could use the following plugin::
for your particular platform, you could use the following plugin:
.. code-block:: python
# content of conftest.py
#
@ -519,6 +572,7 @@ for your particular platform, you could use the following plugin::
ALL = set("darwin linux win32".split())
def pytest_runtest_setup(item):
supported_platforms = ALL.intersection(mark.name for mark in item.iter_markers())
plat = sys.platform
@ -526,24 +580,30 @@ for your particular platform, you could use the following plugin::
pytest.skip("cannot run on platform %s" % (plat))
then tests will be skipped if they were specified for a different platform.
Let's write a little test file to show what this looks like::
Let's write a little test file to show what this looks like:
.. code-block:: python
# content of test_plat.py
import pytest
@pytest.mark.darwin
def test_if_apple_is_evil():
pass
@pytest.mark.linux
def test_if_linux_works():
pass
@pytest.mark.win32
def test_if_win32_crashes():
pass
def test_runs_everywhere():
pass
@ -589,28 +649,38 @@ Automatically adding markers based on test names
If you have a test suite where test function names indicate a certain
type of test, you can implement a hook that automatically defines
markers so that you can use the ``-m`` option with it. Let's look
at this test module::
at this test module:
.. code-block:: python
# content of test_module.py
def test_interface_simple():
assert 0
def test_interface_complex():
assert 0
def test_event_simple():
assert 0
def test_something_else():
assert 0
We want to dynamically define two markers and can do it in a
``conftest.py`` plugin::
``conftest.py`` plugin:
.. code-block:: python
# content of conftest.py
import pytest
def pytest_collection_modifyitems(items):
for item in items:
if "interface" in item.nodeid:

View File

@ -515,21 +515,25 @@ Set marks or test ID for individual parametrized test
--------------------------------------------------------------------
Use ``pytest.param`` to apply marks or set test ID to individual parametrized test.
For example::
For example:
.. code-block:: python
# content of test_pytest_param_example.py
import pytest
@pytest.mark.parametrize('test_input,expected', [
('3+5', 8),
pytest.param('1+7', 8,
marks=pytest.mark.basic),
pytest.param('2+4', 6,
marks=pytest.mark.basic,
id='basic_2+4'),
pytest.param('6*9', 42,
marks=[pytest.mark.basic, pytest.mark.xfail],
id='basic_6*9'),
])
@pytest.mark.parametrize(
"test_input,expected",
[
("3+5", 8),
pytest.param("1+7", 8, marks=pytest.mark.basic),
pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
pytest.param(
"6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
),
],
)
def test_eval(test_input, expected):
assert eval(test_input) == expected

View File

@ -57,14 +57,16 @@ Applying marks to ``@pytest.mark.parametrize`` parameters
.. versionchanged:: 3.1
Prior to version 3.1 the supported mechanism for marking values
used the syntax::
used the syntax:
.. code-block:: python
import pytest
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
pytest.mark.xfail(("6*9", 42),),
])
@pytest.mark.parametrize(
"test_input,expected", [("3+5", 8), ("2+4", 6), pytest.mark.xfail(("6*9", 42))]
)
def test_eval(test_input, expected):
assert eval(test_input) == expected
@ -105,9 +107,13 @@ Conditions as strings instead of booleans
.. versionchanged:: 2.4
Prior to pytest-2.4 the only way to specify skipif/xfail conditions was
to use strings::
to use strings:
.. code-block:: python
import sys
@pytest.mark.skipif("sys.version_info >= (3,3)")
def test_function():
...
@ -139,17 +145,20 @@ dictionary which is constructed as follows:
expression is applied.
The pytest ``config`` object allows you to skip based on a test
configuration value which you might have added::
configuration value which you might have added:
.. code-block:: python
@pytest.mark.skipif("not config.getvalue('db')")
def test_function(...):
def test_function():
...
The equivalent with "boolean conditions" is::
The equivalent with "boolean conditions" is:
@pytest.mark.skipif(not pytest.config.getvalue("db"),
reason="--db was not specified")
def test_function(...):
.. code-block:: python
@pytest.mark.skipif(not pytest.config.getvalue("db"), reason="--db was not specified")
def test_function():
pass
.. note::
@ -164,9 +173,13 @@ The equivalent with "boolean conditions" is::
.. versionchanged:: 2.4
Prior to version 2.4, to set a breakpoint in code one needed to use ``pytest.set_trace()``::
Prior to version 2.4, to set a breakpoint in code one needed to use ``pytest.set_trace()``:
.. code-block:: python
import pytest
def test_function():
...
pytest.set_trace() # invoke PDB debugger and tracing

View File

@ -36,15 +36,15 @@ pytest enables test parametrization at several levels:
The builtin :ref:`pytest.mark.parametrize ref` decorator enables
parametrization of arguments for a test function. Here is a typical example
of a test function that implements checking that a certain input leads
to an expected output::
to an expected output:
.. code-block:: python
# content of test_expectation.py
import pytest
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
("6*9", 42),
])
@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
assert eval(test_input) == expected
@ -104,16 +104,18 @@ Note that you could also use the parametrize marker on a class or a module
(see :ref:`mark`) which would invoke several functions with the argument sets.
It is also possible to mark individual test instances within parametrize,
for example with the builtin ``mark.xfail``::
for example with the builtin ``mark.xfail``:
.. code-block:: python
# content of test_expectation.py
import pytest
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
pytest.param("6*9", 42,
marks=pytest.mark.xfail),
])
@pytest.mark.parametrize(
"test_input,expected",
[("3+5", 8), ("2+4", 6), pytest.param("6*9", 42, marks=pytest.mark.xfail)],
)
def test_eval(test_input, expected):
assert eval(test_input) == expected
@ -140,9 +142,13 @@ example, if they're dynamically generated by some function - the behaviour of
pytest is defined by the :confval:`empty_parameter_set_mark` option.
To get all combinations of multiple parametrized arguments you can stack
``parametrize`` decorators::
``parametrize`` decorators:
.. code-block:: python
import pytest
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_foo(x, y):
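The function body is cut off by the hunk; as a hedged, self-contained sketch (the assertions are illustrative), stacking the decorators runs the test for every combination of the two argument lists:

.. code-block:: python

    import pytest


    @pytest.mark.parametrize("x", [0, 1])
    @pytest.mark.parametrize("y", [2, 3])
    def test_foo(x, y):
        # runs four times: (x=0, y=2), (x=1, y=2), (x=0, y=3), (x=1, y=3)
        assert x in (0, 1)
        assert y in (2, 3)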
@ -166,26 +172,36 @@ parametrization.
For example, let's say we want to run a test taking string inputs which
we want to set via a new ``pytest`` command line option. Let's first write
a simple test accepting a ``stringinput`` fixture function argument::
a simple test accepting a ``stringinput`` fixture function argument:
.. code-block:: python
# content of test_strings.py
def test_valid_string(stringinput):
assert stringinput.isalpha()
Now we add a ``conftest.py`` file containing the addition of a
command line option and the parametrization of our test function::
command line option and the parametrization of our test function:
.. code-block:: python
# content of conftest.py
def pytest_addoption(parser):
parser.addoption("--stringinput", action="append", default=[],
help="list of stringinputs to pass to test functions")
parser.addoption(
"--stringinput",
action="append",
default=[],
help="list of stringinputs to pass to test functions",
)
def pytest_generate_tests(metafunc):
if 'stringinput' in metafunc.fixturenames:
metafunc.parametrize("stringinput",
metafunc.config.getoption('stringinput'))
if "stringinput" in metafunc.fixturenames:
metafunc.parametrize("stringinput", metafunc.config.getoption("stringinput"))
If we now pass two stringinput values, our test will run twice:

View File

@ -84,32 +84,44 @@ It is also possible to skip the whole module using
If you wish to skip something conditionally then you can use ``skipif`` instead.
Here is an example of marking a test function to be skipped
when run on an interpreter earlier than Python3.6::
when run on an interpreter earlier than Python3.6:
.. code-block:: python
import sys
@pytest.mark.skipif(sys.version_info < (3,6),
reason="requires python3.6 or higher")
@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
def test_function():
...
If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``.
You can share ``skipif`` markers between modules. Consider this test module::
You can share ``skipif`` markers between modules. Consider this test module:
.. code-block:: python
# content of test_mymodule.py
import mymodule
minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
reason="at least mymodule-1.1 required")
minversion = pytest.mark.skipif(
mymodule.__versioninfo__ < (1, 1), reason="at least mymodule-1.1 required"
)
@minversion
def test_function():
...
You can import the marker and reuse it in another test module::
You can import the marker and reuse it in another test module:
.. code-block:: python
# test_myothermodule.py
from test_mymodule import minversion
@minversion
def test_anotherfunction():
...
@ -128,12 +140,12 @@ so they are supported mainly for backward compatibility reasons.
Skip all test functions of a class or module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the ``skipif`` marker (as any other marker) on classes::
You can use the ``skipif`` marker (as any other marker) on classes:
@pytest.mark.skipif(sys.platform == 'win32',
reason="does not run on windows")
.. code-block:: python
@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
class TestPosixCalls(object):
def test_function(self):
"will not be setup or run under 'win32' platform"
@ -269,10 +281,11 @@ You can change the default value of the ``strict`` parameter using the
~~~~~~~~~~~~~~~~~~~~
As with skipif_ you can also mark your expectation of a failure
on a particular platform::
on a particular platform:
@pytest.mark.xfail(sys.version_info >= (3,6),
reason="python3.6 api changes")
.. code-block:: python
@pytest.mark.xfail(sys.version_info >= (3, 6), reason="python3.6 api changes")
def test_function():
...

View File

@ -6,15 +6,19 @@ Warnings Capture
.. versionadded:: 3.1
Starting from version ``3.1``, pytest now automatically catches warnings during test execution
and displays them at the end of the session::
and displays them at the end of the session:
.. code-block:: python
# content of test_show_warnings.py
import warnings
def api_v1():
warnings.warn(UserWarning("api v1, should use functions from v2"))
return 1
def test_one():
assert api_v1() == 1
@ -195,28 +199,36 @@ Ensuring code triggers a deprecation warning
You can also call a global helper for checking
that a certain function call triggers a ``DeprecationWarning`` or
``PendingDeprecationWarning``::
``PendingDeprecationWarning``:
.. code-block:: python
import pytest
def test_global():
pytest.deprecated_call(myfunction, 17)
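``myfunction`` is not defined in the snippet above; a hedged, self-contained sketch of the same call (the function and its warning message are illustrative):

.. code-block:: python

    import warnings

    import pytest


    def myfunction(n):
        warnings.warn("myfunction is deprecated", DeprecationWarning)
        return n


    def test_global():
        pytest.deprecated_call(myfunction, 17)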
By default, ``DeprecationWarning`` and ``PendingDeprecationWarning`` will not be
caught when using ``pytest.warns`` or ``recwarn`` because default Python warnings filters hide
them. If you wish to record them in your own code, use the
command ``warnings.simplefilter('always')``::
command ``warnings.simplefilter('always')``:
.. code-block:: python
import warnings
import pytest
def test_deprecation(recwarn):
warnings.simplefilter('always')
warnings.simplefilter("always")
warnings.warn("deprecated", DeprecationWarning)
assert len(recwarn) == 1
assert recwarn.pop(DeprecationWarning)
You can also use it as a contextmanager::
You can also use it as a contextmanager:
.. code-block:: python
def test_global():
with pytest.deprecated_call():
@ -238,11 +250,14 @@ Asserting warnings with the warns function
.. versionadded:: 2.8
You can check that code raises a particular warning using ``pytest.warns``,
which works in a similar manner to :ref:`raises <assertraises>`::
which works in a similar manner to :ref:`raises <assertraises>`:
.. code-block:: python
import warnings
import pytest
def test_warning():
with pytest.warns(UserWarning):
warnings.warn("my warning", UserWarning)
@ -269,7 +284,9 @@ You can also call ``pytest.warns`` on a function or code string::
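The example for that sentence falls outside this hunk; a hedged sketch of the callable form (the warning-emitting function is illustrative):

.. code-block:: python

    import warnings

    import pytest


    def make_warning():
        warnings.warn("legacy api", UserWarning)


    def test_warns_callable_form():
        # calls make_warning() and asserts that a UserWarning was emitted
        pytest.warns(UserWarning, make_warning)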
The function also returns a list of all raised warnings (as
``warnings.WarningMessage`` objects), which you can query for
additional information::
additional information:
.. code-block:: python
with pytest.warns(RuntimeWarning) as record:
warnings.warn("another warning", RuntimeWarning)
@ -297,7 +314,9 @@ You can record raised warnings either using ``pytest.warns`` or with
the ``recwarn`` fixture.
To record with ``pytest.warns`` without asserting anything about the warnings,
pass ``None`` as the expected warning type::
pass ``None`` as the expected warning type:
.. code-block:: python
with pytest.warns(None) as record:
warnings.warn("user", UserWarning)
@ -307,10 +326,13 @@ pass ``None`` as the expected warning type::
assert str(record[0].message) == "user"
assert str(record[1].message) == "runtime"
The ``recwarn`` fixture will record warnings for the whole function::
The ``recwarn`` fixture will record warnings for the whole function:
.. code-block:: python
import warnings
def test_hello(recwarn):
warnings.warn("hello", UserWarning)
assert len(recwarn) == 1
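The rest of this example is cut off by the hunk; a hedged sketch of how a recorded warning is typically popped and inspected (``category`` and ``message`` are attributes of the recorded warning object):

.. code-block:: python

    import warnings


    def test_hello(recwarn):
        warnings.warn("hello", UserWarning)
        assert len(recwarn) == 1
        w = recwarn.pop(UserWarning)
        assert issubclass(w.category, UserWarning)
        assert str(w.message) == "hello"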

View File

@ -509,10 +509,13 @@ a :py:class:`Result <pluggy._Result>` instance which encapsulates a result or
exception info. The yield point itself will thus typically not raise
exceptions (unless there are bugs).
Here is an example definition of a hook wrapper::
Here is an example definition of a hook wrapper:
.. code-block:: python
import pytest
@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
do_something_before_next_hook_executes()
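The remainder of the wrapper is outside this hunk; a hedged sketch of the full yield pattern (the ``do_something...`` and ``post_process_result`` names are placeholders, mirroring the snippet above):

.. code-block:: python

    import pytest


    @pytest.hookimpl(hookwrapper=True)
    def pytest_pyfunc_call(pyfuncitem):
        do_something_before_next_hook_executes()  # placeholder, as in the snippet above
        outcome = yield
        # outcome wraps the result of the remaining hook implementations;
        # get_result() re-raises if the inner call raised an exception
        res = outcome.get_result()
        post_process_result(res)  # placeholder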
@ -617,10 +620,13 @@ if you depend on a plugin that is not installed, validation will fail and
the error message will not make much sense to your users.
One approach is to defer the hook implementation to a new plugin instead of
declaring the hook functions directly in your plugin module, for example::
declaring the hook functions directly in your plugin module, for example:
.. code-block:: python
# contents of myplugin.py
class DeferPlugin(object):
"""Simple plugin to defer pytest-xdist hook functions."""
@ -628,8 +634,9 @@ declaring the hook functions directly in your plugin module, for example::
"""standard xdist hook function.
"""
def pytest_configure(config):
if config.pluginmanager.hasplugin('xdist'):
if config.pluginmanager.hasplugin("xdist"):
config.pluginmanager.register(DeferPlugin())
This has the added benefit of allowing you to conditionally install hooks