Merge remote-tracking branch 'upstream/master' into release-5.1.0

Bruno Oliveira 2019-08-09 12:36:19 -03:00
commit 2f065a555f
34 changed files with 621 additions and 204 deletions


@ -243,6 +243,27 @@ Improved Documentation
- `#5416 <https://github.com/pytest-dev/pytest/issues/5416>`_: Fix PytestUnknownMarkWarning in run/skip example.
pytest 4.6.5 (2019-08-05)
=========================
Bug Fixes
---------
- `#4344 <https://github.com/pytest-dev/pytest/issues/4344>`_: Fix RuntimeError/StopIteration when trying to collect package with "__init__.py" only.
- `#5478 <https://github.com/pytest-dev/pytest/issues/5478>`_: Fix encode error when using unicode strings in exceptions with ``pytest.raises``.
- `#5524 <https://github.com/pytest-dev/pytest/issues/5524>`_: Fix issue where ``tmp_path`` and ``tmpdir`` would not remove directories containing files marked as read-only,
which could lead to pytest crashing when executed a second time with the ``--basetemp`` option.
- `#5547 <https://github.com/pytest-dev/pytest/issues/5547>`_: ``--step-wise`` now handles ``xfail(strict=True)`` markers properly.
- `#5650 <https://github.com/pytest-dev/pytest/issues/5650>`_: Improved output when parsing an ini configuration file fails.
pytest 4.6.4 (2019-06-28)
=========================

CODE_OF_CONDUCT.md (new file, 84 lines)

@ -0,0 +1,84 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at coc@pytest.org. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
The coc@pytest.org address is routed to the following people who can also be
contacted individually:
- Brianna Laugher ([@pfctdayelise](https://github.com/pfctdayelise)): brianna@laugher.id.au
- Bruno Oliveira ([@nicoddemus](https://github.com/nicoddemus)): nicoddemus@gmail.com
- Florian Bruhin ([@the-compiler](https://github.com/the-compiler)): pytest@the-compiler.org
- Ronny Pfannschmidt ([@RonnyPfannschmidt](https://github.com/RonnyPfannschmidt)): ich@ronnypfannschmidt.de
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@ -8,6 +8,7 @@ Release announcements
release-5.0.1
release-5.0.0
release-4.6.5
release-4.6.4
release-4.6.3
release-4.6.2


@ -0,0 +1,21 @@
pytest-4.6.5
=======================================
pytest 4.6.5 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Thomas Grainger
Happy testing,
The pytest Development Team


@ -238,14 +238,17 @@ file which provides an alternative explanation for ``Foo`` objects:
def pytest_assertrepr_compare(op, left, right):
if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
return ["Comparing Foo instances:", " vals: %s != %s" % (left.val, right.val)]
return [
"Comparing Foo instances:",
" vals: {} != {}".format(left.val, right.val),
]
now, given this test module:
.. code-block:: python
# content of test_foocompare.py
class Foo(object):
class Foo:
def __init__(self, val):
self.val = val


@ -162,7 +162,10 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
no tests ran in 0.12 seconds
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like::
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
.. code-block:: python
import pytest
help(pytest)


@ -33,11 +33,14 @@ Other plugins may access the `config.cache`_ object to set/get
Rerunning only failures or failures first
-----------------------------------------------
First, let's create 50 test invocations of which only 2 fail::
First, let's create 50 test invocations of which only 2 fail:
.. code-block:: python
# content of test_50.py
import pytest
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
@ -183,15 +186,19 @@ The new config.cache object
Plugins or conftest.py support code can get a cached value using the
pytest ``config`` object. Here is a basic example plugin which
implements a :ref:`fixture` which re-uses previously created state
across pytest invocations::
across pytest invocations:
.. code-block:: python
# content of test_caching.py
import pytest
import time
def expensive_computation():
print("running expensive computation...")
@pytest.fixture
def mydata(request):
val = request.config.cache.get("example/value", None)
@ -201,6 +208,7 @@ across pytest invocations::
request.config.cache.set("example/value", val)
return val
def test_function(mydata):
assert mydata == 23
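The get/set round-trip that ``mydata`` performs can be checked without pytest; this sketch stands a hypothetical ``MiniCache`` dict wrapper in for the real ``config.cache``:

```python
# A minimal stand-in for config.cache (hypothetical MiniCache class,
# not the real pytest API) to show the get/set round-trip used by mydata.
class MiniCache:
    def __init__(self):
        self._store = {}

    def get(self, key, default):
        # config.cache.get also returns `default` on a miss
        return self._store.get(key, default)

    def set(self, key, value):
        self._store[key] = value


calls = []


def expensive_computation():
    calls.append(1)
    return 42


def mydata(cache):
    val = cache.get("example/value", None)
    if val is None:
        val = expensive_computation()
        cache.set("example/value", val)
    return val


cache = MiniCache()
assert mydata(cache) == 42
assert mydata(cache) == 42
assert len(calls) == 1  # computed only once; second call hit the cache
```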


@ -49,16 +49,21 @@ Using print statements for debugging
---------------------------------------------------
One primary benefit of the default capturing of stdout/stderr output
is that you can use print statements for debugging::
is that you can use print statements for debugging:
.. code-block:: python
# content of test_module.py
def setup_function(function):
print("setting up %s" % function)
def test_func1():
assert True
def test_func2():
assert False
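Conceptually, the default capturing redirects ``sys.stdout`` while the test body runs; here is a rough standard-library sketch (not pytest's actual capture machinery):

```python
import io
from contextlib import redirect_stdout

# Rough sketch of default capturing: stdout is redirected into a buffer
# while the test body runs, and the buffer is only shown on failure.
def run_captured(func):
    buf = io.StringIO()
    with redirect_stdout(buf):
        func()
    return buf.getvalue()


def noisy_test():
    print("setting up")
    print("running")


output = run_captured(noisy_test)
assert output == "setting up\nrunning\n"
```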


@ -459,7 +459,9 @@ Internal classes accessed through ``Node``
.. versionremoved:: 4.0
Access of ``Module``, ``Function``, ``Class``, ``Instance``, ``File`` and ``Item`` through ``Node`` instances now issues
this warning::
this warning:
.. code-block:: text
usage of Function.Module is deprecated, please use pytest.Module instead


@ -218,15 +218,21 @@ namespace in which your doctests run. It is intended to be used within
your own fixtures to provide the tests that use them with context.
``doctest_namespace`` is a standard ``dict`` object into which you
place the objects you want to appear in the doctest namespace::
place the objects you want to appear in the doctest namespace:
.. code-block:: python
# content of conftest.py
import numpy
@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
doctest_namespace['np'] = numpy
doctest_namespace["np"] = numpy
which can then be used in your doctests directly::
which can then be used in your doctests directly:
.. code-block:: python
# content of numpy.py
def arange():
@ -246,7 +252,9 @@ Skipping tests dynamically
.. versionadded:: 4.4
You can use ``pytest.skip`` to dynamically skip doctests. For example::
You can use ``pytest.skip`` to dynamically skip doctests. For example:
.. code-block:: text
>>> import sys, pytest
>>> if sys.platform.startswith('win'):


@ -18,7 +18,7 @@ example: specifying and selecting acceptance tests
return AcceptFixture(request)
class AcceptFixture(object):
class AcceptFixture:
def __init__(self, request):
if not request.config.getoption("acceptance"):
pytest.skip("specify -A to run acceptance tests")
@ -65,7 +65,7 @@ extend the `accept example`_ by putting this in our test module:
return arg
class TestSpecialAcceptance(object):
class TestSpecialAcceptance:
def test_sometest(self, accept):
assert accept.tmpdir.join("special").check()


@ -33,7 +33,7 @@ You can "mark" a test function with custom metadata like this:
pass
class TestClass(object):
class TestClass:
def test_method(self):
pass
@ -278,7 +278,7 @@ its test methods:
@pytest.mark.webtest
class TestClass(object):
class TestClass:
def test_startup(self):
pass
@ -295,7 +295,7 @@ Due to legacy reasons, it is possible to set the ``pytestmark`` attribute on a T
import pytest
class TestClass(object):
class TestClass:
pytestmark = pytest.mark.webtest
or if you need to use multiple markers you can use a list:
@ -305,7 +305,7 @@ or if you need to use multiple markers you can use a list:
import pytest
class TestClass(object):
class TestClass:
pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
You can also set a module level marker::
@ -523,7 +523,7 @@ code you can read over all such settings. Example:
@pytest.mark.glob("class", x=2)
class TestClass(object):
class TestClass:
@pytest.mark.glob("function", x=3)
def test_something(self):
pass
@ -539,7 +539,7 @@ test function. From a conftest file we can read it like this:
def pytest_runtest_setup(item):
for mark in item.iter_markers(name="glob"):
print("glob args=%s kwargs=%s" % (mark.args, mark.kwargs))
print("glob args={} kwargs={}".format(mark.args, mark.kwargs))
sys.stdout.flush()
Let's run this without capturing output and see what we get:


@ -19,24 +19,30 @@ Generating parameters combinations, depending on command line
Let's say we want to execute a test with different computation
parameters and the parameter range shall be determined by a command
line argument. Let's first write a simple (do-nothing) computation test::
line argument. Let's first write a simple (do-nothing) computation test:
.. code-block:: python
# content of test_compute.py
def test_compute(param1):
assert param1 < 4
Now we add a test configuration like this::
Now we add a test configuration like this:
.. code-block:: python
# content of conftest.py
def pytest_addoption(parser):
parser.addoption("--all", action="store_true",
help="run all combinations")
parser.addoption("--all", action="store_true", help="run all combinations")
def pytest_generate_tests(metafunc):
if 'param1' in metafunc.fixturenames:
if metafunc.config.getoption('all'):
if "param1" in metafunc.fixturenames:
if metafunc.config.getoption("all"):
end = 5
else:
end = 2
@ -83,7 +89,9 @@ Running pytest with ``--collect-only`` will show the generated IDs.
Numbers, strings, booleans and None will have their usual string representation
used in the test ID. For other objects, pytest will make a string based on
the argument name::
the argument name:
.. code-block:: python
# content of test_time.py
@ -112,7 +120,7 @@ the argument name::
def idfn(val):
if isinstance(val, (datetime,)):
# note this wouldn't show any hours/minutes/seconds
return val.strftime('%Y%m%d')
return val.strftime("%Y%m%d")
@pytest.mark.parametrize("a,b,expected", testdata, ids=idfn)
@ -120,12 +128,18 @@ the argument name::
diff = a - b
assert diff == expected
@pytest.mark.parametrize("a,b,expected", [
pytest.param(datetime(2001, 12, 12), datetime(2001, 12, 11),
timedelta(1), id='forward'),
pytest.param(datetime(2001, 12, 11), datetime(2001, 12, 12),
timedelta(-1), id='backward'),
])
@pytest.mark.parametrize(
"a,b,expected",
[
pytest.param(
datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1), id="forward"
),
pytest.param(
datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1), id="backward"
),
],
)
def test_timedistance_v3(a, b, expected):
diff = a - b
assert diff == expected
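The ``idfn`` helper above can be exercised directly; note that non-datetime values fall through to ``None``, telling pytest to build its default ID:

```python
from datetime import datetime

# The idfn helper from the example, checked directly: datetimes get a
# compact date ID, anything else implicitly returns None.
def idfn(val):
    if isinstance(val, (datetime,)):
        # note this wouldn't show any hours/minutes/seconds
        return val.strftime("%Y%m%d")


assert idfn(datetime(2001, 12, 12)) == "20011212"
assert idfn("not a datetime") is None
```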
@ -171,10 +185,13 @@ A quick port of "testscenarios"
Here is a quick port to run tests configured with `test scenarios`_,
an add-on from Robert Collins for the standard unittest framework. We
only have to work a bit to construct the correct arguments for pytest's
:py:func:`Metafunc.parametrize`::
:py:func:`Metafunc.parametrize`:
.. code-block:: python
# content of test_scenarios.py
def pytest_generate_tests(metafunc):
idlist = []
argvalues = []
@ -182,13 +199,15 @@ only have to work a bit to construct the correct arguments for pytest's
idlist.append(scenario[0])
items = scenario[1].items()
argnames = [x[0] for x in items]
argvalues.append(([x[1] for x in items]))
argvalues.append([x[1] for x in items])
metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})
class TestSampleWithScenarios(object):
scenario1 = ("basic", {"attribute": "value"})
scenario2 = ("advanced", {"attribute": "value2"})
class TestSampleWithScenarios:
scenarios = [scenario1, scenario2]
def test_demo1(self, attribute):
@ -244,11 +263,15 @@ The parametrization of test functions happens at collection
time. It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that; first,
the actual test requiring a ``db`` object::
the actual test requiring a ``db`` object:
.. code-block:: python
# content of test_backends.py
import pytest
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
@ -256,20 +279,27 @@ the actual test requiring a ``db`` object::
We can now add a test configuration that generates two invocations of
the ``test_db_initialized`` function and also implements a factory that
creates a database object for the actual test invocations::
creates a database object for the actual test invocations:
.. code-block:: python
# content of conftest.py
import pytest
def pytest_generate_tests(metafunc):
if 'db' in metafunc.fixturenames:
metafunc.parametrize("db", ['d1', 'd2'], indirect=True)
class DB1(object):
def pytest_generate_tests(metafunc):
if "db" in metafunc.fixturenames:
metafunc.parametrize("db", ["d1", "d2"], indirect=True)
class DB1:
"one database object"
class DB2(object):
class DB2:
"alternative database object"
@pytest.fixture
def db(request):
if request.param == "d1":
@ -327,23 +357,29 @@ parameter on particular arguments. It can be done by passing list or tuple of
arguments' names to ``indirect``. In the example below there is a function ``test_indirect`` which uses
two fixtures: ``x`` and ``y``. Here we pass ``indirect`` a list containing the name of the
fixture ``x``. The indirect parameter will be applied to this argument only, and the value ``a``
will be passed to the respective fixture function::
will be passed to the respective fixture function:
.. code-block:: python
# content of test_indirect_list.py
import pytest
@pytest.fixture(scope='function')
@pytest.fixture(scope="function")
def x(request):
return request.param * 3
@pytest.fixture(scope='function')
@pytest.fixture(scope="function")
def y(request):
return request.param * 2
@pytest.mark.parametrize('x, y', [('a', 'b')], indirect=['x'])
@pytest.mark.parametrize("x, y", [("a", "b")], indirect=["x"])
def test_indirect(x, y):
assert x == 'aaa'
assert y == 'b'
assert x == "aaa"
assert y == "b"
The result of this test will be successful:
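The effect of ``indirect=["x"]`` can be sketched in plain Python: the value ``"a"`` is routed through the ``x`` fixture (which triples it), while ``"b"`` reaches the test untouched. The ``route`` helper here is purely illustrative, not part of pytest:

```python
def x_fixture(param):
    # stands in for the x fixture: receives the parametrized value
    return param * 3


def route(values, indirect):
    # hypothetical helper mimicking indirect routing for this one case
    fixtures = {"x": x_fixture}
    out = {}
    for name, value in values.items():
        if name in indirect:
            out[name] = fixtures[name](value)  # value goes via the fixture
        else:
            out[name] = value  # value is handed to the test directly
    return out


args = route({"x": "a", "y": "b"}, indirect=["x"])
assert args["x"] == "aaa"  # "a" went through the fixture
assert args["y"] == "b"    # "b" was passed straight to the test
```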
@ -370,23 +406,28 @@ Parametrizing test methods through per-class configuration
Here is an example ``pytest_generate_tests`` function implementing a
parametrization scheme similar to Michael Foord's `unittest
parametrizer`_ but in a lot less code::
parametrizer`_ but in a lot less code:
.. code-block:: python
# content of ./test_parametrize.py
import pytest
def pytest_generate_tests(metafunc):
# called once per each test function
funcarglist = metafunc.cls.params[metafunc.function.__name__]
argnames = sorted(funcarglist[0])
metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
for funcargs in funcarglist])
metafunc.parametrize(
argnames, [[funcargs[name] for name in argnames] for funcargs in funcarglist]
)
class TestClass(object):
class TestClass:
# a map specifying multiple argument sets for a test method
params = {
'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
'test_zerodivision': [dict(a=1, b=0), ],
"test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
"test_zerodivision": [dict(a=1, b=0)],
}
def test_equals(self, a, b):
@ -447,36 +488,47 @@ If you want to compare the outcomes of several implementations of a given
API, you can write test functions that receive the already imported implementations
and get skipped in case the implementation is not importable/available. Let's
say we have a "base" implementation and the other (possibly optimized ones)
need to provide similar results::
need to provide similar results:
.. code-block:: python
# content of conftest.py
import pytest
@pytest.fixture(scope="session")
def basemod(request):
return pytest.importorskip("base")
@pytest.fixture(scope="session", params=["opt1", "opt2"])
def optmod(request):
return pytest.importorskip(request.param)
And then a base implementation of a simple function::
And then a base implementation of a simple function:
.. code-block:: python
# content of base.py
def func1():
return 1
And an optimized version::
And an optimized version:
.. code-block:: python
# content of opt1.py
def func1():
return 1.0001
And finally a little test module::
And finally a little test module:
.. code-block:: python
# content of test_module.py
def test_func1(basemod, optmod):
assert round(basemod.func1(), 3) == round(optmod.func1(), 3)
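The ``importorskip`` pattern boils down to a guarded import; a standard-library sketch that returns ``None`` instead of skipping, for illustration only:

```python
import importlib

# Guarded import: hand back the module if it is available, otherwise
# signal "unavailable" (pytest.importorskip would skip the test here).
def import_or_none(name):
    try:
        return importlib.import_module(name)
    except ImportError:
        return None


assert import_or_none("math") is not None
assert import_or_none("no_such_optimized_module") is None
```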
@ -579,22 +631,28 @@ Use :func:`pytest.raises` with the
in which some tests raise exceptions and others do not.
It is helpful to define a no-op context manager ``does_not_raise`` to serve
as a complement to ``raises``. For example::
as a complement to ``raises``. For example:
.. code-block:: python
from contextlib import contextmanager
import pytest
@contextmanager
def does_not_raise():
yield
@pytest.mark.parametrize('example_input,expectation', [
@pytest.mark.parametrize(
"example_input,expectation",
[
(3, does_not_raise()),
(2, does_not_raise()),
(1, does_not_raise()),
(0, pytest.raises(ZeroDivisionError)),
])
],
)
def test_division(example_input, expectation):
"""Test how much I know division."""
with expectation:
@ -604,14 +662,20 @@ In the example above, the first three test cases should run unexceptionally,
while the fourth should raise ``ZeroDivisionError``.
If you're only supporting Python 3.7+, you can simply use ``nullcontext``
to define ``does_not_raise``::
to define ``does_not_raise``:
.. code-block:: python
from contextlib import nullcontext as does_not_raise
Or, if you're supporting Python 3.3+ you can use::
Or, if you're supporting Python 3.3+ you can use:
.. code-block:: python
from contextlib import ExitStack as does_not_raise
Or, if desired, you can ``pip install contextlib2`` and use::
Or, if desired, you can ``pip install contextlib2`` and use:
.. code-block:: python
from contextlib2 import ExitStack as does_not_raise
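The two branches of ``test_division`` can be checked by hand with the ``nullcontext`` variant of ``does_not_raise``:

```python
from contextlib import nullcontext as does_not_raise

# The no-op context lets a valid division pass through unchanged...
with does_not_raise():
    assert 6 / 2 == 3.0

# ...while dividing by zero raises, which the parametrized test would
# instead catch via pytest.raises(ZeroDivisionError).
raised = False
try:
    6 / 0
except ZeroDivisionError:
    raised = True
assert raised
```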


@ -31,7 +31,7 @@ you will see that ``pytest`` only collects test-modules, which do not match the
.. code-block:: pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 5 items
@ -131,12 +131,15 @@ Here is an example:
This would make ``pytest`` look for tests in files that match the
``check_*.py`` glob-pattern, ``Check`` prefixes in classes, and functions and methods
that match ``*_check``. For example, if we have::
that match ``*_check``. For example, if we have:
.. code-block:: python
# content of check_myapp.py
class CheckMyApp(object):
class CheckMyApp:
def simple_check(self):
pass
def complex_check(self):
pass
@ -238,7 +241,9 @@ You can easily instruct ``pytest`` to discover tests from every Python file:
However, many projects will have a ``setup.py`` which they don't want to be
imported. Moreover, there may be files only importable by a specific Python
version. For such cases you can dynamically define files to be ignored by
listing them in a ``conftest.py`` file::
listing them in a ``conftest.py`` file:
.. code-block:: python
# content of conftest.py
import sys
@ -247,7 +252,9 @@ listing them in a ``conftest.py`` file::
if sys.version_info[0] > 2:
collect_ignore.append("pkg/module_py2.py")
and then if you have a module file like this::
and then if you have a module file like this:
.. code-block:: python
# content of pkg/module_py2.py
def test_only_on_python2():
@ -256,7 +263,9 @@ and then if you have a module file like this::
except Exception, e:
pass
and a ``setup.py`` dummy file like this::
and a ``setup.py`` dummy file like this:
.. code-block:: python
# content of setup.py
0 / 0 # will raise exception if imported
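The ``collect_ignore`` logic above can be rewritten as a self-contained function so the resulting list is easy to inspect (the function name is illustrative, not a pytest hook):

```python
import sys

# Build the ignore list the way the conftest.py above does: always skip
# setup.py, and additionally skip the Python-2-only module on Python 3.
def build_collect_ignore(version_info):
    collect_ignore = ["setup.py"]
    if version_info[0] > 2:
        collect_ignore.append("pkg/module_py2.py")
    return collect_ignore


assert build_collect_ignore((3, 8)) == ["setup.py", "pkg/module_py2.py"]
assert build_collect_ignore((2, 7)) == ["setup.py"]
assert build_collect_ignore(sys.version_info)[0] == "setup.py"
```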
@ -295,7 +304,9 @@ patterns to ``collect_ignore_glob``.
The following example ``conftest.py`` ignores the file ``setup.py`` and in
addition all files that end with ``*_py2.py`` when executed with a Python 3
interpreter::
interpreter:
.. code-block:: python
# content of conftest.py
import sys


@ -238,7 +238,7 @@ Example:
def checkconfig(x):
__tracebackhide__ = True
if not hasattr(x, "config"):
pytest.fail("not configured: %s" % (x,))
pytest.fail("not configured: {}".format(x))
def test_something():
@ -280,7 +280,7 @@ this to make sure unexpected exception types aren't hidden:
def checkconfig(x):
__tracebackhide__ = operator.methodcaller("errisinstance", ConfigException)
if not hasattr(x, "config"):
raise ConfigException("not configured: %s" % (x,))
raise ConfigException("not configured: {}".format(x))
def test_something():
@ -491,7 +491,7 @@ tests in a class. Here is a test module example:
@pytest.mark.incremental
class TestUserHandling(object):
class TestUserHandling:
def test_login(self):
pass
@ -556,7 +556,7 @@ Here is an example for making a ``db`` fixture available in a directory:
import pytest
class DB(object):
class DB:
pass


@ -5,16 +5,19 @@ A session-scoped fixture effectively has access to all
collected test items. Here is an example of a fixture
function which walks all collected tests and looks
if their test class defines a ``callme`` method and
calls it::
calls it:
.. code-block:: python
# content of conftest.py
import pytest
@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests(request):
print("callattr_ahead_of_alltests called")
seen = set([None])
seen = {None}
session = request.node
for item in session.items:
cls = item.getparent(pytest.Class)
@ -24,11 +27,14 @@ calls it::
seen.add(cls)
test classes may now define a ``callme`` method which
will be called ahead of running any tests::
will be called ahead of running any tests:
.. code-block:: python
# content of test_module.py
class TestHello(object):
class TestHello:
@classmethod
def callme(cls):
print("callme called!")
@ -39,16 +45,20 @@ will be called ahead of running any tests::
def test_method2(self):
print("test_method1 called")
class TestOther(object):
class TestOther:
@classmethod
def callme(cls):
print("callme other called")
def test_other(self):
print("test other")
# works with unittest as well ...
import unittest
class SomeTest(unittest.TestCase):
@classmethod
def callme(self):


@ -15,7 +15,9 @@ Running an existing test suite with pytest
Say you want to contribute to an existing repository somewhere.
After pulling the code into your development space using some
flavor of version control and (optionally) setting up a virtualenv
you will want to run::
you will want to run:
.. code-block:: bash
cd <repository>
pip install -e . # Environment dependent alternatives include


@ -49,16 +49,21 @@ argument. For each argument name, a fixture function with that name provides
the fixture object. Fixture functions are registered by marking them with
:py:func:`@pytest.fixture <_pytest.python.fixture>`. Let's look at a simple
self-contained test module containing a fixture and a test function
using it::
using it:
.. code-block:: python
# content of ./test_smtpsimple.py
import pytest
@pytest.fixture
def smtp_connection():
import smtplib
return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo()
assert response == 250
@ -180,12 +185,15 @@ Possible values for ``scope`` are: ``function``, ``class``, ``module``, ``packag
The next example puts the fixture function into a separate ``conftest.py`` file
so that tests from multiple test modules in the directory can
access the fixture function::
access the fixture function:
.. code-block:: python
# content of conftest.py
import pytest
import smtplib
@pytest.fixture(scope="module")
def smtp_connection():
return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
@ -193,16 +201,20 @@ access the fixture function::
The name of the fixture again is ``smtp_connection`` and you can access its
result by listing the name ``smtp_connection`` as an input parameter in any
test or fixture function (in or below the directory where ``conftest.py`` is
located)::
located):
.. code-block:: python
# content of test_module.py
def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
assert 0 # for demo purposes
def test_noop(smtp_connection):
response, msg = smtp_connection.noop()
assert response == 250
@ -477,18 +489,21 @@ Fixtures can introspect the requesting test context
Fixture functions can accept the :py:class:`request <FixtureRequest>` object
to introspect the "requesting" test function, class or module context.
Further extending the previous ``smtp_connection`` fixture example, let's
read an optional server URL from the test module which uses our fixture::
read an optional server URL from the test module which uses our fixture:
.. code-block:: python
# content of conftest.py
import pytest
import smtplib
@pytest.fixture(scope="module")
def smtp_connection(request):
server = getattr(request.module, "smtpserver", "smtp.gmail.com")
smtp_connection = smtplib.SMTP(server, 587, timeout=5)
yield smtp_connection
print("finalizing %s (%s)" % (smtp_connection, server))
print("finalizing {} ({})".format(smtp_connection, server))
smtp_connection.close()
We use the ``request.module`` attribute to optionally obtain an
@ -503,12 +518,15 @@ again, nothing much has changed:
2 failed in 0.12 seconds
Let's quickly create another test module that actually sets the
server URL in its module namespace::
server URL in its module namespace:
.. code-block:: python
# content of test_anothersmtp.py
smtpserver = "mail.python.org" # will be read by smtp fixture
def test_showhelo(smtp_connection):
assert 0, smtp_connection.helo()
@ -540,16 +558,14 @@ of a fixture is needed multiple times in a single test. Instead of returning
data directly, the fixture instead returns a function which generates the data.
This function can then be called multiple times in the test.
Factories can have parameters as needed::
Factories can have parameters as needed:
.. code-block:: python
@pytest.fixture
def make_customer_record():
def _make_customer_record(name):
return {
"name": name,
"orders": []
}
return {"name": name, "orders": []}
return _make_customer_record
@ -559,7 +575,9 @@ Factories can have parameters as needed::
customer_2 = make_customer_record("Mike")
customer_3 = make_customer_record("Meredith")
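The factory pattern can be exercised without pytest: the fixture simply returns the inner function, which the test then calls as often as needed:

```python
# The fixture body from the example, called as a plain function: it
# returns the factory, and each factory call builds a fresh record.
def make_customer_record():
    def _make_customer_record(name):
        return {"name": name, "orders": []}

    return _make_customer_record


factory = make_customer_record()
customer_1 = factory("Lisa")
customer_2 = factory("Mike")
assert customer_1 == {"name": "Lisa", "orders": []}
assert customer_2["name"] == "Mike"
assert customer_1 is not customer_2  # each call builds a fresh record
```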
If the data created by the factory requires managing, the fixture can take care of that::
If the data created by the factory requires managing, the fixture can take care of that:
.. code-block:: python
@pytest.fixture
def make_customer_record():
@ -598,14 +616,16 @@ configured in multiple ways.
Extending the previous example, we can flag the fixture to create two
``smtp_connection`` fixture instances which will cause all tests using the fixture
to run twice. The fixture function gets access to each parameter
through the special :py:class:`request <FixtureRequest>` object::
through the special :py:class:`request <FixtureRequest>` object:
.. code-block:: python
# content of conftest.py
import pytest
import smtplib
@pytest.fixture(scope="module",
params=["smtp.gmail.com", "mail.python.org"])
@pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
def smtp_connection(request):
smtp_connection = smtplib.SMTP(request.param, 587, timeout=5)
yield smtp_connection
@ -690,28 +710,35 @@ Numbers, strings, booleans and None will have their usual string
representation used in the test ID. For other objects, pytest will
make a string based on the argument name. It is possible to customise
the string used in a test ID for a certain fixture value by using the
``ids`` keyword argument::
``ids`` keyword argument:
.. code-block:: python
# content of test_ids.py
import pytest
@pytest.fixture(params=[0, 1], ids=["spam", "ham"])
def a(request):
return request.param
def test_a(a):
pass
def idfn(fixture_value):
if fixture_value == 0:
return "eggs"
else:
return None
@pytest.fixture(params=[0, 1], ids=idfn)
def b(request):
return request.param
def test_b(b):
pass
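The ``idfn`` used for fixture ``b`` can be checked directly: ``0`` maps to ``"eggs"``, anything else returns ``None`` so pytest falls back to its automatic ID:

```python
# idfn from the example: a custom ID only for the value 0.
def idfn(fixture_value):
    if fixture_value == 0:
        return "eggs"
    else:
        return None


assert idfn(0) == "eggs"
assert idfn(1) is None  # pytest would generate "1" automatically
```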
@ -754,14 +781,19 @@ Using marks with parametrized fixtures
:func:`pytest.param` can be used to apply marks in value sets of parametrized fixtures in the same way
that they can be used with :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>`.
Example::
Example:
.. code-block:: python
# content of test_fixture_marks.py
import pytest
@pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
def data_set(request):
return request.param
def test_data(data_set):
pass
@ -792,20 +824,25 @@ can use other fixtures themselves. This contributes to a modular design
of your fixtures and allows re-use of framework-specific fixtures across
many projects. As a simple example, we can extend the previous example
and instantiate an object ``app`` where we stick the already defined
``smtp_connection`` resource into it::
``smtp_connection`` resource into it:
.. code-block:: python
# content of test_appsetup.py
import pytest
class App(object):
class App:
def __init__(self, smtp_connection):
self.smtp_connection = smtp_connection
@pytest.fixture(scope="module")
def app(smtp_connection):
return App(smtp_connection)
def test_smtp_connection_exists(app):
assert app.smtp_connection
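Stripped of the fixture machinery, the wiring here is plain function composition — a minimal standalone sketch of what pytest assembles for this example (``object()`` standing in for the real SMTP connection):

```python
class App:
    def __init__(self, smtp_connection):
        self.smtp_connection = smtp_connection


smtp_connection = object()   # stand-in for the value the smtp_connection fixture yields
app = App(smtp_connection)   # what the ``app`` fixture returns

# the test then simply receives ``app`` with the dependency already injected
assert app.smtp_connection is smtp_connection
```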
@ -854,11 +891,14 @@ this eases testing of applications which create and use global state.
The following example uses two parametrized fixtures, one of which is
scoped on a per-module basis, and all the functions perform ``print`` calls
to show the setup/teardown flow::
to show the setup/teardown flow:
.. code-block:: python
# content of test_module.py
import pytest
@pytest.fixture(scope="module", params=["mod1", "mod2"])
def modarg(request):
param = request.param
@ -866,6 +906,7 @@ to show the setup/teardown flow::
yield param
print(" TEARDOWN modarg %s" % param)
@pytest.fixture(scope="function", params=[1, 2])
def otherarg(request):
param = request.param
@ -873,12 +914,17 @@ to show the setup/teardown flow::
yield param
print(" TEARDOWN otherarg %s" % param)
def test_0(otherarg):
print(" RUN test0 with otherarg %s" % otherarg)
def test_1(modarg):
print(" RUN test1 with modarg %s" % modarg)
def test_2(otherarg, modarg):
print(" RUN test2 with otherarg %s and modarg %s" % (otherarg, modarg))
print(" RUN test2 with otherarg {} and modarg {}".format(otherarg, modarg))
Let's run the tests in verbose mode while looking at the print output:
@ -953,7 +999,9 @@ current working directory but otherwise do not care for the concrete
directory. Here is how you can use the standard `tempfile
<http://docs.python.org/library/tempfile.html>`_ and pytest fixtures to
achieve it. We separate the creation of the fixture into a conftest.py
file::
file:
.. code-block:: python
# content of conftest.py
@ -961,19 +1009,23 @@ file::
import tempfile
import os
@pytest.fixture()
def cleandir():
newpath = tempfile.mkdtemp()
os.chdir(newpath)
and declare its use in a test module via a ``usefixtures`` marker::
and declare its use in a test module via a ``usefixtures`` marker:
.. code-block:: python
# content of test_setenv.py
import os
import pytest
@pytest.mark.usefixtures("cleandir")
class TestDirectoryInit(object):
class TestDirectoryInit:
def test_cwd_starts_empty(self):
assert os.listdir(os.getcwd()) == []
with open("myfile", "w") as f:
@ -1050,25 +1102,32 @@ without declaring a function argument explicitly or a `usefixtures`_ decorator.
As a practical example, suppose we have a database fixture which has a
begin/rollback/commit architecture and we want to automatically surround
each test method by a transaction and a rollback. Here is a dummy
self-contained implementation of this idea::
self-contained implementation of this idea:
.. code-block:: python
# content of test_db_transact.py
import pytest
class DB(object):
class DB:
def __init__(self):
self.intransaction = []
def begin(self, name):
self.intransaction.append(name)
def rollback(self):
self.intransaction.pop()
@pytest.fixture(scope="module")
def db():
return DB()
class TestClass(object):
class TestClass:
@pytest.fixture(autouse=True)
def transact(self, request, db):
db.begin(request.function.__name__)
@ -1116,7 +1175,9 @@ Here is how autouse fixtures work in other scopes:
Note that the above ``transact`` fixture may very well be a fixture that
you want to make available in your project without having it generally
active. The canonical way to do that is to put the transact definition
into a conftest.py file **without** using ``autouse``::
into a conftest.py file **without** using ``autouse``:
.. code-block:: python
# content of conftest.py
@pytest.fixture
@ -1125,10 +1186,12 @@ into a conftest.py file **without** using ``autouse``::
yield
db.rollback()
and then e.g. have a TestClass using it by declaring the need::
and then e.g. have a TestClass using it by declaring the need:
.. code-block:: python
@pytest.mark.usefixtures("transact")
class TestClass(object):
class TestClass:
def test_method1(self):
...

View File

@ -21,19 +21,23 @@ funcarg for a test function is required. If a factory wants to
re-use a resource across different scopes, it would often use
the ``request.cached_setup()`` helper to manage caching of
resources. Here is a basic example how we could implement
a per-session Database object::
a per-session Database object:
.. code-block:: python
# content of conftest.py
class Database(object):
class Database:
def __init__(self):
print("database instance created")
def destroy(self):
print("database instance destroyed")
def pytest_funcarg__db(request):
return request.cached_setup(setup=DataBase,
teardown=lambda db: db.destroy,
scope="session")
    return request.cached_setup(
        setup=Database, teardown=lambda db: db.destroy(), scope="session"
    )
There are several limitations and difficulties with this approach:
@ -68,7 +72,9 @@ Direct scoping of fixture/funcarg factories
Instead of calling cached_setup() with a cache scope, you can use the
:ref:`@pytest.fixture <pytest.fixture>` decorator and directly state
the scope::
the scope:
.. code-block:: python
@pytest.fixture(scope="session")
def db(request):
@ -90,7 +96,9 @@ Previously, funcarg factories could not directly cause parametrization.
You needed to specify a ``@parametrize`` decorator on your test function
or implement a ``pytest_generate_tests`` hook to perform
parametrization, i.e. calling a test multiple times with different value
sets. pytest-2.3 introduces a decorator for use on the factory itself::
sets. pytest-2.3 introduces a decorator for use on the factory itself:
.. code-block:: python
@pytest.fixture(params=["mysql", "pg"])
def db(request):
@ -107,7 +115,9 @@ allow to re-use already written factories because effectively
parametrized via
:py:func:`~_pytest.python.Metafunc.parametrize(indirect=True)` calls.
Of course it's perfectly fine to combine parametrization and scoping::
Of course it's perfectly fine to combine parametrization and scoping:
.. code-block:: python
@pytest.fixture(scope="session", params=["mysql", "pg"])
def db(request):
@ -128,7 +138,9 @@ No ``pytest_funcarg__`` prefix when using @fixture decorator
When using the ``@fixture`` decorator the name of the function
denotes the name under which the resource can be accessed as a function
argument::
argument:
.. code-block:: python
@pytest.fixture()
def db(request):
@ -137,7 +149,9 @@ argument::
The name under which the funcarg resource can be requested is ``db``.
You can still use the "old" non-decorator way of specifying funcarg factories
aka::
aka:
.. code-block:: python
def pytest_funcarg__db(request):
...

View File

@ -35,12 +35,15 @@ Install ``pytest``
Create your first test
----------------------------------------------------------
Create a simple test function with just four lines of code::
Create a simple test function with just four lines of code:
.. code-block:: python
# content of test_sample.py
def func(x):
return x + 1
def test_answer():
assert func(3) == 5
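The assertion is intentionally wrong — ``func(3)`` evaluates to ``4`` — so the first run produces a failure report. A standalone check of the actual value:

```python
def func(x):
    return x + 1


# the docs' test asserts func(3) == 5 to demonstrate a failing test;
# the true return value is 4
assert func(3) == 4
```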
@ -83,13 +86,18 @@ Run multiple tests
Assert that a certain exception is raised
--------------------------------------------------------------
Use the :ref:`raises <assertraises>` helper to assert that some code raises an exception::
Use the :ref:`raises <assertraises>` helper to assert that some code raises an exception:
.. code-block:: python
# content of test_sysexit.py
import pytest
def f():
raise SystemExit(1)
def test_mytest():
with pytest.raises(SystemExit):
f()
@ -105,17 +113,19 @@ Execute the test function with “quiet” reporting mode:
Group multiple tests in a class
--------------------------------------------------------------
Once you develop multiple tests, you may want to group them into a class. pytest makes it easy to create a class containing more than one test::
Once you develop multiple tests, you may want to group them into a class. pytest makes it easy to create a class containing more than one test:
.. code-block:: python
# content of test_class.py
class TestClass(object):
class TestClass:
def test_one(self):
x = "this"
assert 'h' in x
assert "h" in x
def test_two(self):
x = "hello"
assert hasattr(x, 'check')
assert hasattr(x, "check")
``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename:
@ -142,7 +152,9 @@ The first test passed and the second failed. You can easily see the intermediate
Request a unique temporary directory for functional tests
--------------------------------------------------------------
``pytest`` provides `Builtin fixtures/function arguments <https://docs.pytest.org/en/latest/builtin.html>`_ to request arbitrary resources, like a unique temporary directory::
``pytest`` provides `Builtin fixtures/function arguments <https://docs.pytest.org/en/latest/builtin.html>`_ to request arbitrary resources, like a unique temporary directory:
.. code-block:: python
# content of test_tmpdir.py
def test_needsfiles(tmpdir):

View File

@ -12,13 +12,17 @@ pip_ for installing your application and any dependencies,
as well as the ``pytest`` package itself.
This ensures your code and dependencies are isolated from your system Python installation.
Next, place a ``setup.py`` file in the root of your package with the following minimum content::
Next, place a ``setup.py`` file in the root of your package with the following minimum content:
.. code-block:: python
from setuptools import setup, find_packages
setup(name="PACKAGENAME", packages=find_packages())
Where ``PACKAGENAME`` is the name of your package. You can then install your package in "editable" mode by running from the same directory::
Where ``PACKAGENAME`` is the name of your package. You can then install your package in "editable" mode by running from the same directory:
.. code-block:: bash
pip install -e .
@ -60,7 +64,9 @@ Tests outside application code
Putting tests into an extra directory outside your actual application code
might be useful if you have many functional tests or for other reasons want
to keep tests separate from actual application code (often a good idea)::
to keep tests separate from actual application code (often a good idea):
.. code-block:: text
setup.py
mypkg/
@ -92,7 +98,9 @@ be imported as ``test_app`` and ``test_view`` top-level modules by adding ``test
``sys.path``.
If you need to have test modules with the same name, you might add ``__init__.py`` files to your
``tests`` folder and subfolders, changing them to packages::
``tests`` folder and subfolders, changing them to packages:
.. code-block:: text
setup.py
mypkg/
@ -114,7 +122,9 @@ This is problematic if you are using a tool like `tox`_ to test your package in
because you want to test the *installed* version of your package, not the local code from the repository.
In this situation, it is **strongly** suggested to use a ``src`` layout where application root package resides in a
sub-directory of your root::
sub-directory of your root:
.. code-block:: text
setup.py
src/
@ -140,7 +150,9 @@ Tests as part of application code
Inlining test directories into your application package
is useful if you have direct relation between tests and application modules and
want to distribute them along with your application::
want to distribute them along with your application:
.. code-block:: text
setup.py
mypkg/
@ -153,7 +165,9 @@ want to distribute them along with your application::
test_view.py
...
In this scheme, it is easy to run your tests using the ``--pyargs`` option::
In this scheme, it is easy to run your tests using the ``--pyargs`` option:
.. code-block:: bash
pytest --pyargs mypkg

View File

@ -70,7 +70,9 @@ caplog fixture
^^^^^^^^^^^^^^
Inside tests it is possible to change the log level for the captured log
messages. This is supported by the ``caplog`` fixture::
messages. This is supported by the ``caplog`` fixture:
.. code-block:: python
def test_foo(caplog):
caplog.set_level(logging.INFO)
@ -78,59 +80,69 @@ messages. This is supported by the ``caplog`` fixture::
By default the level is set on the root logger,
however as a convenience it is also possible to set the log level of any
logger::
logger:
.. code-block:: python
def test_foo(caplog):
caplog.set_level(logging.CRITICAL, logger='root.baz')
caplog.set_level(logging.CRITICAL, logger="root.baz")
pass
The log levels set are restored automatically at the end of the test.
It is also possible to use a context manager to temporarily change the log
level inside a ``with`` block::
level inside a ``with`` block:
.. code-block:: python
def test_bar(caplog):
with caplog.at_level(logging.INFO):
pass
Again, by default the level of the root logger is affected but the level of any
logger can be changed instead with::
logger can be changed instead with:
.. code-block:: python
def test_bar(caplog):
with caplog.at_level(logging.CRITICAL, logger='root.baz'):
with caplog.at_level(logging.CRITICAL, logger="root.baz"):
pass
Lastly all the logs sent to the logger during the test run are made available on
the fixture in the form of both the ``logging.LogRecord`` instances and the final log text.
This is useful for when you want to assert on the contents of a message::
This is useful for when you want to assert on the contents of a message:
.. code-block:: python
def test_baz(caplog):
func_under_test()
for record in caplog.records:
assert record.levelname != 'CRITICAL'
assert 'wally' not in caplog.text
assert record.levelname != "CRITICAL"
assert "wally" not in caplog.text
For all the available attributes of the log records see the
``logging.LogRecord`` class.
You can also resort to ``record_tuples`` if all you want to do is to ensure
that certain messages have been logged under a given logger name with a given
severity and message::
severity and message:
.. code-block:: python
def test_foo(caplog):
logging.getLogger().info('boo %s', 'arg')
logging.getLogger().info("boo %s", "arg")
assert caplog.record_tuples == [
('root', logging.INFO, 'boo arg'),
]
assert caplog.record_tuples == [("root", logging.INFO, "boo arg")]
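Each ``record_tuples`` entry is a ``(logger_name, level, message)`` triple. The same triples can be produced with a plain ``logging.Handler`` — a standalone sketch using only the stdlib ``logging`` module, not pytest API:

```python
import logging

records = []


class ListHandler(logging.Handler):
    """Collect (name, level, message) triples, like caplog.record_tuples."""

    def emit(self, record):
        records.append((record.name, record.levelno, record.getMessage()))


logger = logging.getLogger("demo")
logger.addHandler(ListHandler())
logger.setLevel(logging.INFO)
logger.info("boo %s", "arg")   # %-style args are formatted by getMessage()

assert records == [("demo", logging.INFO, "boo arg")]
```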
You can call ``caplog.clear()`` to reset the captured log records in a test::
You can call ``caplog.clear()`` to reset the captured log records in a test:
.. code-block:: python
def test_something_with_clearing_records(caplog):
some_method_that_creates_log_records()
caplog.clear()
your_test_method()
assert ['Foo'] == [rec.message for rec in caplog.records]
assert ["Foo"] == [rec.message for rec in caplog.records]
The ``caplog.records`` attribute contains records from the current stage only, so

View File

@ -272,7 +272,7 @@ to do this using the ``setenv`` and ``delenv`` method. Our example code to test:
username = os.getenv("USER")
if username is None:
raise EnvironmentError("USER environment is not set.")
raise OSError("USER environment is not set.")
return username.lower()
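Reproducing the function as a standalone sketch, both branches can also be driven without ``monkeypatch`` by editing ``os.environ`` directly (fine for a demo, though ``monkeypatch`` is preferable in real tests because it undoes the change automatically):

```python
import os


def get_os_user_lower():
    username = os.getenv("USER")
    if username is None:
        raise OSError("USER environment is not set.")
    return username.lower()


# happy path: variable set
os.environ["USER"] = "TestingUser"
assert get_os_user_lower() == "testinguser"

# error path: variable missing
del os.environ["USER"]
raised = False
try:
    get_os_user_lower()
except OSError:
    raised = True
assert raised, "expected OSError when USER is unset"
```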
@ -296,7 +296,7 @@ both paths can be safely tested without impacting the running environment:
"""Remove the USER env var and assert EnvironmentError is raised."""
monkeypatch.delenv("USER", raising=False)
with pytest.raises(EnvironmentError):
with pytest.raises(OSError):
_ = get_os_user_lower()
This behavior can be moved into ``fixture`` structures and shared across tests:
@ -323,7 +323,7 @@ This behavior can be moved into ``fixture`` structures and shared across tests:
def test_raise_exception(mock_env_missing):
with pytest.raises(EnvironmentError):
with pytest.raises(OSError):
_ = get_os_user_lower()

View File

@ -8,7 +8,9 @@ Installing and Using plugins
This section talks about installing and using third party plugins.
For writing your own plugins, please refer to :ref:`writing-plugins`.
Installing a third party plugin can be easily done with ``pip``::
Installing a third party plugin can be easily done with ``pip``:
.. code-block:: bash
pip install pytest-NAME
pip uninstall pytest-NAME
@ -95,7 +97,9 @@ Finding out which plugins are active
------------------------------------
If you want to find out which plugins are active in your
environment you can type::
environment you can type:
.. code-block:: bash
pytest --trace-config
@ -108,7 +112,9 @@ and their names. It will also print local plugins aka
Deactivating / unregistering a plugin by name
---------------------------------------------
You can prevent plugins from loading or unregister them::
You can prevent plugins from loading or unregister them:
.. code-block:: bash
pytest -p no:NAME

View File

@ -22,7 +22,9 @@ Consider this file and directory layout::
|- test_foo.py
When executing::
When executing:
.. code-block:: bash
pytest root/
@ -54,7 +56,9 @@ Consider this file and directory layout::
|- test_foo.py
When executing::
When executing:
.. code-block:: bash
pytest root/

View File

@ -469,9 +469,11 @@ testdir
This fixture provides a :class:`Testdir` instance useful for black-box testing of test files, making it ideal to
test plugins.
To use it, include in your top-most ``conftest.py`` file::
To use it, include in your top-most ``conftest.py`` file:
pytest_plugins = 'pytester'
.. code-block:: python
pytest_plugins = "pytester"
@ -1001,6 +1003,8 @@ passed multiple times. The expected format is ``name=value``. For example::
issuing ``pytest test_hello.py`` actually means::
.. code-block:: bash
pytest --maxfail=2 -rf test_hello.py
Default is to add no options.

View File

@ -145,7 +145,7 @@ You can use the ``skipif`` marker (as any other marker) on classes:
.. code-block:: python
@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
class TestPosixCalls(object):
class TestPosixCalls:
def test_function(self):
"will not be setup or run under 'win32' platform"
@ -180,13 +180,17 @@ Skipping on a missing import dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the following helper at module level
or within a test or test setup function::
or within a test or test setup function:
.. code-block:: python
docutils = pytest.importorskip("docutils")
If ``docutils`` cannot be imported here, this will lead to a
skip outcome of the test. You can also skip based on the
version number of a library::
version number of a library:
.. code-block:: python
docutils = pytest.importorskip("docutils", minversion="0.3")
@ -223,7 +227,9 @@ XFail: mark test functions as expected to fail
----------------------------------------------
You can use the ``xfail`` marker to indicate that you
expect a test to fail::
expect a test to fail:
.. code-block:: python
@pytest.mark.xfail
def test_function():

View File

@ -90,10 +90,14 @@ provide a temporary directory unique to the test invocation,
created in the `base temporary directory`_.
``tmpdir`` is a `py.path.local`_ object which offers ``os.path`` methods
and more. Here is an example test usage::
and more. Here is an example test usage:
.. code-block:: python
# content of test_tmpdir.py
import os
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
p.write("content")
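The same flow can be sketched with only the standard library, ``pathlib.Path`` standing in for the ``py.path.local`` object the fixture provides (an approximation — the two APIs differ):

```python
import pathlib
import tempfile

# what the tmpdir fixture would hand the test: a fresh, unique directory
tmpdir = pathlib.Path(tempfile.mkdtemp())

p = tmpdir / "sub" / "hello.txt"
p.parent.mkdir()             # tmpdir.mkdir("sub") equivalent
p.write_text("content")      # p.write("content") equivalent

assert p.read_text() == "content"
assert len(list(p.parent.iterdir())) == 1
```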

View File

@ -10,7 +10,9 @@ It's meant for leveraging existing ``unittest``-based test suites
to use pytest as a test runner and for incrementally adapting
the test suite to take full advantage of pytest's features.
To run an existing ``unittest``-style test suite using ``pytest``, type::
To run an existing ``unittest``-style test suite using ``pytest``, type:
.. code-block:: bash
pytest tests
@ -78,7 +80,9 @@ Running your unittest with ``pytest`` allows you to use its
tests. Assuming you have at least skimmed the pytest fixture features,
let's jump-start into an example that integrates a pytest ``db_class``
fixture, setting up a class-cached database object, and then reference
it from a unittest-style test::
it from a unittest-style test:
.. code-block:: python
# content of conftest.py
@ -87,10 +91,12 @@ it from a unittest-style test::
import pytest
@pytest.fixture(scope="class")
def db_class(request):
class DummyDB(object):
class DummyDB:
pass
# set a class attribute on the invoking test context
request.cls.db = DummyDB()
@ -103,13 +109,16 @@ as the ``cls`` attribute, denoting the class from which the fixture
is used. This architecture de-couples fixture writing from actual test
code and allows re-use of the fixture by a minimal reference, the fixture
name. So let's write an actual ``unittest.TestCase`` class using our
fixture definition::
fixture definition:
.. code-block:: python
# content of test_unittest_db.py
import unittest
import pytest
@pytest.mark.usefixtures("db_class")
class MyTest(unittest.TestCase):
def test_method1(self):
@ -179,14 +188,16 @@ Let's look at an ``initdir`` fixture which makes all test methods of a
``TestCase`` class execute in a temporary directory with a
pre-initialized ``samplefile.ini``. Our ``initdir`` fixture itself uses
the pytest builtin :ref:`tmpdir <tmpdir>` fixture to delegate the
creation of a per-test temporary directory::
creation of a per-test temporary directory:
.. code-block:: python
# content of test_unittest_cleandir.py
import pytest
import unittest
class MyTest(unittest.TestCase):
class MyTest(unittest.TestCase):
@pytest.fixture(autouse=True)
def initdir(self, tmpdir):
tmpdir.chdir() # change to pytest-provided temporary directory

View File

@ -652,7 +652,7 @@ to all tests.
record_testsuite_property("STORAGE_TYPE", "CEPH")
class TestMe(object):
class TestMe:
def test_foo(self):
assert True
@ -754,24 +754,33 @@ Calling pytest from Python code
You can invoke ``pytest`` from Python code directly::
You can invoke ``pytest`` from Python code directly:
.. code-block:: python
pytest.main()
this acts as if you were invoking ``pytest`` from the command line.
It will not raise ``SystemExit`` but return the exit code instead.
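For example (assuming ``pytest`` is importable), pointing it at a path that does not exist returns a non-zero exit code rather than raising:

```python
import pytest

# collection fails for a nonexistent path; pytest.main() returns the exit
# code (a usage error) instead of raising SystemExit
exit_code = pytest.main(["-q", "no_such_dir_for_demo"])
assert isinstance(exit_code, int)  # exit codes are int-valued
assert exit_code != 0
```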
You can pass in options and arguments::
You can pass in options and arguments:
pytest.main(['-x', 'mytestdir'])
.. code-block:: python
You can specify additional plugins to ``pytest.main``::
pytest.main(["-x", "mytestdir"])
You can specify additional plugins to ``pytest.main``:
.. code-block:: python
# content of myinvoke.py
import pytest
class MyPlugin(object):
class MyPlugin:
def pytest_sessionfinish(self):
print("*** test run reporting finishing")
pytest.main(["-qq"], plugins=[MyPlugin()])
Running it will show that ``MyPlugin`` was added and its

View File

@ -180,6 +180,7 @@ This will ignore all warnings of type ``DeprecationWarning`` where the start of
the regular expression ``".*U.*mode is deprecated"``.
.. note::
If warnings are configured at the interpreter level, using
the `PYTHONWARNINGS <https://docs.python.org/3/using/cmdline.html#envvar-PYTHONWARNINGS>`_ environment variable or the
``-W`` command-line option, pytest will not configure any filters by default.
@ -277,7 +278,9 @@ argument ``match`` to assert that the exception matches a text or regex::
...
Failed: DID NOT WARN. No warnings of type ...UserWarning... was emitted...
You can also call ``pytest.warns`` on a function or code string::
You can also call ``pytest.warns`` on a function or code string:
.. code-block:: python
pytest.warns(expected_warning, func, *args, **kwargs)
pytest.warns(expected_warning, "func(*args, **kwargs)")
@ -411,7 +414,7 @@ These warnings might be filtered using the same builtin mechanisms used to filte
Please read our :ref:`backwards-compatibility` to learn how we proceed about deprecating and eventually removing
features.
The following warning types ares used by pytest and are part of the public API:
The following warning types are used by pytest and are part of the public API:
.. autoclass:: pytest.PytestWarning

View File

@ -693,7 +693,7 @@ declaring the hook functions directly in your plugin module, for example:
# contents of myplugin.py
class DeferPlugin(object):
class DeferPlugin:
"""Simple plugin to defer pytest-xdist hook functions."""
def pytest_testnodedown(self, node, error):

View File

@ -27,11 +27,14 @@ Module level setup/teardown
If you have multiple test functions and test classes in a single
module you can optionally implement the following fixture methods
which will usually be called once for all the functions::
which will usually be called once for all the functions:
.. code-block:: python
def setup_module(module):
""" setup any state specific to the execution of the given module."""
def teardown_module(module):
""" teardown any state that was previously setup with a setup_module
method.
@ -43,7 +46,9 @@ Class level setup/teardown
----------------------------------
Similarly, the following methods are called at class level before
and after all test methods of the class are called::
and after all test methods of the class are called:
.. code-block:: python
@classmethod
def setup_class(cls):
@ -51,6 +56,7 @@ and after all test methods of the class are called::
usually contains tests).
"""
@classmethod
def teardown_class(cls):
""" teardown any state that was previously setup with a call to
@ -60,13 +66,16 @@ and after all test methods of the class are called::
Method and function level setup/teardown
-----------------------------------------------
Similarly, the following methods are called around each method invocation::
Similarly, the following methods are called around each method invocation:
.. code-block:: python
def setup_method(self, method):
""" setup any state tied to the execution of the given method in a
class. setup_method is invoked for every test method of a class.
"""
def teardown_method(self, method):
""" teardown any state that was previously setup with a setup_method
call.
@ -75,13 +84,16 @@ Similarly, the following methods are called around each method invocation::
As of pytest-3.0, the ``method`` parameter is optional.
If you would rather define test functions directly at module level
you can also use the following functions to implement fixtures::
you can also use the following functions to implement fixtures:
.. code-block:: python
def setup_function(function):
""" setup any state tied to the execution of the given function.
Invoked for every test function in the module.
"""
def teardown_function(function):
""" teardown any state that was previously setup with a setup_function
call.

View File

@ -1146,7 +1146,7 @@ def test_unorderable_types(testdir):
def test_collect_functools_partial(testdir):
"""
Test that collection of functools.partial object works, and arguments
to the wrapped functions are dealt correctly (see #811).
to the wrapped functions are dealt with correctly (see #811).
"""
testdir.makepyfile(
"""