Preparing release version 3.7.4

Anthony Sottile 2018-08-29 08:57:54 -07:00
parent 4345efaffc
commit aea962dc21
12 changed files with 61 additions and 18 deletions

View File

@@ -18,6 +18,31 @@ with advance notice in the **Deprecations** section of releases.
 .. towncrier release notes start

+pytest 3.7.4 (2018-08-29)
+=========================
+
+Bug Fixes
+---------
+
+- `#3506 <https://github.com/pytest-dev/pytest/issues/3506>`_: Fix possible infinite recursion when writing ``.pyc`` files.
+
+- `#3853 <https://github.com/pytest-dev/pytest/issues/3853>`_: Cache plugin now obeys the ``-q`` flag when ``--last-failed`` and ``--failed-first`` flags are used.
+
+- `#3883 <https://github.com/pytest-dev/pytest/issues/3883>`_: Fix bad console output when using ``console_output_style=classic``.
+
+- `#3888 <https://github.com/pytest-dev/pytest/issues/3888>`_: Fix macOS specific code using ``capturemanager`` plugin in doctests.
+
+
+Improved Documentation
+----------------------
+
+- `#3902 <https://github.com/pytest-dev/pytest/issues/3902>`_: Fix pytest.org links
+
+
 pytest 3.7.3 (2018-08-26)
 =========================
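
For context on the ``-q`` fix (#3853): the cache plugin prints a run-last-failure status line when ``--last-failed``/``--failed-first`` are active, and before 3.7.4 that output did not respect quiet mode. A minimal way to observe the change, assuming a hypothetical file ``test_flaky.py``::

    # test_flaky.py -- hypothetical example, not part of this commit
    def test_ok():
        assert True

    def test_broken():
        assert False  # the cache plugin records this failure

    # First run:   pytest -q test_flaky.py              (failure gets cached)
    # Second run:  pytest -q --last-failed test_flaky.py
    # With 3.7.4 the rerun status output honors -q; earlier 3.7.x
    # releases printed it regardless of the quiet flag.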

View File

@@ -1 +0,0 @@
-Fix possible infinite recursion when writing ``.pyc`` files.

View File

@@ -1 +0,0 @@
-Cache plugin now obeys the ``-q`` flag when ``--last-failed`` and ``--failed-first`` flags are used.

View File

@@ -1 +0,0 @@
-Fix bad console output when using ``console_output_style=classic``.

View File

@@ -1 +0,0 @@
-Fix macOS specific code using ``capturemanager`` plugin in doctests.

View File

@@ -1 +0,0 @@
-Fix pytest.org links

View File

@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2

+   release-3.7.4
    release-3.7.3
    release-3.7.2
    release-3.7.1

View File

@@ -0,0 +1,22 @@
+pytest-3.7.4
+=======================================
+
+pytest 3.7.4 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Anthony Sottile
+* Bruno Oliveira
+* Daniel Hahler
+* Jiri Kuncar
+* Steve Piercy
+
+Happy testing,
+The pytest Development Team

View File

@@ -200,17 +200,17 @@ You can ask which markers exist for your test suite - the list includes our just

     $ pytest --markers
     @pytest.mark.webtest: mark a test as a webtest.
-    @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see http://pytest.org/latest/warnings.html#pytest-mark-filterwarnings
+    @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
     @pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
-    @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
+    @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
-    @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
+    @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
-    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
+    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
-    @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
+    @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
     @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@@ -376,17 +376,17 @@ The ``--markers`` option always gives you a list of available markers::

     $ pytest --markers
     @pytest.mark.env(name): mark test to run only on named environment
-    @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see http://pytest.org/latest/warnings.html#pytest-mark-filterwarnings
+    @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
     @pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
-    @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
+    @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
-    @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
+    @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
-    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
+    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
-    @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
+    @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
     @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
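
For context on the ``env`` marker listed above: this docs page wires it up with a small ``conftest.py``. A sketch along those lines, using pytest 3.x-era hook APIs (treat as illustration, not the literal file from this commit)::

    # conftest.py -- sketch of the custom ``env`` marker
    import pytest

    def pytest_addoption(parser):
        parser.addoption("-E", action="store", metavar="NAME",
                         help="only run tests matching the environment NAME.")

    def pytest_configure(config):
        # register the marker so it appears in ``pytest --markers`` output
        config.addinivalue_line(
            "markers", "env(name): mark test to run only on named environment")

    def pytest_runtest_setup(item):
        envnames = [mark.args[0] for mark in item.iter_markers(name="env")]
        if envnames and item.config.getoption("-E") not in envnames:
            pytest.skip("test requires env in %r" % envnames)

A test then opts in with ``@pytest.mark.env("stage1")`` and is skipped unless run as ``pytest -E stage1``.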

View File

@@ -617,5 +617,5 @@ get on the terminal - we are working on that)::

     Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
     Please use Metafunc.parametrize instead.
-    -- Docs: http://doc.pytest.org/en/latest/warnings.html
+    -- Docs: https://docs.pytest.org/en/latest/warnings.html
     ================== 42 failed, 1 warnings in 0.12 seconds ===================
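
The warning above steers users from ``Metafunc.addcall`` to ``Metafunc.parametrize``. A minimal before/after sketch, with a hypothetical fixture name ``param1``::

    # conftest.py -- migrating off the deprecated Metafunc.addcall
    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            # deprecated spelling: metafunc.addcall(funcargs=dict(param1=17))
            metafunc.parametrize("param1", [17])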

View File

@@ -33,7 +33,7 @@ Running pytest now produces this output::

     $REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
       warnings.warn(UserWarning("api v1, should use functions from v2"))
-    -- Docs: http://doc.pytest.org/en/latest/warnings.html
+    -- Docs: https://docs.pytest.org/en/latest/warnings.html
     =================== 1 passed, 1 warnings in 0.12 seconds ===================

 Pytest by default catches all warnings except for ``DeprecationWarning`` and ``PendingDeprecationWarning``.
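
For reference, a test module of the kind that produces the output above, modeled on this docs page's own ``test_show_warnings.py`` example::

    # test_show_warnings.py
    import warnings

    def api_v1():
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1

    def test_one():
        # the test passes; pytest captures the warning and prints the
        # summary shown in the diff above
        assert api_v1() == 1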

View File

@@ -422,7 +422,7 @@ additionally it is possible to copy examples for an example folder before running

     $REGENDOC_TMPDIR/test_example.py:4: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time
       testdir.copy_example("test_example.py")
-    -- Docs: http://doc.pytest.org/en/latest/warnings.html
+    -- Docs: https://docs.pytest.org/en/latest/warnings.html
     =================== 2 passed, 1 warnings in 0.12 seconds ===================

 For more information about the result object that ``runpytest()`` returns, and
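
For context, ``testdir.copy_example`` comes from pytest's ``pytester`` plugin and reads files from the directory named by the ``pytester_example_dir`` ini option. A sketch of the setup implied by the output above (file contents are illustrative)::

    # conftest.py
    pytest_plugins = "pytester"

    # pytest.ini
    #   [pytest]
    #   pytester_example_dir = .

    # test_example.py
    def test_plugin(testdir):
        # experimental in 3.7: emits PytestExperimentalApiWarning
        testdir.copy_example("test_example.py")
        testdir.runpytest("-k", "test_example")

    def test_example():
        pass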