* Rename pytest_ignore_collect fspath parameter to collection_path
* Rename pytest_collect_file fspath parameter to file_path
* Rename pytest_pycollect_makemodule fspath parameter to module_path
* Rename pytest_report_header startpath parameter to start_path
* Rename pytest_report_collectionfinish startpath parameter to start_path
* Update docs with the renamed parameters
* Use pytest-flakes fork temporarily to prove it works
* Use pytest-flakes 4.0.5
* [pre-commit.ci] pre-commit autoupdate
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ran Benita <ran@unusedvar.com>
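For illustration, a hedged conftest.py sketch using the renamed, pathlib-based
hook parameters (the hook bodies and rules are made up, not part of the change):

    from pathlib import Path


    def pytest_ignore_collect(collection_path: Path):
        # Hypothetical rule: never collect anything under a local "scripts" dir.
        if collection_path.name == "scripts":
            return True


    def pytest_report_header(start_path: Path):
        # start_path is the pathlib.Path parameter (renamed from startpath).
        return f"collection starts at: {start_path}"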
_pytest.timing is an indirection to the 'time' module's functions, which
pytest production code should use instead of 'time' directly.
'mock_timing' is a new fixture which mocks those functions, allowing us to
write timing-sensitive tests that are deterministic, run instantly and are
not flaky.
This was triggered by recent flaky junitxml tests on Windows related to timing
issues.
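A rough sketch of the idea (the module contents and fixture body below are
illustrative, not the exact implementation):

    # Sketch: the indirection module could simply re-export the stdlib names:
    #     from time import perf_counter, sleep, time
    # The fixture then patches those names with a fake clock:
    import pytest
    from _pytest import timing


    @pytest.fixture
    def mock_timing(monkeypatch):
        class MockTiming:
            """Fake clock: sleep() advances it instantly, time()/perf_counter() read it."""

            _current_time = 0.0

            def sleep(self, seconds):
                self._current_time += seconds

            def time(self):
                return self._current_time

            def perf_counter(self):
                return self._current_time

        fake = MockTiming()
        monkeypatch.setattr(timing, "sleep", fake.sleep)
        monkeypatch.setattr(timing, "time", fake.time)
        monkeypatch.setattr(timing, "perf_counter", fake.perf_counter)
        return fake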
Running `pytest | head -1` and similar causes an annoying error to be
printed to stderr:
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
BrokenPipeError: [Errno 32] Broken pipe
(or possibly even a propagating exception in older/other Python versions).
The standard UNIX behavior is to handle EPIPE silently. The
recommended method to do this in Python is described here:
https://docs.python.org/3/library/signal.html#note-on-sigpipe
It is not appropriate to apply this recommendation to `pytest.main()`,
which is used programmatically for in-process runs. Hence, change
pytest's entrypoint to a new `pytest.console_main()` function, to be
used exclusively by pytest's CLI, and add the SIGPIPE code there.
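A hedged sketch of what the new entry point could look like, following the
approach from the linked Python docs note (the body is illustrative, not
necessarily pytest's exact code):

    import os
    import sys

    import pytest


    def console_main() -> int:
        # CLI-only wrapper: in-process callers keep using pytest.main() directly
        # and are not affected by the EPIPE/SIGPIPE handling below.
        try:
            code = pytest.main()
            sys.stdout.flush()
            return code
        except BrokenPipeError:
            # Python flushes standard streams on exit; redirect the remaining
            # output to devnull to avoid another BrokenPipeError at shutdown.
            devnull = os.open(os.devnull, os.O_WRONLY)
            os.dup2(devnull, sys.stdout.fileno())
            return 1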
Fixes #4375.
Currently this test issues a warning which is displayed in the warning
summary (of pytest's own test suite):
testing/acceptance_test.py::TestGeneralUsage::test_early_skip
/tmp/pytest-of-ran/pytest-396/test_early_skip0/conftest.py:2: PytestDeprecationWarning: The pytest_collect_directory hook is not working.
Please use collect_ignore in conftests or pytest_collection_modifyitems.
def pytest_collect_directory():
I think the filter was meant to be `ignore` in the first place, and not
`always`, which is not a valid action AFAIK.
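For illustration only, a hedged sketch of applying an `ignore` filter on such
a test (the mark-based form below is an assumption, not necessarily how the
filter is set in the suite):

    import pytest

    # "ignore" suppresses the warning for this test, so it no longer shows up
    # in the warning summary of pytest's own test suite.
    @pytest.mark.filterwarnings("ignore::pytest.PytestDeprecationWarning")
    def test_early_skip(pytester):
        ...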
TestDurations tests the `--durations=N` functionality, which reports the N
slowest tests, with durations <= 0.005s not shown by default.
The test relies on real time.sleep() (in addition to the code under test,
which uses time.perf_counter()), which makes it flaky and inconsistent
between platforms.
Instead of trying to tweak it further, make it use fake time. The way this
is done is a little hacky but seems to work.
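A hedged sketch of how such a test could consume the fake time (the test body
and the reliance on the `mock_timing` fixture described earlier are
illustrative):

    def test_durations_with_fake_time(pytester, mock_timing):
        # The generated test sleeps through the _pytest.timing indirection,
        # which mock_timing has replaced with an instant, deterministic fake
        # clock, so the 0.2s "duration" is reported without actually sleeping.
        pytester.makepyfile(
            """
            from _pytest import timing

            def test_slow():
                timing.sleep(0.2)
            """
        )
        result = pytester.runpytest("--durations=10")
        result.stdout.fnmatch_lines(["*slowest*", "*test_slow*"])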
Co-authored-by: Sylvain MARIE <sylvain.marie@se.com>
Co-authored-by: Ran Benita <ran@unusedvar.com>
Co-authored-by: Bruno Oliveira <nicoddemus@gmail.com>
Passing in a tuple crashes in `_prepareconfig`:
def test_invoke_with_tuple(self):
> pytest.main(("-h",))
src/_pytest/config/__init__.py:82: in main
config = _prepareconfig(args, plugins)
src/_pytest/config/__init__.py:229: in _prepareconfig
return pluginmanager.hook.pytest_cmdline_parse(
…
src/_pytest/helpconfig.py:98: in pytest_cmdline_parse
config = outcome.get_result() # type: Config
src/_pytest/config/__init__.py:808: in pytest_cmdline_parse
self.parse(args)
src/_pytest/config/__init__.py:1017: in parse
self._preparse(args, addopts=addopts)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _preparse(self, args: List[str], addopts: bool = True) -> None:
…
if addopts:
ini_addopts = self.getini("addopts")
if ini_addopts:
> args[:] = self._validate_args(ini_addopts, "via addopts config") + args
E TypeError: can only concatenate list (not "tuple") to list
addopts = True
args = ('-h',)
env_addopts = ''
ini_addopts = ['-rfEX', …]
src/_pytest/config/__init__.py:956: TypeError: can only concatenate list (not "tuple") to list
It might be worth handling this (by converting the tuple to a list, for
example), but the argument was documented to be a list to begin with when
support for strings was removed (a7e401656).
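For reference, the documented call form passes a list; a tuple would need
converting first (illustrative snippet):

    import pytest

    # Documented usage: args is a list of command-line arguments.
    pytest.main(["-h"])

    # If the arguments happen to arrive as a tuple, convert before calling.
    args = ("-h",)
    pytest.main(list(args))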
ExitCode is used in several internal modules and hooks, so once type
annotations are added it needs to be imported in many places.
_pytest.main, being the entry point, generally sits at the top of the
import tree.
So, it's not great to have ExitCode defined in _pytest.main, because it
will cause a lot of import cycles once type annotations are added (in
fact there is already one, which this change removes).
Move it to _pytest.config instead.
_pytest.main still imports ExitCode, so importing from there still
works, although external users should really be importing from `pytest`.
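For example, comparing the return value of `pytest.main()` against the enum
imported from its public location (snippet for illustration):

    import pytest
    from pytest import ExitCode  # public location; _pytest.main still re-exports it

    code = pytest.main(["--version"])
    if code != ExitCode.OK:
        raise SystemExit(code)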
The convention is "assert result is expected". Pytest's error diffs now
reflect this: "-" means that something expected is missing from the result,
and "+" means that the result contains unexpected extras (see the sketch
below).
Fixes: #3333
Harden one test that exercises this.
All tests testing this:
testing/acceptance_test.py:184(TestGeneralUsage::test_not_collectable_arguments)
testing/acceptance_test.py:373(TestGeneralUsage::test_direct_addressing_notfound)
testing/acceptance_test.py:403(TestGeneralUsage::test_issue134_report_error_when_collecting_member[test_fun.py::test_a])
testing/acceptance_test.py:420(TestGeneralUsage::test_report_all_failed_collections_initargs)
testing/test_config.py:1309(test_config_blocked_default_plugins[python])
(via https://github.com/blueyed/pytest/pull/88)
This is important when used with ``pytester``'s ``runpytest_inprocess``.
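A hedged, made-up example of how the convention reads (exact diff formatting
may vary):

    def test_fruits():
        result = ["apple", "banana", "cherry"]
        expected = ["apple", "banana"]
        # With `assert result == expected`, the unexpected extra "cherry" in
        # the result should show up as a "+" line in the diff, while an item
        # present only in `expected` would show up as "-".
        assert result == expected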
Since 07f20ccab `pytest testing/acceptance_test.py -k test_doctest_id`
would fail: the second run no longer considered the exception to be an
instance of `doctest.DocTestFailure`, because the module had been
re-imported, and it therefore used a different failure message in the short
test summary info (and in the report itself):
> FAILED test_doctest_id.txt::test_doctest_id.txt - doctest.DocTestFailure: <Do...
while it should be:
> FAILED test_doctest_id.txt::test_doctest_id.txt
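The underlying pitfall, in a hedged standalone illustration (importlib.reload
stands in here for the module re-import pytest performed):

    import doctest
    import importlib

    old_failure_cls = doctest.DocTestFailure
    importlib.reload(doctest)

    # An exception created from the "old" class is not an instance of the
    # class object in the freshly re-imported module, so isinstance() checks
    # fail and a different (generic) failure message gets used.
    exc = old_failure_cls(test=None, example=None, got="")
    print(isinstance(exc, doctest.DocTestFailure))  # False
    print(isinstance(exc, old_failure_cls))         # True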