There are state transitions start/done/suspend/resume and two additional
operations snap/writeorg.
Previously it was not well defined in what order they can be called, and
which operations are idempotent.
Formalize this and enforce it using assert checks with informative error
messages when they fail (rather than random AttributeErrors).
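For illustration, a minimal sketch of the idea, with hypothetical state
names and transition rules (the exact allowed/idempotent transitions are
the ones documented in the code itself):
```
class CaptureBase:
    """Sketch of explicit state tracking for a capture object.

    The allowed transitions below are illustrative, not pytest's exact rules.
    """

    def __init__(self) -> None:
        self._state = "initialized"

    def start(self) -> None:
        assert self._state == "initialized", (
            "cannot start in state {!r}".format(self._state))
        self._state = "started"

    def suspend(self) -> None:
        assert self._state in ("started", "suspended"), (
            "cannot suspend in state {!r}".format(self._state))
        self._state = "suspended"

    def resume(self) -> None:
        assert self._state in ("started", "suspended"), (
            "cannot resume in state {!r}".format(self._state))
        self._state = "started"

    def snap(self) -> None:
        assert self._state == "started", (
            "cannot snap in state {!r}".format(self._state))

    def done(self) -> None:
        # done() is idempotent in this sketch.
        assert self._state in ("initialized", "started", "suspended", "done"), (
            "cannot finish in state {!r}".format(self._state))
        self._state = "done"
```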
Make it easier to read the file in progression, and avoid forward
references for upcoming type annotations.
There is one cycle, CaptureManager <-> CaptureFixture, which is hard to
untangle.
(This commit should be added to `.gitblameignore`).
`TerminalWriter`, imported recently from `py`, contains its own
incomplete wcwidth (`char_width`/`get_line_width`) implementation. The
`TerminalReporter` also needs this, but uses the external `wcwidth`
package.
This commit brings the `TerminalWriter` implementation up-to-par with
`wcwidth`, moves the implementation to a new file `_pytest._io.wcwidth`,
which is used everywhere, and removes the dependency.
The differences compared to the `wcwidth` package are:
- Normalizes the string before counting.
- Uses Python's `unicodedata` instead of vendored Unicode tables. This
means the data corresponds to the Unicode version of the running Python
instead of the Unicode version of the `wcwidth` package.
- Applies some optimizations.
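For illustration, a simplified sketch of such a width computation built on
`unicodedata` (the real `_pytest._io.wcwidth` module handles more corner
cases, e.g. non-printable characters are clamped to zero width here):
```
import unicodedata
from functools import lru_cache


@lru_cache(100)
def wcwidth(c: str) -> int:
    """Display width of one character: -1 non-printable, 0 combining,
    2 wide, 1 otherwise (simplified)."""
    o = ord(c)
    if 0 < o < 32 or 0x7F <= o < 0xA0:  # control characters
        return -1
    if unicodedata.combining(c):  # combining characters take no extra cell
        return 0
    if unicodedata.east_asian_width(c) in ("F", "W"):  # fullwidth/wide
        return 2
    return 1


def get_line_width(s: str) -> int:
    """Display width of a string; normalize first so decomposed accents
    are counted like their composed forms."""
    s = unicodedata.normalize("NFC", s)
    return sum(max(wcwidth(c), 0) for c in s)
```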
pytest.fixture() can be used either as
@pytest.fixture
def func(): ...
or as
@pytest.fixture()
def func(): ...
or (while maybe not intended)
func = pytest.fixture(func)
so it needs to inspect internally whether it got a function in the first
positional argument or not.
Previously, there was an oddity. In the following,
func = pytest.fixture(func, autouse=True)
# OR
func = pytest.fixture(func, params=['a', 'b'])
The result is as if `func` wasn't passed.
There isn't any reason for this special case that I can understand, so
remove it.
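A minimal, generic sketch of a decorator that handles all three call
styles uniformly (hypothetical helper, not pytest's actual `fixture()`
implementation; the registration logic is just a stand-in):
```
def fixture(fixture_function=None, *, autouse=False, params=None):
    """Sketch: support @fixture, @fixture(...), and fixture(func, ...)."""

    def decorator(func):
        # Stand-in for the real marking/registration logic.
        func._fixture_config = {"autouse": autouse, "params": params}
        return func

    if fixture_function is not None:
        # Called directly with a function: @fixture, or fixture(func, ...)
        # with keyword arguments -- honor the keyword arguments in both cases.
        return decorator(fixture_function)

    # Called with arguments only: @fixture(...) returns the decorator.
    return decorator
```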
_pytest.timing is an indirection to 'time' functions, which pytest production
code should use instead of 'time' directly.
'mock_timing' is a new fixture which then mocks those functions, allowing us
to write time-reliable tests which run instantly and are not flaky.
This was triggered by recent flaky junitxml tests on Windows related to timing
issues.
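For illustration, a sketch of the fixture, assuming `_pytest.timing`
simply re-exports `sleep`/`time`/`perf_counter` (the names and starting
value here are assumptions, not the real module's exact contents):
```
import pytest

# _pytest/timing.py itself can be as simple as:
#     from time import perf_counter, sleep, time
# Production code then calls _pytest.timing.sleep() etc. instead of time.sleep().


@pytest.fixture
def mock_timing(monkeypatch):
    """Replace _pytest.timing's functions with a fake, instantly advancing clock."""

    class MockTiming:
        _current_time = 1590150050.0  # arbitrary starting point

        def sleep(self, seconds: float) -> None:
            self._current_time += seconds  # advance the clock, never block

        def time(self) -> float:
            return self._current_time

    result = MockTiming()
    monkeypatch.setattr("_pytest.timing.sleep", result.sleep)
    monkeypatch.setattr("_pytest.timing.time", result.time)
    monkeypatch.setattr("_pytest.timing.perf_counter", result.time)
    return result
```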
This is already done in pytest_runtest_logstart, so the fspath is
already guaranteed to have been printed (for xdist, it is disabled
anyway).
write_fspath_result is mildly expensive so it is worth avoiding calling
it twice.
The `FDCapture`/`FDCaptureBinary` classes, used by `capfd`/`capfdbinary`
fixtures and the `--capture=fd` option (set by default), redirect FDs
1/2 (stdout/stderr) to a temporary file. To do this, they need to save
the old file by duplicating the FD before redirecting it, to be restored
once finished.
Previously, if this duplication (`os.dup()`) failed, most likely due to
that FD being invalid, the FD redirection would silently not be done. The
FD capturing also performs python-level redirection (monkeypatching
`sys.stdout`/`sys.stderr`) which would still be done, but direct writes
to the FDs would fail.
This is not great. If pytest is run with `--capture=fd`, or a test is
using `capfd`, it expects writes to the FD to work and be captured,
regardless of external circumstances.
So, instead of disabling FD capturing, keep the redirection to a
temporary file, just don't restore it after closing, because there is
nothing to restore to.
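A minimal, self-contained sketch of the fallback (not the actual
`FDCapture` code; names are illustrative):
```
import os
import tempfile


def redirect_fd_to_tempfile(targetfd: int):
    """Redirect `targetfd` (e.g. 1 for stdout) to a temporary file.

    Returns (tmpfile, saved_fd); saved_fd is None when the original FD could
    not be duplicated, in which case there is simply nothing to restore later.
    """
    tmpfile = tempfile.TemporaryFile(buffering=0)
    try:
        saved_fd = os.dup(targetfd)   # keep the old FD so it can be restored
    except OSError:
        saved_fd = None               # invalid FD: skip restore, keep capturing
    os.dup2(tmpfile.fileno(), targetfd)
    return tmpfile, saved_fd
```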
This makes a difference for e.g. pytest-xdist:
Before:
```
--dist=distmode set mode for distributing tests to exec environments. each: …
available environment. loadscope: …
grouped by file to any available environment. (default) no: …
```
After:
```
--dist=distmode set mode for distributing tests to exec environments.
each: send each test to all available environments.
load: load balance by sending any pending test to any available environment.
…
(default) no: run tests inprocess, don't distribute.
```
This might also result in unexpected changes (hard wrapping) when line
endings were used unintentionally, e.g. with:
```
help="""
some long
help text
"""
```
But the benefits are worth it, and it is easy to fix, as will be done for
the internal `assertmode` option.
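For reference, one common way to make argparse respect explicit line
breaks in help text is to override `HelpFormatter._split_lines`; a
minimal sketch (not necessarily the exact formatter used here):
```
import argparse
import textwrap


class LineBreakHelpFormatter(argparse.HelpFormatter):
    """Wrap each author-supplied line separately instead of reflowing the
    whole help string into one paragraph."""

    def _split_lines(self, text, width):
        lines = []
        for line in text.splitlines():
            # textwrap.wrap() drops empty lines; keep them for spacing.
            lines.extend(textwrap.wrap(line, width) or [""])
        return lines
```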
Currently, a bad logging call, e.g.
logger.info('oops', 'first', 2)
triggers the default logging handling, which is printing an error to
stderr but otherwise continuing.
For regular programs this behavior makes sense: a bad log message
shouldn't take down the program. But during tests, it is better not to
skip over such mistakes but to propagate them to the user.
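A minimal sketch of how a capturing handler can surface such mistakes
(the class name is illustrative, and the default stream is only a
placeholder):
```
import logging


class CapturingLogHandler(logging.StreamHandler):
    def handleError(self, record: logging.LogRecord) -> None:
        # logging calls handleError() from inside an `except` block, so a
        # bare raise re-raises the original formatting error instead of
        # printing it to stderr and moving on.
        raise
```
With such a handler attached, a call like
`logging.getLogger().info('oops', 'first', 2)` raises during the test
instead of printing an error to stderr.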
Previously, a LoggingCaptureHandler was instantiated for each test's
setup/call/teardown which turns out to be expensive.
Instead, only keep one instance and reset it between runs.
The logstart/logreport/logfinish hooks don't need the machinery in
_runtest_for: the catching_logs call for test capturing is irrelevant for
them, and the item-conditional sections are gone.
- Instead of making it optional, always set up a handler, possibly
writing to /dev/null. This simplifies the code by removing a lot of
conditionals. It can also replace the NullHandler() we already add.
- Change `set_log_path` to just change the stream, instead of recreating
the handler (see the sketch below). Besides plugging a resource leak, it
enables the next item.
- Remove the catching_logs call from _runtest_for, since it is
sufficiently covered by the one in pytest_runtestloop now, which wraps
all other _runtest_for calls.
The first item alone would have had an adverse performance impact, but
the last item removes it.
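A minimal sketch of the first two points, with hypothetical names (the
real plugin differs in details):
```
import logging
import os


class LoggingPluginSketch:
    """Always-present file handler whose stream is swapped in place."""

    def __init__(self) -> None:
        # Always installed; output goes nowhere until a real path is set.
        self.log_file_handler = logging.StreamHandler(
            open(os.devnull, "w", encoding="utf-8")
        )

    def set_log_path(self, fname: str) -> None:
        stream = open(fname, "w", encoding="utf-8")
        # setStream() (Python 3.7+) swaps the stream under the handler lock
        # instead of recreating the handler.
        old = self.log_file_handler.setStream(stream)
        if old is not None:
            old.close()  # plug the resource leak
```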
Remove usage of `@contextmanager` as it is a bit slower than
hand-rolling, and also disallows re-entry which we want to use.
Remove the protections around addHandler()/removeHandler(), because
logging already performs those checks internally.
Conceptually it isn't something checked per catching_logs call (and
catching_logs doesn't restore the older one either); it is just something
that is defined once for each handler.
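A minimal sketch of the hand-rolled, re-entrant version (simplified
relative to the real `catching_logs`):
```
import logging
from typing import Optional


class catching_logs:
    """Attach `handler` to the root logger, optionally lowering the level,
    and undo both on exit. Plain __enter__/__exit__, so it can be re-entered."""

    def __init__(self, handler: logging.Handler, level: Optional[int] = None) -> None:
        self.handler = handler
        self.level = level

    def __enter__(self) -> logging.Handler:
        root = logging.getLogger()
        if self.level is not None:
            self.handler.setLevel(self.level)
        root.addHandler(self.handler)  # logging itself ignores duplicate adds
        if self.level is not None:
            self.orig_level = root.level
            root.setLevel(min(self.orig_level, self.level))
        return self.handler

    def __exit__(self, exc_type, exc_val, exc_tb) -> None:
        root = logging.getLogger()
        if self.level is not None:
            root.setLevel(self.orig_level)
        root.removeHandler(self.handler)
```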
The default message is often hard to read:
E _pytest.config.ConftestImportFailure: (local('D:\\projects\\pytest\\.tmp\\root\\foo\\conftest.py'), (<class 'RuntimeError'>, RuntimeError('some error',), <traceback object at 0x000001CCC3E39348>))
Using a shorter message is better:
E _pytest.config.ConftestImportFailure: RuntimeError: some error (from D:\projects\pytest\.tmp\root\foo\conftest.py)
And we don't really lose any information due to exception chaining.
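The shape of the change, roughly (the real exception also keeps the full
excinfo around so the original traceback stays available):
```
class ConftestImportFailure(Exception):
    def __init__(self, path, excinfo) -> None:
        super().__init__(path, excinfo)
        self.path = path
        self.excinfo = excinfo  # (exc_type, exc_value, traceback)

    def __str__(self) -> str:
        # Short, readable message instead of the raw tuple repr.
        return "{}: {} (from {})".format(
            self.excinfo[0].__name__, self.excinfo[1], self.path
        )
```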
This removes the KeyError from the traceback chain when a
conftest fails to import:
return self._conftestpath2mod[key]
E KeyError: WindowsPath('D:/projects/pytest/.tmp/root/foo/conftest.py')
During handling of the above exception, another exception occurred:
...
raise RuntimeError("some error")
E RuntimeError: some error
During handling of the above exception, another exception occurred:
...
E _pytest.config.ConftestImportFailure: (...)
By slightly changing the code, we can remove the first chain, which is often
very confusing to users and doesn't help with anything.
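The shape of the fix is the usual "look up with `.get()` instead of
catching `KeyError`" pattern; a self-contained, simplified sketch
(hypothetical helper, not the actual `_importconftest` code):
```
import importlib.util
from pathlib import Path
from types import ModuleType

# Cache of already-imported conftest modules (simplified).
_conftest_cache = {}


def import_conftest(path: Path) -> ModuleType:
    mod = _conftest_cache.get(path)        # no KeyError is ever raised here
    if mod is None:
        spec = importlib.util.spec_from_file_location("conftest", path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)       # an error here gets a clean chain
        _conftest_cache[path] = mod
    return mod
```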
Fix #7223
Only filter with known failures, and explicitly keep paths of passed
arguments.
This also displays the "run-last-failure" status before collected files,
and does not update the cache with "--collect-only".
Fixes https://github.com/pytest-dev/pytest/issues/6968.
The previous commit made this possible, so utilize it.
Since legacy.py becomes pretty bare, I inlined it into __init__.py. I'm
not sure it's really "legacy" anyway!
Using a simple 50000-item benchmark with `--collect-only -k nomatch`:
Before (two commits ago):
======================== 50000 deselected in 10.31s =====================
19129345 function calls (18275596 primitive calls) in 10.634 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 2.270 2.270 __init__.py:149(pytest_collection_modifyitems)
1 0.036 0.036 2.270 2.270 __init__.py:104(deselect_by_keyword)
50000 0.055 0.000 2.226 0.000 legacy.py:87(matchkeyword)
After:
======================== 50000 deselected in 9.37s =========================
18029363 function calls (17175972 primitive calls) in 9.701 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 1.394 1.394 __init__.py:239(pytest_collection_modifyitems)
1 0.057 0.057 1.393 1.393 __init__.py:162(deselect_by_keyword)
The matching itself can be optimized more but that's a different story.
In current pytest, the same expression is matched against all items. But
it is re-parsed for every match.
Add support for "compiling" an expression and reusing the result. Errors
may only occur during compilation.
This is done by parsing the expression into a Python `ast.Expression`,
then `compile()`ing it into a code object. Evaluation is then done using
`eval()`.
Note: historically we used to use `eval` directly on the user input --
this is not the case here, the expression is entirely under our control
according to our grammar, we just JIT-compile it to Python as a
(completely safe) optimization.
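For illustration, a sketch of the compile-once/evaluate-many shape
(`parse_keyword_expr` is only a stand-in for the custom grammar parser,
and the class is simplified compared to the real one):
```
import ast
import types
from typing import Mapping


def parse_keyword_expr(input: str) -> ast.Expression:
    """Stand-in parser: handles a single identifier, just to show the shape."""
    node = ast.Expression(body=ast.Name(id=input.strip(), ctx=ast.Load()))
    return ast.fix_missing_locations(node)


class Expression:
    """Compile once, evaluate many times against different items."""

    def __init__(self, code: types.CodeType) -> None:
        self.code = code

    @classmethod
    def compile(cls, input: str) -> "Expression":
        astexpr = parse_keyword_expr(input)  # all errors happen here
        code = compile(astexpr, filename="<pytest match expression>", mode="eval")
        return cls(code)

    def evaluate(self, matcher: Mapping[str, bool]) -> bool:
        # `matcher` maps the names appearing in the expression to booleans.
        return bool(eval(self.code, {"__builtins__": {}}, dict(matcher)))


# e.g. Expression.compile("foo").evaluate({"foo": True}) -> True
```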
The `-k '-expr'` syntax is an old alias for `-k 'not expr'`. It's also
not very convenient to have a syntax that starts with `-` on the CLI.
Deprecate it and suggest replacing it with `not`.
---
The `-k 'expr:'` syntax discards all items until the first match and
keeps all subsequent ones, e.g. `-k 'foo:'` with
test_bar
test_foo
test_baz
results in `test_foo`, `test_baz`. That's a bit weird, so deprecate it
without a replacement. If someone complains we can reconsider or devise
a better alternative.
Running `pytest | head -1` and similar causes an annoying error to be
printed to stderr:
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
BrokenPipeError: [Errno 32] Broken pipe
(or possibly even a propagating exception in older/other Python versions).
The standard UNIX behavior is to handle EPIPE silently. The
recommended method to do this in Python is described here:
https://docs.python.org/3/library/signal.html#note-on-sigpipe
It is not appropriate to apply this recommendation to `pytest.main()`,
which is used programmatically for in-process runs. Hence, change
pytest's entrypoint to a new `pytest.console_main()` function, to be
used exclusively by pytest's CLI, and add the SIGPIPE code there.
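A sketch of the resulting entry point, following the recipe from the
linked Python docs (close to, but not necessarily identical to, the
final code):
```
import os
import sys

import pytest


def console_main() -> int:
    """CLI-only entry point; pytest.main() stays untouched for programmatic use."""
    try:
        code = pytest.main()
        sys.stdout.flush()
        return code
    except BrokenPipeError:
        # Python flushes standard streams on exit; redirect remaining output
        # to devnull to avoid another BrokenPipeError at shutdown.
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        return 1  # conventional exit code for EPIPE
```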
Fixes #4375.
When setting up the warnings capture, filter strings (with the general
form `action:message:category:module:line`) are collected from the
cmdline, the ini file and the item, and applied. This happens for every
test, as well as in other cases.
To apply a string it needs to be parsed into a tuple, and it turns out
this is slow. Since we already vendor the parsing code from Python's
warnings.py, we can speed it up by caching the result. After splitting
the parsing part from the applying part, the parsing is pure and is
straightforward to cache.
An alternative is to parse ahead of time and reuse the result, however
the caching solution turns out cleaner and more general in this case.
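A simplified sketch of the split (the real code vendors the full parsing
logic from `warnings.py` and resolves dotted category names; here the
category lookup only handles builtins):
```
import builtins
import functools
import re
import warnings
from typing import Tuple, Type


@functools.lru_cache(maxsize=50)
def parse_warning_filter(arg: str, *, escape: bool) -> Tuple[str, str, Type[Warning], str, int]:
    """Parse an `action:message:category:module:lineno` string into the tuple
    warnings.filterwarnings() expects. Pure, so safe to cache."""
    action, message, category_name, module, lineno = (arg.split(":") + [""] * 5)[:5]
    category = getattr(builtins, category_name, Warning) if category_name else Warning
    if escape:
        message, module = re.escape(message), re.escape(module)
    return action, message, category, module, int(lineno or 0)


def apply_warning_filters(filter_strings, *, escape: bool = False) -> None:
    """Apply the (possibly cached) parsed filters; this part is not pure."""
    for arg in filter_strings:
        warnings.filterwarnings(*parse_warning_filter(arg, escape=escape))
```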
On this benchmark:
import pytest
@pytest.mark.parametrize("x", range(5000))
def test_foo(x): pass
Before:
============================ 5000 passed in 14.11s =============================
14365646 function calls (13450775 primitive calls) in 14.536 seconds
After:
============================ 5000 passed in 13.61s =============================
13290372 function calls (12375498 primitive calls) in 14.034 seconds
- replace "tests with warnings" with just warnings: they a) might not
come from a test in the first place, and b) a single test might have
multiple warnings.
- fix usage of (unused) argument to `collapsed_location_report`
Co-authored-by: Ran Benita <ran@unusedvar.com>
Move {Passthrough,CaptureIO} to capture module, and rename Passthrough
-> Tee to match the existing terminology.
Co-authored-by: Ran Benita <ran@unusedvar.com>
Also delay calling tearDown() when --pdb is given, so users still have
access to the instance variables (which are usually cleaned up during tearDown())
when debugging.
Fix #6947
Instead of trying to handle unittest-async functions in pytest_pyfunc_call,
let the unittest framework handle them instead.
This lets us remove the hack in pytest_pyfunc_call, with the upside that
we should support any unittest-async based framework.
Also included 'asynctest' as a test dependency for py37-twisted, and renamed
'twisted' to 'unittestextras' to better reflect that we now install both
'twisted' and 'asynctest'.
This also fixes the problem of cleanup functions not being properly called
for async functions.
Fix #7110
Fix #6924
Previously, the expressions given to the `-m` and `-k` options were
evaluated with `eval`. This causes a few issues:
- Python keywords cannot be used.
- Constants like numbers, None, True, False are not handled correctly.
- Various syntax like numeric operators and `X if Y else Z` is supported
unintentionally.
- `eval()` is somewhat dangerous for arbitrary input.
- Can fail in many ways so requires `except Exception`.
The format we want to support is quite simple, so change to a custom
parser. This fixes the issues above, and gives us full control of the
format, so it can be documented comprehensively and even extended in the
future if we wish.
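For illustration, a compact recursive-descent sketch of such a
`not/and/or/()/ident` grammar (not the actual parser, which builds an AST
and reports error positions):
```
import re
from typing import List, Mapping


def evaluate(expression: str, names: Mapping[str, bool]) -> bool:
    """Evaluate a keyword expression against `names` (missing names are False)."""
    tokens: List[str] = re.findall(r"\(|\)|[^\s()]+", expression)
    pos = 0

    def peek() -> str:
        return tokens[pos] if pos < len(tokens) else ""

    def accept(tok: str) -> bool:
        nonlocal pos
        if peek() == tok:
            pos += 1
            return True
        return False

    def or_expr() -> bool:
        result = and_expr()
        while accept("or"):
            result = and_expr() or result
        return result

    def and_expr() -> bool:
        result = not_expr()
        while accept("and"):
            result = not_expr() and result
        return result

    def not_expr() -> bool:
        nonlocal pos
        if accept("not"):
            return not not_expr()
        if accept("("):
            result = or_expr()
            assert accept(")"), "expected ')'"
            return result
        ident = peek()
        assert ident and ident not in ("and", "or", "not"), "expected identifier"
        pos += 1
        return names.get(ident, False)

    result = or_expr()
    assert pos == len(tokens), "unexpected trailing tokens"
    return result


# e.g. evaluate("foo and not bar", {"foo": True}) -> True
```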
Currently this property is computed eagerly, which means
get_line_width() is called on everything written, but that is a slow
function.
Compute it lazily, so that get_line_width() only runs when needed.
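A minimal sketch of the laziness, with hypothetical attribute names and
`len()` standing in for get_line_width():
```
class TerminalWriterSketch:
    """Remember the tail of the current line cheaply on write; compute its
    width only when the property is actually read."""

    def __init__(self) -> None:
        self._current_line = ""

    def write(self, msg: str) -> None:
        if "\n" in msg:
            self._current_line = msg[msg.rindex("\n") + 1:]
        else:
            self._current_line += msg
        # ... actual output happens here ...

    @property
    def width_of_current_line(self) -> int:
        # Stand-in for get_line_width(); only runs on demand.
        return len(self._current_line)
```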
Flushing on every write is somewhat expensive.
Rely on line buffering instead (if line buffering for stdout is
disabled, there must be some reason...), and add explicit flushes when
not outputting lines.
This is how regular `print()` calls work, for example, so it should be familiar.
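A minimal sketch of the buffering policy (illustrative names): full lines
rely on the stream's own line buffering, and callers pass `flush=True`
for partial output such as progress dots:
```
import sys


class WriterSketch:
    def __init__(self, file=None) -> None:
        self._file = file if file is not None else sys.stdout

    def write(self, msg: str, *, flush: bool = False) -> None:
        self._file.write(msg)
        if flush:
            self._file.flush()

    def line(self, msg: str = "") -> None:
        # The trailing newline lets line buffering flush for us.
        self.write(msg + "\n")
```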
The code used an O(n^2) loop. Replace the list with a set to make it O(n).
For backward compatibility, the filesystem cache remains a list.
On this test:
import pytest
@pytest.mark.parametrize("x", range(5000))
def test_foo(x): pass
run with `pytest --collect-only`:
Before: 0m1.251s
After: 0m0.921s
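A self-contained sketch of the change described above (hypothetical names
and cache path, not the actual cacheprovider code):
```
import json
from pathlib import Path
from typing import Iterable, Set


class NodeidCacheSketch:
    """Keep node IDs in a set in memory for O(1) membership checks, but
    write a sorted list to disk for backward compatibility."""

    def __init__(self, cache_file: Path) -> None:
        self.cache_file = cache_file
        try:
            self.cached_nodeids: Set[str] = set(json.loads(cache_file.read_text()))
        except FileNotFoundError:
            self.cached_nodeids = set()

    def add_many(self, nodeids: Iterable[str]) -> None:
        self.cached_nodeids.update(nodeids)  # no O(n) `in list` checks

    def save(self) -> None:
        # The on-disk format remains a list for backward compatibility.
        self.cache_file.write_text(json.dumps(sorted(self.cached_nodeids)))
```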
This function is exposed and kept alive for the oejskit plugin which is
abandoned and no longer works with recent plugins, so let's prepare to
completely remove it.