When the `NUMBER` option is enabled, floating-point numbers only need to match
as far as the precision you have written in the expected doctest output. This
avoids false positives caused by limited floating-point precision, like this:

    Expected:
        0.233
    Got:
        0.23300000000000001
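
For example, a doctest like the following would normally fail because the full
repr of the result has more digits than the expected output, but it passes with
`NUMBER` because only the written precision has to match. The `circumference`
function is just an illustration invented for this sketch, not part of pytest
or numtest:

```python
import math


def circumference(radius):
    """Return the circumference of a circle of the given radius.

    The exact result is 6.283185307179586, so a plain doctest comparing it
    against the three decimal places written below would fail; with the
    NUMBER option only the written precision has to match.

    >>> circumference(1.0)  # doctest: +NUMBER
    6.283
    """
    return 2 * math.pi * radius
```

Collected with `pytest --doctest-modules`, the doctest above passes with
`NUMBER` and fails without it.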
This is inspired by Sébastien Boisgérault's [numtest], but the implementation
differs in a few ways:
* This implementation edits the floating-point literals in the "got" string
  (the actual output from the expression being tested) and then compares the
  strings literally. This is similar to pytest's existing `ALLOW_UNICODE` and
  `ALLOW_BYTES` implementation (see the sketch after this list).
* This implementation only compares floats against floats, not ints against
  floats. That is, the following doctest fails with pytest whereas it would
  pass with numtest:

      >>> math.pi  # doctest: +NUMBER
      3

  This behaviour should be less surprising (fewer false negatives) when you
  enable `NUMBER` globally in `pytest.ini`.
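
A rough sketch of the rewriting approach from the first bullet is shown below.
It is only an illustration under assumed names (the helper
`_rewrite_got_floats` and its regex are invented for this example and are not
pytest's actual `LiteralsOutputChecker` code): float literals in the "got"
string are rounded to the precision written in the expected ("want") string,
after which a plain string comparison decides the outcome.

```python
import re

# Matches float literals such as 0.233, -1.5 or 6.02e23 (illustrative only).
_FLOAT_RE = re.compile(r"-?\d+\.\d+(?:[eE][+-]?\d+)?")


def _rewrite_got_floats(want: str, got: str) -> str:
    """Round float literals in `got` to the precision written in `want`.

    Hypothetical helper sketching the technique; the real checker in
    pytest is more careful than this.
    """
    want_floats = _FLOAT_RE.findall(want)
    got_floats = list(_FLOAT_RE.finditer(got))
    if len(want_floats) != len(got_floats):
        return got  # different shapes: let the literal comparison fail

    parts, last = [], 0
    for want_text, got_match in zip(want_floats, got_floats):
        # The number of digits written after the decimal point in the
        # expected output determines the precision used for rounding.
        fractional = want_text.split(".")[1].split("e")[0].split("E")[0]
        rounded = round(float(got_match.group(0)), len(fractional))
        if rounded == float(want_text):
            replacement = want_text      # close enough: make the strings equal
        else:
            replacement = repr(rounded)  # keep the mismatch visible
        parts.append(got[last:got_match.start()])
        parts.append(replacement)
        last = got_match.end()
    parts.append(got[last:])
    return "".join(parts)


# The "got" string is rewritten to the expected precision, so a literal
# string comparison now succeeds for the example shown at the top.
assert _rewrite_got_floats("0.233\n", "0.23300000000000001\n") == "0.233\n"
```

Because the pattern requires a decimal point, an integer such as `3` in the
expected output is never rewritten, which matches the float-against-float
behaviour described in the second bullet. Enabling the option globally is a
matter of adding `doctest_optionflags = NUMBER` under `[pytest]` in
`pytest.ini`.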
Advantages of this implementation compared to numtest:
* Doesn't require `import numtest` at the top level of the file.
* Works with pytest (if you try to use pytest & numtest together, pytest
raises "TypeError: unbound method check_output() must be called with
NumTestOutputChecker instance as first argument (got
LiteralsOutputChecker instance instead)").
* Works with Python 3.
[numtest]: https://github.com/boisgera/numtest