Merge branch 'master' into merge-master-into-features

commit 7704f73db9
AUTHORS | 2 +-
@@ -48,7 +48,6 @@ Eduardo Schettino
 Elizaveta Shashkova
 Endre Galaczi
 Eric Hunsberger
-Eric Hunsberger
 Eric Siegerman
 Erik M. Bray
 Feng Ma
@@ -81,6 +80,7 @@ Lukas Bednar
 Maciek Fijalkowski
 Maho
 Marc Schlaich
+Marcin Bachry
 Mark Abramowitz
 Markus Unterwaditzer
 Martijn Faassen
CHANGELOG.rst
@@ -10,7 +10,7 @@
 *
-3.0.1.dev
+3.0.2.dev
 =========
 *
@@ -21,6 +21,33 @@
 *
+
+3.0.1
+=====
+
+* Fix regression when ``importorskip`` is used at module level (`#1822`_).
+  Thanks `@jaraco`_ and `@The-Compiler`_ for the report and `@nicoddemus`_ for the PR.
+
+* Fix parametrization scope when session fixtures are used in conjunction
+  with normal parameters in the same call (`#1832`_).
+  Thanks `@The-Compiler`_ for the report, `@Kingdread`_ and `@nicoddemus`_ for the PR.
+
+* Fix internal error when parametrizing tests or fixtures using an empty ``ids`` argument (`#1849`_).
+  Thanks `@OPpuolitaival`_ for the report and `@nicoddemus`_ for the PR.
+
+* Fix loader error when running ``pytest`` embedded in a zipfile.
+  Thanks `@mbachry`_ for the PR.
+
+.. _@Kingdread: https://github.com/Kingdread
+.. _@mbachry: https://github.com/mbachry
+.. _@OPpuolitaival: https://github.com/OPpuolitaival
+
+.. _#1822: https://github.com/pytest-dev/pytest/issues/1822
+.. _#1832: https://github.com/pytest-dev/pytest/issues/1832
+.. _#1849: https://github.com/pytest-dev/pytest/issues/1849
 3.0.0
 =====
@@ -323,10 +350,6 @@ time or change existing behaviors in order to make them less surprising/more useful
   identify bugs in ``conftest.py`` files (`#1516`_). Thanks `@txomon`_ for
   the PR.
 
-* Add an 'E' to the first line of error messages from FixtureLookupErrorRepr.
-  Fixes `#717`_. Thanks `@blueyed`_ for reporting, `@eolo999`_ for the PR
-  and `@tomviner`_ for his guidance during EuroPython2016 sprint.
-
 * Text documents without any doctests no longer appear as "skipped".
   Thanks `@graingert`_ for reporting and providing a full PR (`#1580`_).
@@ -1225,7 +1248,7 @@ time or change existing behaviors in order to make them less surprising/more useful
   dep). Thanks Charles Cloud for analysing the issue.
 
 - fix conftest related fixture visibility issue: when running with a
-  CWD outside a test package pytest would get fixture discovery wrong.
+  CWD outside of a test package pytest would get fixture discovery wrong.
   Thanks to Wolfgang Schnerring for figuring out a reproducible example.
 
 - Introduce pytest_enter_pdb hook (needed e.g. by pytest_timeout to cancel the
HOWTORELEASE.rst
@@ -3,90 +3,83 @@ How to release pytest
 
 Note: this assumes you have already registered on pypi.
 
-0. create the branch release-VERSION
-   use features as base for minor/major releases
-   and master as base for bugfix releases
-
-1. Bump version numbers in _pytest/__init__.py (setup.py reads it)
-
-2. Check and finalize CHANGELOG
-
-3. Write doc/en/announce/release-VERSION.txt and include
-   it in doc/en/announce/index.txt::
-
-       git log 2.8.2..HEAD --format='%aN' | sort -u  # lists the names of authors involved
-
-4. Use devpi for uploading a release tarball to a staging area::
-
-       devpi use https://devpi.net/USER/dev
-       devpi upload --formats sdist,bdist_wheel
-
-5. Run from multiple machines::
-
-       devpi use https://devpi.net/USER/dev
-       devpi test pytest==VERSION
-
-6. Check that tests pass for relevant combinations with::
-
-       devpi list pytest
-
-   or look at failures with "devpi list -f pytest".
-
-7. Regenerate the docs examples using tox, and check for regressions::
-
-       tox -e regen
-       git diff
-
-8. Build the docs, you need a virtualenv with py and sphinx
-   installed::
-
-       cd doc/en
-       make html
-
-   Commit any changes before tagging the release.
-
-9. Tag the release::
-
-       git tag VERSION
-       git push
-
-10. Upload the docs using doc/en/Makefile::
-
-        cd doc/en
-        make install  # or "installall" if you have LaTeX installed for PDF
-
-    This requires ssh-login permission on pytest.org because it uses
-    rsync.
-    Note that the ``install`` target of ``doc/en/Makefile`` defines where the
-    rsync goes to, typically to the "latest" section of pytest.org.
-
-    If you are making a minor release (e.g. 5.4), you also need to manually
-    create a symlink for "latest"::
-
-        ssh pytest-dev@pytest.org
-        ln -s 5.4 latest
-
-    Browse to pytest.org to verify.
-
-11. Publish to pypi::
-
-        devpi push pytest==VERSION pypi:NAME
-
-    where NAME is the name of pypi.python.org as configured in your ``~/.pypirc``
-    file `for devpi <http://doc.devpi.net/latest/quickstart-releaseprocess.html?highlight=pypirc#devpi-push-releasing-to-an-external-index>`_.
-
-12. Send release announcement to mailing lists:
-
-    - pytest-dev
-    - testing-in-python
-    - python-announce-list@python.org
-
-13. **after the release** Bump the version number in ``_pytest/__init__.py``,
-    to the next Minor release version (i.e. if you released ``pytest-2.8.0``,
-    set it to ``pytest-2.9.0.dev1``).
-
-14. merge the actual release into the master branch and do a pull request against it
-15. merge from master to features
+1. Bump version numbers in ``_pytest/__init__.py`` (``setup.py`` reads it).
+
+2. Check and finalize ``CHANGELOG.rst``.
+
+3. Write ``doc/en/announce/release-VERSION.txt`` and include
+   it in ``doc/en/announce/index.txt``. Run this command to list names of authors involved::
+
+       git log $(git describe --abbrev=0 --tags)..HEAD --format='%aN' | sort -u
+
+4. Regenerate the docs examples using tox::
+
+       tox -e regen
+
+5. At this point, open a PR named ``release-X`` so others can help find regressions or provide suggestions.
+
+6. Use devpi for uploading a release tarball to a staging area::
+
+       devpi use https://devpi.net/USER/dev
+       devpi upload --formats sdist,bdist_wheel
+
+7. Run from multiple machines::
+
+       devpi use https://devpi.net/USER/dev
+       devpi test pytest==VERSION
+
+   Alternatively, you can use `devpi-cloud-tester <https://github.com/nicoddemus/devpi-cloud-tester>`_ to test
+   the package on AppVeyor and Travis (follow instructions on the ``README``).
+
+8. Check that tests pass for relevant combinations with::
+
+       devpi list pytest
+
+   or look at failures with "devpi list -f pytest".
+
+9. Feeling confident? Publish to pypi::
+
+       devpi push pytest==VERSION pypi:NAME
+
+   where NAME is the name of pypi.python.org as configured in your ``~/.pypirc``
+   file `for devpi <http://doc.devpi.net/latest/quickstart-releaseprocess.html?highlight=pypirc#devpi-push-releasing-to-an-external-index>`_.
+
+10. Tag the release::
+
+        git tag VERSION <hash>
+        git push origin VERSION
+
+    Make sure ``<hash>`` is **exactly** the git hash at the time the package was created.
+
+11. Send release announcement to mailing lists:
+
+    - pytest-dev@python.org
+    - testing-in-python@lists.idyll.org
+    - python-announce-list@python.org
+
+    And announce the release on Twitter, making sure to add the hashtag ``#pytest``.
+
+12. **After the release**
+
+    a. **patch release (2.8.3)**:
+
+       1. Checkout ``master``.
+       2. Update version number in ``_pytest/__init__.py`` to ``"2.8.4.dev"``.
+       3. Create a new section in ``CHANGELOG.rst`` titled ``2.8.4.dev`` and add a few bullet points as placeholders for new entries.
+       4. Commit and push.
+
+    b. **minor release (2.9.0)**:
+
+       1. Merge ``features`` into ``master``.
+       2. Checkout ``master``.
+       3. Follow the same steps for a **patch release** above, using the next patch release: ``2.9.1.dev``.
+       4. Commit ``master``.
+       5. Checkout ``features`` and merge with ``master`` (should be a fast-forward at this point).
+       6. Update version number in ``_pytest/__init__.py`` to the next minor release: ``"2.10.0.dev"``.
+       7. Create a new section in ``CHANGELOG.rst`` titled ``2.10.0.dev``, above ``2.9.1.dev``, and add a few bullet points as placeholders for new entries.
+       8. Commit ``features``.
+       9. Push ``master`` and ``features``.
+
+    c. **major release (3.0.0)**: same steps as that of a **minor release**
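Step 10's requirement that the tag point at the exact build hash can be exercised in a scratch repository. This is an illustrative sketch only: the version number, identity, and commit message below are placeholders, and no remote push is performed.

```shell
# Sketch of step 10 in a throwaway repo: tag the exact hash the package was
# built from, then confirm the tag resolves back to that same commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=release -c user.email=release@example.com \
    commit -q --allow-empty -m "build pytest sdist/wheel here"
hash=$(git rev-parse HEAD)
git tag 3.0.2 "$hash"            # git tag VERSION <hash>
git rev-parse "3.0.2^{commit}"   # prints the tagged commit's hash
```

In a real release you would follow this with ``git push origin VERSION`` so the tag reaches the shared repository.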
ISSUES.txt | 365 deleted
@@ -1,365 +0,0 @@
(the whole file was removed; its former contents follow)

recorder = monkeypatch.function(".......")
-------------------------------------------------------------
tags: nice feature

Like monkeypatch.replace but sets a mock-like call recorder:

    recorder = monkeypatch.function("os.path.abspath")
    recorder.set_return("/hello")
    os.path.abspath("hello")
    call, = recorder.calls
    assert call.args.path == "hello"
    assert call.returned == "/hello"
    ...

Unlike mock, "args.path" acts on the parsed auto-spec'ed ``os.path.abspath``
so it's independent from if the client side called "os.path.abspath(path=...)"
or "os.path.abspath('positional')".


refine parametrize API
-------------------------------------------------------------
tags: critical feature

extend metafunc.parametrize to directly support indirection, example:

    def setupdb(request, config):
        # setup "resource" based on test request and the values passed
        # in to parametrize.  setupfunc is called for each such value.
        # you may use request.addfinalizer() or request.cached_setup ...
        return dynamic_setup_database(val)

    @pytest.mark.parametrize("db", ["pg", "mysql"], setupfunc=setupdb)
    def test_heavy_functional_test(db):
        ...

There would be no need to write or explain funcarg factories and
their special __ syntax.

The examples and improvements should also show how to put the parametrize
decorator to a class, to a module or even to a directory.  For the directory
part a conftest.py content like this::

    pytestmark = [
        @pytest.mark.parametrize_setup("db", ...),
    ]

probably makes sense in order to keep the declarative nature.  This mirrors
the marker-mechanism with respect to a test module but puts it to a directory
scale.

When doing larger scoped parametrization it probably becomes necessary
to allow parametrization to be ignored if the according parameter is not
used (currently any parametrized argument that is not present in a function
will cause a ValueError). Example:

    @pytest.mark.parametrize("db", ..., mustmatch=False)

means to not raise an error but simply ignore the parametrization
if the signature of a decorated function does not match. XXX is it
not sufficient to always allow non-matches?


allow parametrized attributes on classes
--------------------------------------------------
tags: wish 2.4

example:

    @pytest.mark.parametrize_attr("db", setupfunc, [1,2,3], scope="class")
    @pytest.mark.parametrize_attr("tmp", setupfunc, scope="...")
    class TestMe:
        def test_hello(self):
            access self.db ...

this would run the test_hello() function three times with three
different values for self.db. This could also work with unittest/nose
style tests, i.e. it leverages existing test suites without needing
to rewrite them. Together with the previously mentioned setup_test()
maybe the setupfunc could be omitted?


optimizations
---------------------------------------------------------------
tags: 2.4 core

- look at ihook optimization such that all lookups for
  hooks relating to the same fspath are cached.


fix start/finish partial finalization problem
---------------------------------------------------------------
tags: bug core

if a configure/runtest_setup/sessionstart/... hook invocation partially
fails the sessionfinishes is not called.  Each hook implementation
should better be responsible for registering a cleanup/finalizer
appropriately to avoid this issue.  Moreover/Alternatively, we could
record which implementations of a hook succeeded and only call their
teardown.


relax requirement to have tests/testing contain an __init__
----------------------------------------------------------------
tags: feature
bb: http://bitbucket.org/hpk42/py-trunk/issue/64

A local test run of a "tests" directory may work
but a remote one fail because the tests directory
does not contain an "__init__.py". Either give
an error or make it work without the __init__.py
i.e. port the nose-logic of unloading a test module.


customize test function collection
-------------------------------------------------------
tags: feature

- introduce pytest.mark.nocollect for not considering a function for
  test collection at all.  maybe also introduce a pytest.mark.test to
  explicitly mark a function to become a tested one.  Lookup JUnit ways
  of tagging tests.


introduce pytest.mark.importorskip
-------------------------------------------------------
tags: feature

in addition to the imperative pytest.importorskip also introduce
a pytest.mark.importorskip so that the test count is more correct.


introduce pytest.mark.platform
-------------------------------------------------------
tags: feature

Introduce nice-to-spell platform-skipping, examples:

    @pytest.mark.platform("python3")
    @pytest.mark.platform("not python3")
    @pytest.mark.platform("win32 and not python3")
    @pytest.mark.platform("darwin")
    @pytest.mark.platform("not (jython and win32)")
    @pytest.mark.platform("not (jython and win32)", xfail=True)

etc. Idea is to allow Python expressions which can operate
on common spellings for operating systems and python
interpreter versions.


pytest.mark.xfail signature change
-------------------------------------------------------
tags: feature

change to pytest.mark.xfail(reason, (optional)condition)
to better implement the word meaning.  It also signals
better that we always have some kind of an implementation
reason that can be formulated.
Compatibility? how to introduce a new name/keep compat?


allow to non-intrusively apply skips/xfail/marks
---------------------------------------------------
tags: feature

use case: mark a module or directory structures
to be skipped on certain platforms (i.e. no import
attempt will be made).

consider introducing a hook/mechanism that allows to apply marks
from conftests or plugins. (See extended parametrization)


explicit referencing of conftest.py files
-----------------------------------------
tags: feature

allow to name conftest.py files (in sub directories) that should
be imported early, as to include command line options.


improve central pytest ini file
-------------------------------
tags: feature

introduce more declarative configuration options:
- (to-be-collected test directories)
- required plugins
- test func/class/file matching patterns
- skip/xfail (non-intrusive)
- pytest.ini and tox.ini and setup.cfg configuration in the same file


new documentation
----------------------------------
tags: feature

- logo pytest
- examples for unittest or functional testing
- resource management for functional testing
- patterns: page object


have imported module mismatch honour relative paths
--------------------------------------------------------
tags: bug

With 1.1.1 pytest fails at least on windows if an import
is relative and compared against an absolute conftest.py
path. Normalize.


consider globals: pytest.ensuretemp and config
--------------------------------------------------------------
tags: experimental-wish

consider deprecating pytest.ensuretemp and pytest.config
to further reduce pytest globality.  Also consider
having pytest.config and ensuretemp coming from
a plugin rather than being there from the start.


consider pytest_addsyspath hook
-----------------------------------------
tags: wish

pytest could call a new pytest_addsyspath() in order to systematically
allow manipulation of sys.path and to inhibit it via --no-addsyspath
in order to more easily run against installed packages.

Alternatively it could also be done via the config object
and pytest_configure.


deprecate global pytest.config usage
----------------------------------------------------------------
tags: feature

pytest.ensuretemp and pytest.config are probably the last
objects containing global state.  Often using them is not
necessary.  This is about trying to get rid of them, i.e.
deprecating them and checking with PyPy's usages as well
as others.


remove deprecated bits in collect.py
-------------------------------------------------------------------
tags: feature

In an effort to further simplify code, review and remove deprecated bits
in collect.py.  Probably good:
- inline consider_file/dir methods, no need to have them
  subclass-overridable because of hooks


implement fslayout decorator
---------------------------------
tags: feature

Improve the way how tests can work with pre-made examples,
keeping the layout close to the test function:

    @pytest.mark.fslayout("""
        conftest.py:
            # empty
        tests/
            test_%(NAME)s:  # becomes test_run1.py
                def test_function(self):
                    pass
    """)
    def test_run(pytester, fslayout):
        p = fslayout.findone("test_*.py")
        result = pytester.runpytest(p)
        assert result.ret == 0
        assert result.passed == 1

Another idea is to allow to define a full scenario including the run
in one content string::

    runscenario("""
        test_{TESTNAME}.py:
            import pytest
            @pytest.mark.xfail
            def test_that_fails():
                assert 0

            @pytest.mark.skipif("True")
            def test_hello():
                pass

        conftest.py:
            import pytest
            def pytest_runsetup_setup(item):
                pytest.skip("abc")

        runpytest -rsxX
        *SKIP*{TESTNAME}*
        *1 skipped*
    """)

This could be run with at least three different ways to invoke pytest:
through the shell, through "python -m pytest" and inlined. As inlined
would be the fastest it could be run first (or "--fast" mode).


Create isolate plugin
---------------------
tags: feature

The idea is that you can e.g. import modules in a test and afterwards
sys.modules, sys.meta_path etc would be reverted.  It can go further
than just importing however, e.g. current working directory, file
descriptors, ...

This would probably be done by marking::

    @pytest.mark.isolate(importing=True, cwd=True, fds=False)
    def test_foo():
        ...

With the possibility of doing this globally in an ini-file.


fnmatch for test names
----------------------
tags: feature-wish

various testsuites use suffixes instead of prefixes for test classes
also it lends itself to bdd style test names::

    class UserBehaviour:
        def anonymous_should_not_have_inbox(user):
            ...
        def registred_should_have_inbox(user):
            ..

using the following in pytest.ini::

    [pytest]
    python_classes = Test *Behaviour *Test
    python_functions = test *_should_*


mechanism for running named parts of tests with different reporting behaviour
------------------------------------------------------------------------------
tags: feature-wish-incomplete

a few use-cases come to mind:

* fail assertions and record that without stopping a complete test

* this is in particular helpful if a small bit of a test is known to fail/xfail::

    def test_fun():
        with pytest.section('fdcheck', marks=pytest.mark.xfail_if(...)):
            breaks_on_windows()

* divide functional/acceptance tests into sections
* provide a different mechanism for generators, maybe something like::

    def pytest_runtest_call(item):
        if not generator:
            ...
        prepare_check = GeneratorCheckprepare()

        gen = item.obj(**fixtures)
        for check in gen:
            id, call = prepare_check(check)
            # bubble should only prevent exception propagation after a failure
            # the whole test should still fail
            # there might be need for a lower level api and taking custom markers into account
            with pytest.section(id, bubble=False):
                call()
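The "fnmatch for test names" idea in the removed wishlist boils down to glob-matching class and function names against configured patterns. A minimal sketch of that rule (the pattern lists below are illustrative spellings of the ini values, not pytest's actual matcher):

```python
# Minimal sketch of the proposed matching rule: a name is accepted when it
# matches any configured fnmatch-style pattern (patterns are illustrative).
from fnmatch import fnmatch

class_patterns = ["Test*", "*Behaviour", "*Test"]
func_patterns = ["test*", "*_should_*"]

def matches(name, patterns):
    return any(fnmatch(name, pat) for pat in patterns)

print(matches("UserBehaviour", class_patterns))                   # True
print(matches("anonymous_should_not_have_inbox", func_patterns))  # True
print(matches("Helper", class_patterns))                          # False
```

pytest later adopted this feature via the ``python_classes`` and ``python_functions`` ini options, which treat bare words as prefixes rather than requiring an explicit ``*``.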
MANIFEST.in
@@ -4,6 +4,7 @@ include AUTHORS
 include README.rst
 include CONTRIBUTING.rst
+include HOWTORELEASE.rst
 include tox.ini
 include setup.py
@@ -29,6 +30,3 @@ recursive-exclude * *.pyc *.pyo
 exclude appveyor/install.ps1
 exclude appveyor.yml
 exclude appveyor
-
-exclude ISSUES.txt
-exclude HOWTORELEASE.rst
_pytest/main.py
@@ -687,7 +687,7 @@ class Session(FSCollector):
         # This method is sometimes invoked when AssertionRewritingHook, which
         # does not define a get_filename method, is already in place:
         try:
-            path = loader.get_filename()
+            path = loader.get_filename(x)
         except AttributeError:
             # Retrieve path from AssertionRewritingHook:
             path = loader.modules[x][0].co_filename
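The hunk above passes the module name to ``get_filename`` (``zipimport.zipimporter.get_filename(fullname)`` requires it), which is what fixes running pytest from a zipfile. The surrounding fallback pattern can be sketched with stand-in loaders (the class names below are hypothetical, not pytest's):

```python
# Sketch of the loader fallback above: prefer loader.get_filename(name);
# loaders without that method (like the AssertionRewritingHook mentioned in
# the comment) expose the path through their modules bookkeeping instead.
def module_path(loader, name):
    try:
        return loader.get_filename(name)
    except AttributeError:
        return loader.modules[name][0].co_filename

class ZipLikeLoader:            # stand-in: has get_filename(name)
    def get_filename(self, name):
        return "/archive.zip/%s.py" % name

class RewritingHookLike:        # stand-in: only keeps (code, ...) per module
    def __init__(self):
        self.modules = {
            "pkg.mod": (compile("pass", "/src/pkg/mod.py", "exec"), None),
        }

print(module_path(ZipLikeLoader(), "pkg.mod"))      # /archive.zip/pkg.mod.py
print(module_path(RewritingHookLike(), "pkg.mod"))  # /src/pkg/mod.py
```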
_pytest/mark.py
@@ -283,6 +283,21 @@ class MarkDecorator:
         return self.__class__(self.name, args=args, kwargs=kw)
 
 
+def extract_argvalue(maybe_marked_args):
+    # TODO: incorrect mark data, the old code wasn't able to collect lists
+    # individual parametrized argument sets can be wrapped in a series
+    # of markers in which case we unwrap the values and apply the mark
+    # at Function init
+    newmarks = {}
+    argval = maybe_marked_args
+    while isinstance(argval, MarkDecorator):
+        newmark = MarkDecorator(argval.markname,
+                                argval.args[:-1], argval.kwargs)
+        newmarks[newmark.markname] = newmark
+        argval = argval.args[-1]
+    return argval, newmarks
+
+
 class MarkInfo:
     """ Marking object created by :class:`MarkDecorator` instances. """
     def __init__(self, name, args, kwargs):
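The new ``extract_argvalue`` helper can be exercised with a minimal stand-in ``MarkDecorator``. The class below is a simplified sketch (just the attributes the helper touches), not pytest's real implementation:

```python
# Simplified stand-in for _pytest.mark.MarkDecorator, just enough to show
# how extract_argvalue peels nested marks off a parametrize argument value.
class MarkDecorator:
    def __init__(self, markname, args=(), kwargs=None):
        self.markname = markname
        self.args = tuple(args)
        self.kwargs = kwargs or {}

def extract_argvalue(maybe_marked_args):
    newmarks = {}
    argval = maybe_marked_args
    while isinstance(argval, MarkDecorator):
        # split off the innermost positional arg: it is the wrapped value
        newmark = MarkDecorator(argval.markname,
                                argval.args[:-1], argval.kwargs)
        newmarks[newmark.markname] = newmark
        argval = argval.args[-1]
    return argval, newmarks

# analogue of pytest.mark.xfail(pytest.mark.skipif("cond", ("a", "b"))):
wrapped = MarkDecorator("xfail",
                        (MarkDecorator("skipif", ("cond", ("a", "b"))),))
value, marks = extract_argvalue(wrapped)
print(value)          # ('a', 'b')
print(sorted(marks))  # ['skipif', 'xfail']
```

Both marks are collected and the bare value set is returned, which is exactly what ``Metafunc.parametrize`` needs below.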
_pytest/python.py
@@ -5,10 +5,11 @@ import inspect
 import sys
 import collections
 import math
+from itertools import count
 
 import py
 import pytest
-from _pytest.mark import MarkDecorator, MarkerError
+from _pytest.mark import MarkerError
 
 
 import _pytest
@@ -431,10 +432,12 @@ class Module(pytest.File, PyCollector):
                 "Make sure your test modules/packages have valid Python names."
                 % (self.fspath, exc or exc_class)
             )
-        except _pytest.runner.Skipped:
+        except _pytest.runner.Skipped as e:
+            if e.allow_module_level:
+                raise
             raise self.CollectError(
-                "Using @pytest.skip outside a test (e.g. as a test function "
-                "decorator) is not allowed. Use @pytest.mark.skip or "
+                "Using @pytest.skip outside of a test (e.g. as a test "
+                "function decorator) is not allowed. Use @pytest.mark.skip or "
                 "@pytest.mark.skipif instead."
             )
         self.config.pluginmanager.consider_module(mod)
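The ``allow_module_level`` re-raise added above is the fix for #1822: a skip raised by ``pytest.importorskip`` at module level must propagate as a skip, while a misused ``@pytest.skip`` decorator still becomes a collection error. A simplified model of that control flow (the class and function names mirror pytest's but this is a sketch, not the real code):

```python
# Simplified model of the control flow above: a Skipped raised during module
# import only becomes a CollectError when module-level skipping was not
# explicitly allowed (pytest.importorskip sets allow_module_level=True).
class Skipped(Exception):
    def __init__(self, msg="", allow_module_level=False):
        super().__init__(msg)
        self.allow_module_level = allow_module_level

class CollectError(Exception):
    pass

def import_and_collect(import_module):
    try:
        return import_module()
    except Skipped as e:
        if e.allow_module_level:
            raise                     # legitimate module-level skip
        raise CollectError(
            "Using @pytest.skip outside of a test (e.g. as a test "
            "function decorator) is not allowed. Use @pytest.mark.skip or "
            "@pytest.mark.skipif instead.")

def importorskip_style():             # like: np = pytest.importorskip("numpy")
    raise Skipped("could not import numpy", allow_module_level=True)

def decorator_misuse():               # like @pytest.skip on a test function
    raise Skipped("skipped via @pytest.skip")
```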
@@ -774,19 +777,14 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
         to set a dynamic scope using test context or configuration.
         """
         from _pytest.fixtures import scopes
-        # individual parametrized argument sets can be wrapped in a series
-        # of markers in which case we unwrap the values and apply the mark
-        # at Function init
-        newkeywords = {}
+        from _pytest.mark import extract_argvalue
+
         unwrapped_argvalues = []
-        for i, argval in enumerate(argvalues):
-            while isinstance(argval, MarkDecorator):
-                newmark = MarkDecorator(argval.markname,
-                                        argval.args[:-1], argval.kwargs)
-                newmarks = newkeywords.setdefault(i, {})
-                newmarks[newmark.markname] = newmark
-                argval = argval.args[-1]
-            unwrapped_argvalues.append(argval)
+        newkeywords = []
+        for maybe_marked_args in argvalues:
+            argval, newmarks = extract_argvalue(maybe_marked_args)
+            unwrapped_argvalues.append(argval)
+            newkeywords.append(newmarks)
         argvalues = unwrapped_argvalues
 
         if not isinstance(argnames, (tuple, list)):
@@ -801,18 +799,11 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
             newmark = pytest.mark.skip(
                 reason="got empty parameter set %r, function %s at %s:%d" % (
                     argnames, self.function.__name__, fs, lineno))
-            newmarks = newkeywords.setdefault(0, {})
-            newmarks[newmark.markname] = newmark
+            newkeywords = [{newmark.markname: newmark}]
 
         if scope is None:
-            if self._arg2fixturedefs:
-                # Takes the most narrow scope from used fixtures
-                fixtures_scopes = [fixturedef[0].scope for fixturedef in self._arg2fixturedefs.values()]
-                for scope in reversed(scopes):
-                    if scope in fixtures_scopes:
-                        break
-                else:
-                    scope = 'function'
+            scope = _find_parametrized_scope(argnames, self._arg2fixturedefs, indirect)
         scopenum = scopes.index(scope)
         valtypes = {}
         for arg in argnames:
@@ -846,12 +837,12 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
         ids = idmaker(argnames, argvalues, idfn, ids, self.config)
         newcalls = []
         for callspec in self._calls or [CallSpec2(self)]:
-            for param_index, valset in enumerate(argvalues):
+            elements = zip(ids, argvalues, newkeywords, count())
+            for a_id, valset, keywords, param_index in elements:
                 assert len(valset) == len(argnames)
                 newcallspec = callspec.copy(self)
-                newcallspec.setmulti(valtypes, argnames, valset, ids[param_index],
-                                     newkeywords.get(param_index, {}), scopenum,
-                                     param_index)
+                newcallspec.setmulti(valtypes, argnames, valset, a_id,
+                                     keywords, scopenum, param_index)
                 newcalls.append(newcallspec)
         self._calls = newcalls

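The hunks above move the mark-unwrapping out of ``Metafunc.parametrize`` into an ``extract_argvalue`` helper and carry one keywords dict per parameter set, zipped together with the ids. A minimal standalone sketch of that unwrapping, assuming a mark wraps the value as its last positional argument; ``FakeMark`` and ``extract_argvalue_sketch`` are illustrative names, not pytest API:

```python
# Minimal sketch of the unwrapping that ``extract_argvalue`` performs.
# ``FakeMark`` stands in for pytest's MarkDecorator; it is not pytest API.
class FakeMark:
    def __init__(self, markname, *args):
        self.markname = markname
        self.args = args

def extract_argvalue_sketch(maybe_marked):
    """Unwrap nested marks, returning (plain value, {markname: mark})."""
    newmarks = {}
    while isinstance(maybe_marked, FakeMark):
        newmarks[maybe_marked.markname] = maybe_marked
        maybe_marked = maybe_marked.args[-1]  # wrapped value is the last arg
    return maybe_marked, newmarks

value, marks = extract_argvalue_sketch(FakeMark("xfail", FakeMark("skip", 42)))
print(value, sorted(marks))  # 42 ['skip', 'xfail']
```

Unmarked values pass through unchanged, which is why the caller can simply zip the collected per-parameter mark dicts with the unwrapped values.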
@@ -892,6 +883,30 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
         self._calls.append(cs)


+def _find_parametrized_scope(argnames, arg2fixturedefs, indirect):
+    """Find the most appropriate scope for a parametrized call based on its arguments.
+
+    When there's at least one direct argument, always use "function" scope.
+
+    When a test function is parametrized and all its arguments are indirect
+    (e.g. fixtures), return the most narrow scope based on the fixtures used.
+
+    Related to issue #1832, based on code posted by @Kingdread.
+    """
+    from _pytest.fixtures import scopes
+    indirect_as_list = isinstance(indirect, (list, tuple))
+    all_arguments_are_fixtures = indirect is True or \
+        indirect_as_list and len(indirect) == argnames
+    if all_arguments_are_fixtures:
+        fixturedefs = arg2fixturedefs or {}
+        used_scopes = [fixturedef[0].scope for name, fixturedef in fixturedefs.items()]
+        if used_scopes:
+            # Takes the most narrow scope from used fixtures
+            for scope in reversed(scopes):
+                if scope in used_scopes:
+                    return scope
+
+    return 'function'
+
+
 def _idval(val, argname, idx, idfn, config=None):
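The new ``_find_parametrized_scope`` helper above resolves issue #1832 by picking the narrowest scope among the used fixtures when every parametrized argument is indirect. The selection itself can be sketched in isolation; ``SCOPES`` mirrors the broadest-to-narrowest ordering of ``_pytest.fixtures.scopes``, and ``narrowest_scope`` is an illustrative name:

```python
# Sketch of the narrowest-scope selection, assuming pytest's scope
# ordering from broadest to narrowest:
SCOPES = ["session", "module", "class", "function"]

def narrowest_scope(used_scopes):
    """Return the most narrow scope present in ``used_scopes``."""
    for scope in reversed(SCOPES):  # iterate narrowest first
        if scope in used_scopes:
            return scope
    return "function"  # no fixture scopes known: default to function scope

print(narrowest_scope(["session", "module"]))  # module
print(narrowest_scope([]))                     # function
```

Iterating the reversed list means the first match is the narrowest scope any involved fixture uses, so a session-scoped fixture never forces a wider scope than a module-scoped one used in the same call.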
@@ -921,7 +936,7 @@ def _idval(val, argname, idx, idfn, config=None):
     return str(argname)+str(idx)


 def _idvalset(idx, valset, argnames, idfn, ids, config=None):
-    if ids is None or ids[idx] is None:
+    if ids is None or (idx >= len(ids) or ids[idx] is None):
         this_id = [_idval(val, argname, idx, idfn, config)
                    for val, argname in zip(valset, argnames)]
         return "-".join(this_id)
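The widened condition in ``_idvalset`` above fixes the internal error with an empty ``ids`` argument (issue #1849): an ``ids`` list that is empty or shorter than the number of parameter sets now falls back to a generated id instead of raising ``IndexError``. A sketch of that guard (``pick_id`` is an illustrative helper, not pytest API):

```python
# Sketch of the guard: a user-supplied ``ids`` list that is None, empty,
# shorter than the parameter sets, or has a None entry falls back to an
# automatically generated id.
def pick_id(idx, ids, fallback):
    # short-circuiting keeps ``len(ids)`` / ``ids[idx]`` safe to evaluate
    if ids is None or idx >= len(ids) or ids[idx] is None:
        return fallback(idx)
    return ids[idx]

print(pick_id(0, [], lambda i: "auto%d" % i))             # auto0
print(pick_id(1, ["one", None], lambda i: "auto%d" % i))  # auto1
print(pick_id(0, ["one", None], lambda i: "auto%d" % i))  # one
```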
@@ -492,10 +492,16 @@ class Skipped(OutcomeException):
     # in order to have Skipped exception printing shorter/nicer
     __module__ = 'builtins'

+    def __init__(self, msg=None, pytrace=True, allow_module_level=False):
+        OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
+        self.allow_module_level = allow_module_level
+
+
 class Failed(OutcomeException):
     """ raised from an explicit call to pytest.fail() """
     __module__ = 'builtins'


 class Exit(KeyboardInterrupt):
     """ raised for immediate program exits (no tracebacks/summaries)"""
     def __init__(self, msg="unknown reason"):
@@ -546,7 +552,7 @@ def importorskip(modname, minversion=None):
         # Do not raise chained exception here(#1485)
         should_skip = True
     if should_skip:
-        skip("could not import %r" %(modname,))
+        raise Skipped("could not import %r" %(modname,), allow_module_level=True)
     mod = sys.modules[modname]
     if minversion is None:
         return mod
@@ -555,10 +561,11 @@
     try:
         from pkg_resources import parse_version as pv
     except ImportError:
-        skip("we have a required version for %r but can not import "
-             "no pkg_resources to parse version strings." %(modname,))
+        raise Skipped("we have a required version for %r but can not import "
+                      "pkg_resources to parse version strings." % (modname,),
+                      allow_module_level=True)
     if verattr is None or pv(verattr) < pv(minversion):
-        skip("module %r has __version__ %r, required is: %r" %(
-            modname, verattr, minversion))
+        raise Skipped("module %r has __version__ %r, required is: %r" %(
+            modname, verattr, minversion), allow_module_level=True)
     return mod
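Raising ``Skipped(..., allow_module_level=True)`` instead of calling ``skip()`` lets pytest tell a deliberate module-level skip coming from ``importorskip`` apart from a stray ``Skipped`` raised during import (issue #1822). A self-contained sketch of the mechanism; ``SkippedSketch`` and ``importorskip_sketch`` are illustrative names, not pytest internals:

```python
# Sketch of the module-level-skip flag: the exception carries an attribute
# that a collector could inspect to decide whether the skip is legitimate
# at module scope.
import importlib

class SkippedSketch(Exception):
    def __init__(self, msg, allow_module_level=False):
        super().__init__(msg)
        self.allow_module_level = allow_module_level

def importorskip_sketch(modname):
    """Import ``modname`` or raise a skip that is valid at module level."""
    try:
        return importlib.import_module(modname)
    except ImportError:
        raise SkippedSketch("could not import %r" % (modname,),
                            allow_module_level=True)

mod = importorskip_sketch("json")  # stdlib module: import succeeds
try:
    importorskip_sketch("no_such_module_xyz")
except SkippedSketch as exc:
    print(exc.allow_module_level)  # True
```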
@@ -7,6 +7,8 @@ Release announcements


    sprint2016
+   release-3.0.1
+   release-3.0.0
    release-2.9.2
    release-2.9.1
    release-2.9.0
@@ -41,7 +41,7 @@ Changes 2.6.3
   dep). Thanks Charles Cloud for analysing the issue.

 - fix conftest related fixture visibility issue: when running with a
-  CWD outside a test package pytest would get fixture discovery wrong.
+  CWD outside of a test package pytest would get fixture discovery wrong.
   Thanks to Wolfgang Schnerring for figuring out a reproducable example.

 - Introduce pytest_enter_pdb hook (needed e.g. by pytest_timeout to cancel the
@@ -6,7 +6,7 @@ The pytest team is proud to announce the 3.0.0 release!
 pytest is a mature Python testing tool with more than a 1600 tests
 against itself, passing on many different interpreters and platforms.

-This release contains a lot of bugs and improvements, and much of
+This release contains a lot of bugs fixes and improvements, and much of
 the work done on it was possible because of the 2016 Sprint[1], which
 was funded by an indiegogo campaign which raised over US$12,000 with
 nearly 100 backers.
@@ -76,7 +76,7 @@ Thanks to all who contributed to this release, among them:


 Happy testing,
-The py.test Development Team
+The Pytest Development Team

 [1] http://blog.pytest.org/2016/pytest-development-sprint/
 [2] http://blog.pytest.org/2016/whats-new-in-pytest-30/
@@ -0,0 +1,26 @@
+pytest-3.0.1
+============
+
+pytest 3.0.1 has just been released to PyPI.
+
+This release fixes some regressions reported in version 3.0.0, being a
+drop-in replacement. To upgrade:
+
+  pip install --upgrade pytest
+
+The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+  Adam Chainz
+  Andrew Svetlov
+  Bruno Oliveira
+  Daniel Hahler
+  Dmitry Dygalo
+  Florian Bruhin
+  Marcin Bachry
+  Ronny Pfannschmidt
+  matthiasha
+
+Happy testing,
+The py.test Development Team
@@ -26,7 +26,7 @@ you will see the return value of the function call::

     $ pytest test_assert1.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 items

@@ -170,7 +170,7 @@ if you run this module::

     $ pytest test_assert2.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 items

@@ -80,7 +80,7 @@ If you then run it with ``--lf``::

     $ pytest --lf
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     run-last-failure: rerun last 2 failures
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 50 items
@@ -122,7 +122,7 @@ of ``FF`` and dots)::

     $ pytest --ff
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     run-last-failure: rerun last 2 failures first
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 50 items
@@ -227,7 +227,7 @@ You can always peek at the content of the cache using the

     $ py.test --cache-show
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     cachedir: $REGENDOC_TMPDIR/.cache
     ------------------------------- cache values -------------------------------
@@ -64,7 +64,7 @@ of the failing function and hide the other one::

     $ pytest
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

@@ -48,7 +48,7 @@ then you can just invoke ``pytest`` without command line options::

     $ pytest
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 1 items

@@ -1,4 +0,0 @@
-[pytest]
-testfilepatterns =
-    ${topdir}/tests/unit/test_${basename}
-    ${topdir}/tests/functional/*.py
@@ -31,7 +31,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::

     $ pytest -v -m webtest
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items
@@ -45,7 +45,7 @@ Or the inverse, running all tests except the webtest ones::

     $ pytest -v -m "not webtest"
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items
@@ -66,7 +66,7 @@ tests based on their module, class, method, or function name::

     $ pytest -v test_server.py::TestClass::test_method
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 5 items
@@ -79,7 +79,7 @@ You can also select on the class::

     $ pytest -v test_server.py::TestClass
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items
@@ -92,7 +92,7 @@ Or select multiple nodes::

     $ pytest -v test_server.py::TestClass test_server.py::test_send_http
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 8 items
@@ -130,7 +130,7 @@ select tests based on their names::

     $ pytest -v -k http # running with the above defined example module
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items
@@ -144,7 +144,7 @@ And you can also run all tests except the ones that match the keyword::

     $ pytest -k "not send_http" -v
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items
@@ -160,7 +160,7 @@ Or to select "http" and "quick" tests::

     $ pytest -k "http or quick" -v
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 4 items
@@ -352,7 +352,7 @@ the test needs::

     $ pytest -E stage2
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 items

@@ -364,7 +364,7 @@ and here is one that specifies exactly the environment needed::

     $ pytest -E stage1
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 items

@@ -485,7 +485,7 @@ then you will see two test skipped and two executed tests as expected::

     $ pytest -rs # this option reports skip reasons
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

@@ -499,7 +499,7 @@ Note that if you specify a platform via the marker-command line option like this

     $ pytest -m linux2
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

@@ -551,7 +551,7 @@ We can now use the ``-m option`` to select one set::

     $ pytest -m interface --tb=short
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

@@ -573,7 +573,7 @@ or to select both "event" and "interface" tests::

     $ pytest -m "interface or event" --tb=short
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

@@ -27,7 +27,7 @@ now execute the test specification::

     nonpython $ pytest test_simple.yml
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     collected 2 items

@@ -59,7 +59,7 @@ consulted when reporting in ``verbose`` mode::

     nonpython $ pytest -v
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
     cachedir: .cache
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     collecting ... collected 2 items
@@ -81,7 +81,7 @@ interesting to just look at the collection tree::

     nonpython $ pytest --collect-only
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
     collected 2 items
     <YamlFile 'test_simple.yml'>
@@ -130,7 +130,7 @@ objects, they are still using the default pytest representation::

     $ pytest test_time.py --collect-only
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 6 items
     <Module 'test_time.py'>
@@ -181,7 +181,7 @@ this is a fully self-contained example which you can run with::

     $ pytest test_scenarios.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

@@ -194,7 +194,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia

     $ pytest --collect-only test_scenarios.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items
     <Module 'test_scenarios.py'>
@@ -259,7 +259,7 @@ Let's first see how it looks like at collection time::

     $ pytest test_backends.py --collect-only
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items
     <Module 'test_backends.py'>
@@ -320,7 +320,7 @@ The result of this test will be successful::

     $ pytest test_indirect_list.py --collect-only
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 1 items
     <Module 'test_indirect_list.py'>
@@ -447,7 +447,7 @@ If you run this with reporting for skips enabled::

     $ pytest -rs test_module.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

@@ -117,7 +117,7 @@ then the test collection looks like this::

     $ pytest --collect-only
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 2 items
     <Module 'check_myapp.py'>
@@ -163,7 +163,7 @@ You can always peek at the collection tree without running tests like this::

     . $ pytest --collect-only pythoncollection.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 3 items
     <Module 'CWD/pythoncollection.py'>
@@ -230,7 +230,7 @@ will be left out::

     $ pytest --collect-only
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 0 items

@@ -13,7 +13,7 @@ get on the terminal - we are working on that):

     assertion $ pytest failure_demo.py
     ======= test session starts ========
-    platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+    platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
     rootdir: $REGENDOC_TMPDIR/assertion, inifile:
     collected 42 items

@@ -361,7 +361,7 @@ get on the terminal - we are working on that):
     >       int(s)
     E       ValueError: invalid literal for int() with base 10: 'qwe'

-    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1174>:1: ValueError
+    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1189>:1: ValueError
     _______ TestRaises.test_raises_doesnt ________

     self = <failure_demo.TestRaises object at 0xdeadbeef>
@ -108,7 +108,7 @@ directory with the above conftest.py::
|
||||||
|
|
||||||
$ pytest
|
$ pytest
|
||||||
======= test session starts ========
|
======= test session starts ========
|
||||||
platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
|
platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
|
||||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||||
collected 0 items
|
collected 0 items
|
||||||
|
|
||||||
|
@ -156,7 +156,7 @@ and when running it will see a skipped "slow" test::
|
||||||
|
|
||||||
$ pytest -rs # "-rs" means report details on the little 's'
|
$ pytest -rs # "-rs" means report details on the little 's'
|
||||||
======= test session starts ========
|
======= test session starts ========
|
||||||
platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
|
platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
|
||||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||||
collected 2 items
|
collected 2 items
|
||||||
|
|
||||||
|
@@ -170,7 +170,7 @@ Or run it including the ``slow`` marked test::

 $ pytest --runslow
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items

@@ -284,7 +284,7 @@ which will add the string to the test header accordingly::

 $ pytest
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 project deps: mylib-1.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 0 items
@@ -308,7 +308,7 @@ which will add info only when run with "--v"::

 $ pytest -v
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
 cachedir: .cache
 info1: did you know that ...
 did you?
@@ -321,7 +321,7 @@ and nothing when run plainly::

 $ pytest
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 0 items

@@ -354,7 +354,7 @@ Now we can profile which test functions execute the slowest::

 $ pytest --durations=3
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 3 items

@@ -416,7 +416,7 @@ If we run this::

 $ pytest -rx
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 4 items

@@ -487,7 +487,7 @@ We can run this::

 $ pytest
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 7 items

@@ -591,7 +591,7 @@ and run them::

 $ pytest test_module.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items

@@ -681,7 +681,7 @@ and run it::

 $ pytest -s test_module.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 3 items

@@ -70,7 +70,7 @@ marked ``smtp`` fixture function. Running the test looks like this::

 $ pytest test_smtpsimple.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items

@@ -188,7 +188,7 @@ inspect what is going on and can now run the tests::

 $ pytest test_module.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items

@@ -516,7 +516,7 @@ Running the above tests results in the following test IDs being used::

 $ pytest --collect-only
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 11 items
 <Module 'test_anothersmtp.py'>
@@ -569,7 +569,7 @@ Here we declare an ``app`` fixture which receives the previously defined

 $ pytest -v test_appsetup.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
 cachedir: .cache
 rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 2 items
@@ -638,7 +638,7 @@ Let's run the tests in verbose mode and with looking at the print-output::

 $ pytest -v -s test_module.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
 cachedir: .cache
 rootdir: $REGENDOC_TMPDIR, inifile:
 collecting ... collected 8 items

@@ -27,7 +27,7 @@ Installation options::
 To check your installation has installed the correct version::

 $ pytest --version
-This is pytest version 3.0.0, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py
+This is pytest version 3.0.1, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py

 If you get an error checkout :ref:`installation issues`.

@@ -49,7 +49,7 @@ That's it. You can execute the test function now::

 $ pytest
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items

@@ -55,7 +55,7 @@ them in turn::

 $ pytest
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 3 items

@@ -103,7 +103,7 @@ Let's run this::

 $ pytest
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 3 items

@@ -224,7 +224,7 @@ Running it with the report-on-xfail option gives this output::

 example $ pytest -rx xfail_demo.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR/example, inifile:
 collected 7 items

@@ -29,7 +29,7 @@ Running this would result in a passed test except for the last

 $ pytest test_tmpdir.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 1 items

@@ -88,7 +88,7 @@ the ``self.db`` values in the traceback::

 $ pytest test_unittest_db.py
 ======= test session starts ========
-platform linux -- Python 3.5.2, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
+platform linux -- Python 3.5.2, pytest-3.0.1, py-1.4.31, pluggy-0.3.1
 rootdir: $REGENDOC_TMPDIR, inifile:
 collected 2 items

@@ -293,7 +293,7 @@ can use like this::
 pass
 """)
 result = testdir.runpytest("--verbose")
-result.fnmatch_lines("""
+result.stdout.fnmatch_lines("""
 test_example*
 """)

@@ -763,3 +763,21 @@ class TestDurationWithFixture:
 * call *test_1*
 """)

+
+def test_zipimport_hook(testdir, tmpdir):
+    """Test package loader is being used correctly (see #1837)."""
+    zipapp = pytest.importorskip('zipapp')
+    testdir.tmpdir.join('app').ensure(dir=1)
+    testdir.makepyfile(**{
+        'app/foo.py': """
+            import pytest
+            def main():
+                pytest.main(['--pyarg', 'foo'])
+        """,
+    })
+    target = tmpdir.join('foo.zip')
+    zipapp.create_archive(str(testdir.tmpdir.join('app')), str(target), main='foo:main')
+    result = testdir.runpython(target)
+    assert result.ret == 0
+    result.stderr.fnmatch_lines(['*not found*foo*'])
+    assert 'INTERNALERROR>' not in result.stdout.str()
@@ -889,6 +889,33 @@ class TestMetafuncFunctional:
             "*test_function*advanced*FAILED",
         ])

+    def test_fixture_parametrized_empty_ids(self, testdir):
+        """Fixtures parametrized with empty ids cause an internal error (#1849)."""
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.fixture(scope="module", ids=[], params=[])
+            def temp(request):
+                return request.param
+
+            def test_temp(temp):
+                pass
+        ''')
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(['* 1 skipped *'])
+
+    def test_parametrized_empty_ids(self, testdir):
+        """Tests parametrized with empty ids cause an internal error (#1849)."""
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.mark.parametrize('temp', [], ids=list())
+            def test_temp(temp):
+                pass
+        ''')
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(['* 1 skipped *'])
+
     def test_parametrize_with_identical_ids_get_unique_names(self, testdir):
         testdir.makepyfile("""
             import pytest
@@ -930,43 +957,6 @@ class TestMetafuncFunctional:
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=5)

-    def test_parametrize_issue634(self, testdir):
-        testdir.makepyfile('''
-            import pytest
-
-            @pytest.fixture(scope='module')
-            def foo(request):
-                print('preparing foo-%d' % request.param)
-                return 'foo-%d' % request.param
-
-
-            def test_one(foo):
-                pass
-
-
-            def test_two(foo):
-                pass
-
-
-            test_two.test_with = (2, 3)
-
-
-            def pytest_generate_tests(metafunc):
-                params = (1, 2, 3, 4)
-                if not 'foo' in metafunc.fixturenames:
-                    return
-
-                test_with = getattr(metafunc.function, 'test_with', None)
-                if test_with:
-                    params = test_with
-                metafunc.parametrize('foo', params, indirect=True)
-
-        ''')
-        result = testdir.runpytest("-s")
-        output = result.stdout.str()
-        assert output.count('preparing foo-2') == 1
-        assert output.count('preparing foo-3') == 1
-
     def test_parametrize_issue323(self, testdir):
         testdir.makepyfile("""
            import pytest
@@ -1047,6 +1037,125 @@ class TestMetafuncFunctional:
         assert expectederror in failures[0].longrepr.reprcrash.message


+class TestMetafuncFunctionalAuto:
+    """
+    Tests related to automatically find out the correct scope for parametrized tests (#1832).
+    """
+
+    def test_parametrize_auto_scope(self, testdir):
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.fixture(scope='session', autouse=True)
+            def fixture():
+                return 1
+
+            @pytest.mark.parametrize('animal', ["dog", "cat"])
+            def test_1(animal):
+                assert animal in ('dog', 'cat')
+
+            @pytest.mark.parametrize('animal', ['fish'])
+            def test_2(animal):
+                assert animal == 'fish'
+
+        ''')
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(['* 3 passed *'])
+
+    def test_parametrize_auto_scope_indirect(self, testdir):
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.fixture(scope='session')
+            def echo(request):
+                return request.param
+
+            @pytest.mark.parametrize('animal, echo', [("dog", 1), ("cat", 2)], indirect=['echo'])
+            def test_1(animal, echo):
+                assert animal in ('dog', 'cat')
+                assert echo in (1, 2, 3)
+
+            @pytest.mark.parametrize('animal, echo', [('fish', 3)], indirect=['echo'])
+            def test_2(animal, echo):
+                assert animal == 'fish'
+                assert echo in (1, 2, 3)
+        ''')
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(['* 3 passed *'])
+
+    def test_parametrize_auto_scope_override_fixture(self, testdir):
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.fixture(scope='session', autouse=True)
+            def animal():
+                return 'fox'
+
+            @pytest.mark.parametrize('animal', ["dog", "cat"])
+            def test_1(animal):
+                assert animal in ('dog', 'cat')
+        ''')
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(['* 2 passed *'])
+
+    def test_parametrize_all_indirects(self, testdir):
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.fixture()
+            def animal(request):
+                return request.param
+
+            @pytest.fixture(scope='session')
+            def echo(request):
+                return request.param
+
+            @pytest.mark.parametrize('animal, echo', [("dog", 1), ("cat", 2)], indirect=True)
+            def test_1(animal, echo):
+                assert animal in ('dog', 'cat')
+                assert echo in (1, 2, 3)
+
+            @pytest.mark.parametrize('animal, echo', [("fish", 3)], indirect=True)
+            def test_2(animal, echo):
+                assert animal == 'fish'
+                assert echo in (1, 2, 3)
+        ''')
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(['* 3 passed *'])
+
+    def test_parametrize_issue634(self, testdir):
+        testdir.makepyfile('''
+            import pytest
+
+            @pytest.fixture(scope='module')
+            def foo(request):
+                print('preparing foo-%d' % request.param)
+                return 'foo-%d' % request.param
+
+            def test_one(foo):
+                pass
+
+            def test_two(foo):
+                pass
+
+            test_two.test_with = (2, 3)
+
+            def pytest_generate_tests(metafunc):
+                params = (1, 2, 3, 4)
+                if not 'foo' in metafunc.fixturenames:
+                    return
+
+                test_with = getattr(metafunc.function, 'test_with', None)
+                if test_with:
+                    params = test_with
+                metafunc.parametrize('foo', params, indirect=True)
+        ''')
+        result = testdir.runpytest("-s")
+        output = result.stdout.str()
+        assert output.count('preparing foo-2') == 1
+        assert output.count('preparing foo-3') == 1
+
+
 class TestMarkersWithParametrization:
     pytestmark = pytest.mark.issue308
     def test_simple_mark(self, testdir):
|
@ -571,6 +571,19 @@ def test_importorskip_dev_module(monkeypatch):
|
||||||
pytest.fail("spurious skip")
|
pytest.fail("spurious skip")
|
||||||
|
|
||||||
|
|
||||||
|
def test_importorskip_module_level(testdir):
|
||||||
|
"""importorskip must be able to skip entire modules when used at module level"""
|
||||||
|
testdir.makepyfile('''
|
||||||
|
import pytest
|
||||||
|
foobarbaz = pytest.importorskip("foobarbaz")
|
||||||
|
|
||||||
|
def test_foo():
|
||||||
|
pass
|
||||||
|
''')
|
||||||
|
result = testdir.runpytest()
|
||||||
|
result.stdout.fnmatch_lines(['*collected 0 items / 1 skipped*'])
|
||||||
|
|
||||||
|
|
||||||
def test_pytest_cmdline_main(testdir):
|
def test_pytest_cmdline_main(testdir):
|
||||||
p = testdir.makepyfile("""
|
p = testdir.makepyfile("""
|
||||||
import pytest
|
import pytest
|
||||||
|
|
|
@@ -967,5 +967,5 @@ def test_module_level_skip_error(testdir):
     """)
     result = testdir.runpytest()
     result.stdout.fnmatch_lines(
-        "*Using @pytest.skip outside a test * is not allowed*"
+        "*Using @pytest.skip outside of a test * is not allowed*"
     )
tox.ini (2 changed lines)
@@ -41,7 +41,7 @@ basepython = python2.7
 deps = flake8
     restructuredtext_lint
 commands = flake8 pytest.py _pytest testing
-    rst-lint CHANGELOG.rst
+    rst-lint CHANGELOG.rst HOWTORELEASE.rst

 [testenv:py27-xdist]
 deps=pytest-xdist>=1.13