======================================================
**funcargs**: test setup and parametrization
======================================================

Since version 1.0 py.test automatically discovers and
manages test function arguments. The mechanism
naturally connects to the automatic discovery of
test files, classes and functions. Automatic test discovery
follows the `Convention over Configuration`_ principle.
By discovering and calling functions ("funcarg providers") that
provide values for your actual test functions
it becomes easy to:

* separate test function code from test state setup/fixtures
* manage test value setup and teardown depending on
  command line options or configuration
* parametrize multiple runs of the same test functions
* present useful debug info if setting up test state goes wrong

Using funcargs, test functions become more expressive,
more "templaty" and more test-aspect oriented. In fact,
funcarg mechanisms are meant to be complete and
convenient enough to

* substitute and improve on most usages of `xUnit style`_ setup.
  For a simple example of how funcargs compare
  to xUnit setup, see the `blog post about
  the monkeypatch funcarg`_.

* substitute and improve on all usages of `old-style generative tests`_,
  i.e. test functions that use the "yield" statement.
  Using yield in test functions is deprecated since 1.0.

.. _`blog post about the monkeypatch funcarg`: http://tetamap.wordpress.com/2009/03/03/monkeypatching-in-unit-tests-done-right/
.. _`xUnit style`: xunit_setup.html
.. _`old-style generative tests`: features.html#generative-tests
.. _`funcarg provider`:

funcarg providers: setting up test function arguments
==============================================================

Test functions can specify one or more arguments ("funcargs")
and a test module or plugin can define functions that provide
the function argument. Let's look at a simple self-contained
example that you can put into a test module:

.. sourcecode:: python

    # ./test_simpleprovider.py

    def pytest_funcarg__myfuncarg(request):
        return 42

    def test_function(myfuncarg):
        assert myfuncarg == 17

If you run this with ``py.test test_simpleprovider.py`` you see something like this:

.. sourcecode:: python

    ============================ test session starts ============================
    python: platform linux2 -- Python 2.6.2
    test object 1: /home/hpk/hg/py/trunk/example/funcarg/test_simpleprovider.py

    test_simpleprovider.py F

    ================================= FAILURES ==================================
    _______________________________ test_function _______________________________

    myfuncarg = 42

        def test_function(myfuncarg):
    >       assert myfuncarg == 17
    E       assert 42 == 17

    test_simpleprovider.py:6: AssertionError
    ========================= 1 failed in 0.11 seconds ==========================

This means that the test function got executed and the assertion failed.
Here is how py.test comes to execute this test function:

1. py.test discovers the ``test_function`` because of the ``test_`` prefix.
   The test function needs a function argument named ``myfuncarg``.
   A matching provider function is discovered by looking for the special
   name ``pytest_funcarg__myfuncarg``.

2. ``pytest_funcarg__myfuncarg(request)`` is called and
   returns the value for ``myfuncarg``.

3. The ``test_function(42)`` call is executed.

Note that if you misspell a function argument or want
to use one that isn't available, an error with a list of
available function arguments is shown.

For more interesting provider functions that make good use of the
`request object`_ please see the `application setup tutorial example`_.

.. _`request object`:

funcarg request objects
------------------------------------------

Request objects are passed to funcarg providers. They
encapsulate a request for a function argument for a
specific test function. Request objects allow providers
to access test configuration and test context:

``request.function``: python function object requesting the argument
``request.cls``: class object where the test function is defined in, or None
``request.module``: module object where the test function is defined in
``request.config``: access to command line opts and general config
``request.param``: if present, it was passed by a `parametrizing test generator`_

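As an illustration, here is a minimal sketch of a provider that uses a few
request attributes (the ``logfile`` funcarg name and the file naming scheme
are made up for this example):

.. sourcecode:: python

    def pytest_funcarg__logfile(request):
        # one log file per requesting test function, named after
        # the test module and test function
        basename = "%s_%s.log" % (request.module.__name__,
                                  request.function.__name__)
        logfile = open(basename, "w")
        # close the file again once the test function has finished
        request.addfinalizer(lambda: logfile.close())
        return logfile
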
perform scoped setup and teardown
---------------------------------------------

.. sourcecode:: python

    def cached_setup(setup, teardown=None, scope="module", keyextra=None):
        """ cache and return result of calling setup().

        The scope determines the cache key and ``keyextra`` adds to the cache key.
        The scope also determines when teardown(result) will be called.

        valid scopes:
            scope == 'function': when the single test function run finishes.
            scope == 'module': when tests in a different module are run
            scope == 'session': when tests of the session have run.
        """

Example of providing a value that is to be set up only once during a test session:

.. sourcecode:: python

    def pytest_funcarg__db(request):
        return request.cached_setup(
            lambda: ExpensiveSetup(request.config.option.db),
            lambda val: val.close(),
            scope="session"
        )

cleanup after test function execution
---------------------------------------------

.. sourcecode:: python

    def addfinalizer(func, scope="function"):
        """ register calling a finalizer function.

        scope == 'function': when the single test function run finishes.
        scope == 'module': when tests in a different module are run
        scope == 'session': when tests of the session have run.
        """

Calling ``request.addfinalizer()`` is useful for scheduling teardown
functions. The given scope determines when the teardown function
will be called. Here is a basic example for providing a ``myfile``
object that is to be closed when the test function finishes.

.. sourcecode:: python

    def pytest_funcarg__myfile(request):
        # ... create and open a unique per-function "myfile" object ...
        request.addfinalizer(lambda: myfile.close())
        return myfile

requesting values of other funcargs
---------------------------------------------

Inside a funcarg provider you sometimes may want to use a
different function argument, whether or not it is specified
for the test function itself. For such purposes you can
dynamically request a funcarg value:

.. sourcecode:: python

    def getfuncargvalue(name):
        """ Lookup and call function argument provider for the given name.

        Each function argument is only requested once per function setup.
        """

You can also use this function if you want to `decorate a funcarg`_
locally, i.e. you want to provide the normal value but add/do something
extra. If a provider cannot be found, a ``request.Error`` exception will be
raised.

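Here is a minimal self-contained sketch (the ``Database`` class and funcarg
names are invented for this example) that uses ``getfuncargvalue()`` to build
one funcarg on top of another:

.. sourcecode:: python

    class Database:
        """ stand-in application object, only for this sketch. """
        def __init__(self):
            self.records = []
        def populate_example_data(self):
            self.records.append("example")

    def pytest_funcarg__db(request):
        return Database()

    def pytest_funcarg__populated_db(request):
        # dynamically look up the plain "db" funcarg and refine its value
        db = request.getfuncargvalue("db")
        db.populate_example_data()
        return db

    def test_populated(populated_db):
        assert populated_db.records == ["example"]
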
.. _`test generators`:
.. _`parametrizing test generator`:

generating parametrized tests with funcargs
===========================================================

You can directly parametrize multiple runs of the same test
function by adding new test function calls with different
function argument values. Let's look at a simple self-contained
example:

.. sourcecode:: python

    # ./test_example.py

    def pytest_generate_tests(metafunc):
        if "numiter" in metafunc.funcargnames:
            for i in range(10):
                metafunc.addcall(funcargs=dict(numiter=i))

    def test_func(numiter):
        assert numiter < 9

If you run this with ``py.test test_example.py`` you'll get:

.. sourcecode:: python

    ================================= test session starts =================================
    python: platform linux2 -- Python 2.6.2
    test object 1: /home/hpk/hg/py/trunk/test_example.py

    test_example.py .........F

    ====================================== FAILURES =======================================
    _______________________________ test_func.test_func[9] ________________________________

    numiter = 9

        def test_func(numiter):
    >       assert numiter < 9
    E       assert 9 < 9

    /home/hpk/hg/py/trunk/test_example.py:10: AssertionError

Here is what happens in detail:

1. ``pytest_generate_tests(metafunc)`` hook is called once for each test
   function. It adds ten new function calls with explicit function arguments.

2. **execute tests**: ``test_func(numiter)`` is called ten times with
   ten different arguments.

.. _`metafunc object`:

test generators and metafunc objects
-------------------------------------------

metafunc objects are passed to the ``pytest_generate_tests`` hook.
They help to inspect a test function and to generate tests
according to test configuration or values specified
in the class or module where a test function is defined:

``metafunc.funcargnames``: set of required function arguments for the given function
``metafunc.function``: underlying python test function
``metafunc.cls``: class object where the test function is defined in, or None
``metafunc.module``: the module object where the test function is defined in
``metafunc.config``: access to command line opts and general config

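For instance, a generator can consult the defining module for data to
parametrize with. The following is a small sketch (the module name and the
``number`` argument are made up for this example):

.. sourcecode:: python

    # ./test_params.py

    numbers = [1, 2, 3]

    def pytest_generate_tests(metafunc):
        # inspect the requesting test function and its defining module
        if "number" in metafunc.funcargnames:
            for value in getattr(metafunc.module, "numbers", []):
                metafunc.addcall(funcargs=dict(number=value))

    def test_positive(number):
        assert number > 0
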
the ``metafunc.addcall()`` method
-----------------------------------------------

.. sourcecode:: python

    def addcall(funcargs={}, id=None, param=None):
        """ trigger a new test function call. """

``funcargs`` can be a dictionary of argument names
mapped to values - providing it is called *direct parametrization*.

If you provide an ``id`` it will be used for reporting
and identification purposes. If you don't supply an ``id``
the stringified counter of the list of added calls will be used.
``id`` values need to be unique between all
invocations for a given test function.

``param``, if specified, will be seen by any
`funcarg provider`_ as a ``request.param`` attribute.
Setting it is called *indirect parametrization*.
Indirect parametrization is preferable if test values are
expensive to set up or can only be created in certain environments
(see the sketch below).

Test generators and thus ``addcall()`` invocations are performed
during test collection, which is separate from the actual test
setup and test run phase. With distributed testing, collection
and test setup/run happen in different processes.

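Here is a minimal sketch of indirect parametrization (the
``ExpensiveResource`` class and funcarg names are invented for this
example): ``addcall(param=...)`` only records the parameter at collection
time, and the provider turns it into a real value at test setup time via
``request.param``.

.. sourcecode:: python

    # ./test_indirect.py

    class ExpensiveResource:  # hypothetical stand-in for costly setup
        def __init__(self, flavour):
            self.flavour = flavour

    def pytest_generate_tests(metafunc):
        if "resource" in metafunc.funcargnames:
            for flavour in ("standard", "exotic"):
                # only the parameter is stored at collection time
                metafunc.addcall(id=flavour, param=flavour)

    def pytest_funcarg__resource(request):
        # the expensive object is only created at test setup time
        return ExpensiveResource(request.param)

    def test_resource(resource):
        assert resource.flavour in ("standard", "exotic")
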
.. _`tutorial examples`:

Funcarg Tutorial Examples
=======================================

.. _`application setup tutorial example`:

application specific test setup
---------------------------------------------------------

Here is a basic, useful, step-wise example for handling application
specific test setup. The goal is to have one place for the glue and test
support code that bootstraps and configures application objects, and to
allow test modules and test functions to stay ignorant of the involved
details.

step 1: use and implement a test/app-specific "mysetup"
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Let's write a simple test function living in a test file
``test_sample.py`` that uses a ``mysetup`` funcarg for accessing test
specific setup.

.. sourcecode:: python

    # ./test_sample.py

    def test_answer(mysetup):
        app = mysetup.myapp()
        answer = app.question()
        assert answer == 42

To run this test py.test needs to find and call a provider to
obtain the required ``mysetup`` function argument. The test
function interacts with the provided application specific setup.

To provide the ``mysetup`` function argument we write down
a provider method in a `local plugin`_ by putting the
following code into a local ``conftest.py``:

.. sourcecode:: python

    # ./conftest.py

    from myapp import MyApp

    class ConftestPlugin:
        def pytest_funcarg__mysetup(self, request):
            return MySetup()

    class MySetup:
        def myapp(self):
            return MyApp()

To run the example we represent our application by putting a pseudo MyApp
object into ``myapp.py``:

.. sourcecode:: python

    # ./myapp.py

    class MyApp:
        def question(self):
            return 6 * 9

You can now run the test with ``py.test test_sample.py`` which will
show this failure:

.. sourcecode:: python

    ========================= test session starts =========================
    python: platform linux2 -- Python 2.6.2
    test object 1: /home/hpk/hg/py/trunk/example/funcarg/mysetup

    test_sample.py F

    ============================== FAILURES ===============================
    _____________________________ test_answer _____________________________

    mysetup = <mysetup.conftest.MySetup instance at 0xa020eac>

        def test_answer(mysetup):
            app = mysetup.myapp()
            answer = app.question()
    >       assert answer == 42
    E       assert 54 == 42

    test_sample.py:5: AssertionError
    ====================== 1 failed in 0.11 seconds =======================

This means that our ``mysetup`` object was successfully instantiated,
we asked it to provide an application instance and checking
its ``question`` method resulted in the wrong answer. If you are
confused as to what the concrete question or answers actually mean,
please see here_ :) Otherwise proceed to step 2.

.. _here: http://uncyclopedia.wikia.com/wiki/The_Hitchhiker's_Guide_to_the_Galaxy
.. _`local plugin`: extend.html#local-plugin

step 2: adding a command line option
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

If you provide a "funcarg" from a plugin you can easily make methods
depend on command line options or environment settings.
To add a command line option we update the ``conftest.py`` of
the previous example to add a command line option
and to offer a new mysetup method:

.. sourcecode:: python

    # ./conftest.py

    import py
    from myapp import MyApp

    class ConftestPlugin:
        def pytest_funcarg__mysetup(self, request):
            return MySetup(request)

        def pytest_addoption(self, parser):
            parser.addoption("--ssh", action="store", default=None,
                help="specify ssh host to run tests with")

    class MySetup:
        def __init__(self, request):
            self.config = request.config

        def myapp(self):
            return MyApp()

        def getsshconnection(self):
            host = self.config.option.ssh
            if host is None:
                py.test.skip("specify ssh host with --ssh")
            return py.execnet.SshGateway(host)

Now any test function can use the ``mysetup.getsshconnection()`` method like this:

.. sourcecode:: python

    # ./test_ssh.py

    class TestClass:
        def test_function(self, mysetup):
            conn = mysetup.getsshconnection()
            # work with conn

Running ``py.test test_ssh.py`` without specifying a command line option will result in a skipped ``test_function``:

.. sourcecode:: python

    ========================= test session starts =========================
    python: platform linux2 -- Python 2.6.2
    test object 1: test_ssh.py

    test_ssh.py s

    ________________________ skipped test summary _________________________
    conftest.py:23: [1] Skipped: 'specify ssh host with --ssh'
    ====================== 1 skipped in 0.11 seconds ======================

Note especially how the test function stays clear of knowing how to
construct test state values or when and with what message to skip. The test
function can concentrate on actual test code while test state providers
interact with the execution of tests.

If you specify a command line option like ``py.test --ssh=python.org`` the
test will get un-skipped and actually execute.

.. _`accept example`:

example: specifying and selecting acceptance tests
--------------------------------------------------------------

.. sourcecode:: python

    # ./conftest.py

    import py

    class ConftestPlugin:
        def pytest_addoption(self, parser):
            group = parser.getgroup("myproject")
            group.addoption("-A", dest="acceptance", action="store_true",
                help="run (slow) acceptance tests")

        def pytest_funcarg__accept(self, request):
            return AcceptFuncarg(request)

    class AcceptFuncarg:
        def __init__(self, request):
            if not request.config.option.acceptance:
                py.test.skip("specify -A to run acceptance tests")
            self.tmpdir = request.config.mktemp(request.function.__name__, numbered=True)

        def run(self, cmd):
            """ called by test code to execute an acceptance test. """
            self.tmpdir.chdir()
            return py.process.cmdexec(cmd)

and the actual test function example:

.. sourcecode:: python

    def test_some_acceptance_aspect(accept):
        accept.tmpdir.mkdir("somesub")
        result = accept.run("ls -la")
        assert "somesub" in result

If you run this test without specifying a command line option
the test will get skipped with an appropriate message. Otherwise
you can start to add convenience and test support methods
to your AcceptFuncarg and drive running of tools or
applications and provide ways to do assertions about
the output.

.. _`decorate a funcarg`:

example: decorating a funcarg in a test module
--------------------------------------------------------------

For larger scale setups it's sometimes useful to decorate
a funcarg just for a particular test module. We can
extend the `accept example`_ by putting this in our test module:

.. sourcecode:: python

    def pytest_funcarg__accept(request):
        arg = request.getfuncargvalue("accept")  # call the next provider
        # create a special layout in our tempdir
        arg.tmpdir.mkdir("special")
        return arg

    class TestSpecialAcceptance:
        def test_sometest(self, accept):
            assert accept.tmpdir.join("special").check()

Our module level provider will be invoked first and it can
ask its request object to call the next provider and then
decorate its result. This mechanism allows us to stay
ignorant of how/where the function argument is provided -
in our example it comes from a ConftestPlugin but it could be any plugin.

sidenote: the temporary directories used here are instances of
the `py.path.local`_ class which provides many of the os.path
methods in a convenient way.

.. _`py.path.local`: ../path.html#local

Questions and Answers
==================================

.. _`why pytest_pyfuncarg__ methods?`:

Why ``pytest_funcarg__*`` methods?
------------------------------------

When experimenting with funcargs we also
considered an explicit registration mechanism, i.e. calling a register
method on the config object. But lacking a good use case for this
indirection and flexibility we decided to go for `Convention over
Configuration`_ and allow providers to be specified directly. It has the
positive implication that you should be able to "grep" for
``pytest_funcarg__MYARG`` and will find all providing sites (usually
exactly one).

.. _`Convention over Configuration`: http://en.wikipedia.org/wiki/Convention_over_Configuration