===========================
Testing Django applications
===========================

.. module:: django.test
   :synopsis: Testing tools for Django applications.

.. seealso::

    The :doc:`testing tutorial </intro/tutorial05>` and the
    :doc:`advanced testing topics </topics/testing/advanced>`.

This document is split into two primary sections. First, we explain how to
write tests with Django. Then, we explain how to run them.

Writing tests
=============

Django's unit tests use a Python standard library module: :mod:`unittest`.
This module defines tests using a class-based approach.

.. admonition:: unittest2

    .. deprecated:: 1.7

    Python 2.7 introduced some major changes to the ``unittest`` library,
    adding some extremely useful features. To ensure that every Django
    project could benefit from these new features, Django used to ship with
    a copy of Python 2.7's ``unittest`` backported for Python 2.6
    compatibility.

    Since Django no longer supports Python versions older than 2.7,
    ``django.utils.unittest`` is deprecated. Simply use ``unittest``.

.. _unittest2: http://pypi.python.org/pypi/unittest2

Here is an example which subclasses from :class:`django.test.TestCase`,
which is a subclass of :class:`unittest.TestCase` that runs each test inside
a transaction to provide isolation::

    from django.test import TestCase
    from myapp.models import Animal

    class AnimalTestCase(TestCase):
        def setUp(self):
            Animal.objects.create(name="lion", sound="roar")
            Animal.objects.create(name="cat", sound="meow")

        def test_animals_can_speak(self):
            """Animals that can speak are correctly identified"""
            lion = Animal.objects.get(name="lion")
            cat = Animal.objects.get(name="cat")
            self.assertEqual(lion.speak(), 'The lion says "roar"')
            self.assertEqual(cat.speak(), 'The cat says "meow"')

When you :ref:`run your tests <running-tests>`, the default behavior of the
test utility is to find all the test cases (that is, subclasses of
:class:`unittest.TestCase`) in any file whose name begins with ``test``,
automatically build a test suite out of those test cases, and run that suite.

.. versionchanged:: 1.6

    Previously, Django's default test runner only discovered tests in
    ``tests.py`` and ``models.py`` files within a Python package listed in
    :setting:`INSTALLED_APPS`.

For more details about :mod:`unittest`, see the Python documentation.

.. warning::

    If your tests rely on database access such as creating or querying
    models, be sure to create your test classes as subclasses of
    :class:`django.test.TestCase` rather than :class:`unittest.TestCase`.

    Using :class:`unittest.TestCase` avoids the cost of running each test in
    a transaction and flushing the database, but if your tests interact with
    the database their behavior will vary based on the order that the test
    runner executes them. This can lead to unit tests that pass when run in
    isolation but fail when run in a suite.

.. _running-tests:

Running tests
=============

Once you've written tests, run them using the :djadmin:`test` command of
your project's ``manage.py`` utility::

    $ ./manage.py test

Test discovery is based on the unittest module's `built-in test discovery`_.
By default, this will discover tests in any file named ``test*.py`` under the
current working directory.

.. _built-in test discovery: http://docs.python.org/2/library/unittest.html#test-discovery

You can specify particular tests to run by supplying any number of "test
labels" to ``./manage.py test``. Each test label can be a full Python dotted
path to a package, module, ``TestCase`` subclass, or test method.

For instance::

    # Run all the tests in the animals.tests module
    $ ./manage.py test animals.tests

    # Run all the tests found within the 'animals' package
    $ ./manage.py test animals

    # Run just one test case
    $ ./manage.py test animals.tests.AnimalTestCase

    # Run just one test method
    $ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak

You can also provide a path to a directory to discover tests below that
directory::

    $ ./manage.py test animals/

You can specify a custom filename pattern with the ``-p`` (or ``--pattern``)
option, if your test files are named differently from the ``test*.py``
pattern::

    $ ./manage.py test --pattern="tests_*.py"

.. versionchanged:: 1.6

    Previously, test labels were in the form ``applabel``,
    ``applabel.TestCase``, or ``applabel.TestCase.test_method``, rather than
    being true Python dotted paths, and tests could only be found within
    ``tests.py`` or ``models.py`` files within a Python package listed in
    :setting:`INSTALLED_APPS`. The ``--pattern`` option and file paths as
    test labels are new in 1.6.

If you press ``Ctrl-C`` while the tests are running, the test runner will
wait for the currently running test to complete and then exit gracefully.
During a graceful exit the test runner will output details of any test
failures, report on how many tests were run and how many errors and failures
were encountered, and destroy any test databases as usual. Thus pressing
``Ctrl-C`` can be very useful if you forget to pass the
:djadminopt:`--failfast` option, notice that some tests are unexpectedly
failing, and want to get details on the failures without waiting for the
full test run to complete.

If you do not want to wait for the currently running test to finish, you can
press ``Ctrl-C`` a second time and the test run will halt immediately, but
not gracefully. No details of the tests run before the interruption will be
reported, and any test databases created by the run will not be destroyed.

.. admonition:: Test with warnings enabled

    It's a good idea to run your tests with Python warnings enabled:
    ``python -Wall manage.py test``. The ``-Wall`` flag tells Python to
    display deprecation warnings. Django, like many other Python libraries,
    uses these warnings to flag when features are going away. It also might
    flag areas in your code that aren't strictly wrong but could benefit
    from a better implementation.

.. _the-test-database:

The test database
-----------------

Tests that require a database (namely, model tests) will not use your "real"
(production) database. Separate, blank databases are created for the tests.

Regardless of whether the tests pass or fail, the test databases are
destroyed when all the tests have been executed.

By default the test databases get their names by prepending ``test_`` to the
value of the :setting:`NAME` setting for each database defined in
:setting:`DATABASES`. When using the SQLite database engine the tests will by
default use an in-memory database (i.e., the database will be created in
memory, bypassing the filesystem entirely!). If you want to use a different
database name, specify :setting:`TEST_NAME` in the dictionary for any given
database in :setting:`DATABASES`.

Aside from using a separate database, the test runner will otherwise use all
of the same database settings you have in your settings file:
:setting:`ENGINE <DATABASE-ENGINE>`, :setting:`USER`, :setting:`HOST`, etc.

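For instance, here is a sketch of a ``DATABASES`` entry that sets an explicit
test database name (the engine, project, and account names are hypothetical
placeholders)::

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'myproject',            # regular database name (placeholder)
            'USER': 'myproject_user',       # also used to create the test database
            'TEST_NAME': 'myproject_test',  # used instead of the default "test_myproject"
        },
    }
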
The test database is created by the user specified by :setting:`USER`, so
you'll need to make sure that the given user account has sufficient
privileges to create a new database on the system.

For fine-grained control over the character encoding of your test database,
use the :setting:`TEST_CHARSET` option. If you're using MySQL, you can also
use the :setting:`TEST_COLLATION` option to control the particular collation
used by the test database. See the :doc:`settings documentation
</ref/settings>` for details of these advanced settings.

.. admonition:: Finding data from your production database when running tests?

    If your code attempts to access the database when its modules are
    compiled, this will occur *before* the test database is set up, with
    potentially unexpected results. For example, if you have a database query
    in module-level code and a real database exists, production data could
    pollute your tests. *It is a bad idea to have such import-time database
    queries in your code* anyway -- rewrite your code so that it doesn't do
    this.

.. seealso::

    The :ref:`advanced multi-db testing topics <topics-testing-advanced-multidb>`.

.. _order-of-tests:

Order in which tests are executed
---------------------------------

In order to guarantee that all ``TestCase`` code starts with a clean
database, the Django test runner reorders tests in the following way:

* All :class:`~django.test.TestCase` subclasses are run first.

* Then, all other unit tests (including :class:`unittest.TestCase`,
  :class:`~django.test.SimpleTestCase` and
  :class:`~django.test.TransactionTestCase`) are run with no particular
  ordering guaranteed nor enforced among them.

* Then any other tests (e.g. doctests) that may alter the database without
  restoring it to its original state are run.

.. note::

    The new ordering of tests may reveal unexpected dependencies on test case
    ordering. This is the case with doctests that relied on state left in the
    database by a given :class:`~django.test.TransactionTestCase` test; they
    must be updated to be able to run independently.

Other test conditions
---------------------

Regardless of the value of the :setting:`DEBUG` setting in your configuration
file, all Django tests run with :setting:`DEBUG`\=False. This is to ensure
that the observed output of your code matches what will be seen in a
production setting.

Caches are not cleared after each test, and running ``manage.py test fooapp``
can insert data from the tests into the cache of a live system if you run
your tests in production, because, unlike databases, a separate "test cache"
is not used. This behavior `may change`_ in the future.

.. _may change: https://code.djangoproject.com/ticket/11505

Understanding the test output
-----------------------------

When you run your tests, you'll see a number of messages as the test runner
prepares itself. You can control the level of detail of these messages with
the ``verbosity`` option on the command line::

    Creating test database...
    Creating table myapp_animal
    Creating table myapp_mineral
    Loading 'initial_data' fixtures...
    No fixtures found.

This tells you that the test runner is creating a test database, as described
in the previous section.

Once the test database has been created, Django will run your tests.

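For example, to see more detail about what the runner is doing, you might
raise the verbosity level (a minimal sketch; ``--verbosity`` accepts values
from 0 to 3)::

    $ ./manage.py test --verbosity=2
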
If everything goes well, you'll see something like this::

    ----------------------------------------------------------------------
    Ran 22 tests in 0.221s

    OK

If there are test failures, however, you'll see full details about which
tests failed::

    ======================================================================
    FAIL: test_was_published_recently_with_future_poll (polls.tests.PollMethodTests)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/dev/mysite/polls/tests.py", line 16, in test_was_published_recently_with_future_poll
        self.assertEqual(future_poll.was_published_recently(), False)
    AssertionError: True != False

    ----------------------------------------------------------------------
    Ran 1 test in 0.003s

    FAILED (failures=1)

A full explanation of this error output is beyond the scope of this document,
but it's pretty intuitive. You can consult the documentation of Python's
:mod:`unittest` library for details.

Note that the return code for the test-runner script is 1 for any number of
failed and erroneous tests. If all the tests pass, the return code is 0. This
feature is useful if you're using the test-runner script in a shell script
and need to test for success or failure at that level.

Speeding up the tests
---------------------

In recent versions of Django, the default password hasher is rather slow by
design. If during your tests you are authenticating many users, you may want
to use a custom settings file and set the :setting:`PASSWORD_HASHERS` setting
to a faster hashing algorithm::

    PASSWORD_HASHERS = (
        'django.contrib.auth.hashers.MD5PasswordHasher',
    )

Don't forget to also include in :setting:`PASSWORD_HASHERS` any hashing
algorithm used in fixtures, if any.

Testing tools
=============

Django provides a small set of tools that come in handy when writing tests.

.. _test-client:

The test client
---------------

.. module:: django.test.client
   :synopsis: Django's test client.

The test client is a Python class that acts as a dummy Web browser, allowing
you to test your views and interact with your Django-powered application
programmatically.

Some of the things you can do with the test client are:

* Simulate GET and POST requests on a URL and observe the response --
  everything from low-level HTTP (result headers and status codes) to page
  content.

* Test that the correct view is executed for a given URL.

* Test that a given request is rendered by a given Django template, with a
  template context that contains certain values.

Note that the test client is not intended to be a replacement for Selenium_
or other "in-browser" frameworks. Django's test client has a different focus.
In short:

* Use Django's test client to establish that the correct view is being called
  and that the view is collecting the correct context data.

* Use in-browser frameworks like Selenium_ to test *rendered* HTML and the
  *behavior* of Web pages, namely JavaScript functionality. Django also
  provides special support for those frameworks; see the section on
  :class:`~django.test.LiveServerTestCase` for more details.

A comprehensive test suite should use a combination of both test types.

.. _Selenium: http://seleniumhq.org/

Overview and a quick example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the test client, instantiate ``django.test.client.Client`` and
retrieve Web pages::

    >>> from django.test.client import Client
    >>> c = Client()
    >>> response = c.post('/login/', {'username': 'john', 'password': 'smith'})
    >>> response.status_code
    200
    >>> response = c.get('/customer/details/')
    >>> response.content
    '<!DOCTYPE html...'

Take note of a few important things about how to use this class:

* When retrieving pages, remember to specify the *path* of the URL, not the
  whole domain. For example, this is correct::

      >>> c.get('/login/')

  This is incorrect::

      >>> c.get('http://www.example.com/login/')

  The test client is not capable of retrieving Web pages that are not powered
  by your Django project. If you need to retrieve other Web pages, use a
  Python standard library module such as :mod:`urllib` or :mod:`urllib2`.

* To resolve URLs, the test client uses whatever URLconf is pointed-to by
  your :setting:`ROOT_URLCONF` setting.

* Although the above example would work in the Python interactive
  interpreter, some of the test client's functionality, notably the
  template-related functionality, is only available *while tests are
  running*.

  The reason for this is that Django's test runner performs a bit of black
  magic in order to determine which template was loaded by a given view.
  This black magic (essentially a patching of Django's template system in
  memory) only happens during test running.

* By default, the test client will disable any CSRF checks performed by your
  site.

  If, for some reason, you *want* the test client to perform CSRF checks, you
  can create an instance of the test client that enforces CSRF checks. To do
  this, pass in the ``enforce_csrf_checks`` argument when you construct your
  client::

      >>> from django.test import Client
      >>> csrf_client = Client(enforce_csrf_checks=True)

Making requests
~~~~~~~~~~~~~~~

Use the ``django.test.client.Client`` class to make requests.

.. class:: Client(enforce_csrf_checks=False, **defaults)

    It requires no arguments at the time of construction. However, you can
    use keyword arguments to specify some default headers. For example, this
    will send a ``User-Agent`` HTTP header in each request::

        >>> c = Client(HTTP_USER_AGENT='Mozilla/5.0')

    The values from the ``extra`` keyword arguments passed to
    :meth:`~django.test.client.Client.get()`,
    :meth:`~django.test.client.Client.post()`, etc. have precedence over the
    defaults passed to the class constructor.

    The ``enforce_csrf_checks`` argument can be used to test CSRF protection
    (see above).

Once you have a ``Client`` instance, you can call any of the following
methods:

.. method:: Client.get(path, data={}, follow=False, **extra)

    Makes a GET request on the provided ``path`` and returns a ``Response``
    object, which is documented below.

    The key-value pairs in the ``data`` dictionary are used to create a GET
    data payload. For example::

        >>> c = Client()
        >>> c.get('/customers/details/', {'name': 'fred', 'age': 7})

    ...will result in the evaluation of a GET request equivalent to::

        /customers/details/?name=fred&age=7

    The ``extra`` keyword arguments parameter can be used to specify headers
    to be sent in the request. For example::

        >>> c = Client()
        >>> c.get('/customers/details/', {'name': 'fred', 'age': 7},
        ...       HTTP_X_REQUESTED_WITH='XMLHttpRequest')

    ...will send the HTTP header ``HTTP_X_REQUESTED_WITH`` to the details
    view, which is a good way to test code paths that use the
    :meth:`django.http.HttpRequest.is_ajax()` method.

    .. admonition:: CGI specification

        The headers sent via ``**extra`` should follow CGI_ specification.
        For example, emulating a different "Host" header as sent in the HTTP
        request from the browser to the server should be passed as
        ``HTTP_HOST``.

        .. _CGI: http://www.w3.org/CGI/

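    For instance, here is a sketch (reusing the hypothetical view above, with
    a placeholder host name) that emulates a request arriving for a different
    host::

        >>> c = Client()
        >>> c.get('/customers/details/', {'name': 'fred', 'age': 7},
        ...       HTTP_HOST='other.example.com')
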
    If you already have the GET arguments in URL-encoded form, you can use
    that encoding instead of using the ``data`` argument. For example, the
    previous GET request could also be posed as::

        >>> c = Client()
        >>> c.get('/customers/details/?name=fred&age=7')

    If you provide a URL with both encoded GET data and a ``data`` argument,
    the ``data`` argument will take precedence.

    If you set ``follow`` to ``True`` the client will follow any redirects,
    and a ``redirect_chain`` attribute will be set in the response object
    containing tuples of the intermediate URLs and status codes.

    If you had a URL ``/redirect_me/`` that redirected to ``/next/``, that
    redirected to ``/final/``, this is what you'd see::

        >>> response = c.get('/redirect_me/', follow=True)
        >>> response.redirect_chain
        [(u'http://testserver/next/', 302), (u'http://testserver/final/', 302)]

.. method:: Client.post(path, data={}, content_type=MULTIPART_CONTENT, follow=False, **extra)

    Makes a POST request on the provided ``path`` and returns a ``Response``
    object, which is documented below.

    The key-value pairs in the ``data`` dictionary are used to submit POST
    data. For example::

        >>> c = Client()
        >>> c.post('/login/', {'name': 'fred', 'passwd': 'secret'})

    ...will result in the evaluation of a POST request to this URL::

        /login/

    ...with this POST data::

        name=fred&passwd=secret

    If you provide ``content_type`` (e.g. :mimetype:`text/xml` for an XML
    payload), the contents of ``data`` will be sent as-is in the POST
    request, using ``content_type`` in the HTTP ``Content-Type`` header.

    If you don't provide a value for ``content_type``, the values in ``data``
    will be transmitted with a content type of
    :mimetype:`multipart/form-data`. In this case, the key-value pairs in
    ``data`` will be encoded as a multipart message and used to create the
    POST data payload.

    To submit multiple values for a given key -- for example, to specify the
    selections for a ``<select multiple>`` -- provide the values as a list or
    tuple for the required key.

Assertions
----------

In addition to the normal :mod:`unittest` assertion methods, Django's custom
test case classes provide a number of assertion methods that are useful for
testing Web applications. Some of them are described below.

.. currentmodule:: django.test

.. method:: SimpleTestCase.assertHTMLEqual(html1, html2, msg=None)

    Asserts that the strings ``html1`` and ``html2`` are equal. The
    comparison is based on HTML semantics, so differences in whitespace and
    attribute ordering are, in most cases, not significant.

    ``html1`` and ``html2`` must be valid HTML. An ``AssertionError`` will be
    raised if one of them cannot be parsed.

    Output in case of error can be customized with the ``msg`` argument.

.. method:: SimpleTestCase.assertHTMLNotEqual(html1, html2, msg=None)

    Asserts that the strings ``html1`` and ``html2`` are *not* equal. The
    comparison is based on HTML semantics. See
    :meth:`~SimpleTestCase.assertHTMLEqual` for details.

    ``html1`` and ``html2`` must be valid HTML. An ``AssertionError`` will be
    raised if one of them cannot be parsed.

    Output in case of error can be customized with the ``msg`` argument.

.. method:: SimpleTestCase.assertXMLEqual(xml1, xml2, msg=None)

    Asserts that the strings ``xml1`` and ``xml2`` are equal. The comparison
    is based on XML semantics. Similarly to
    :meth:`~SimpleTestCase.assertHTMLEqual`, the comparison is made on parsed
    content, hence only semantic differences are considered, not syntax
    differences. When invalid XML is passed in any parameter, an
    ``AssertionError`` is always raised, even if both strings are identical.

    Output in case of error can be customized with the ``msg`` argument.

.. method:: SimpleTestCase.assertXMLNotEqual(xml1, xml2, msg=None)

    Asserts that the strings ``xml1`` and ``xml2`` are *not* equal. The
    comparison is based on XML semantics. See
    :meth:`~SimpleTestCase.assertXMLEqual` for details.

    Output in case of error can be customized with the ``msg`` argument.

.. method:: SimpleTestCase.assertInHTML(needle, haystack, count=None, msg_prefix='')

    Asserts that the HTML fragment ``needle`` is contained in the
    ``haystack`` fragment.

    If the ``count`` integer argument is specified, then additionally the
    number of ``needle`` occurrences will be strictly verified.

    Whitespace in most cases is ignored, and attribute ordering is not
    significant. The passed-in arguments must be valid HTML.

.. method:: SimpleTestCase.assertJSONEqual(raw, expected_data, msg=None)

    Asserts that the JSON fragments ``raw`` and ``expected_data`` are equal.
    Usual JSON non-significant whitespace rules apply, as the heavy lifting
    is delegated to the :mod:`json` library.

    Output in case of error can be customized with the ``msg`` argument.

.. method:: TransactionTestCase.assertQuerysetEqual(qs, values, transform=repr, ordered=True)

    Asserts that a queryset ``qs`` returns a particular list of values
    ``values``.

    The comparison of the contents of ``qs`` and ``values`` is performed
    using the function ``transform``; by default, this means that the
    ``repr()`` of each value is compared. Any other callable can be used if
    ``repr()`` doesn't provide a unique or helpful comparison.

    By default, the comparison is also ordering dependent. If ``qs`` doesn't
    provide an implicit ordering, you can set the ``ordered`` parameter to
    ``False``, which turns the comparison into a Python set comparison.

    .. versionchanged:: 1.6

        The method now checks for undefined order and raises ``ValueError``
        if undefined order is spotted. The ordering is seen as undefined if
        the given ``qs`` isn't ordered and the comparison is against more
        than one ordered value.

.. method:: TransactionTestCase.assertNumQueries(num, func, *args, **kwargs)

    Asserts that when ``func`` is called with ``*args`` and ``**kwargs`` that
    ``num`` database queries are executed.

    If a ``"using"`` key is present in ``kwargs`` it is used as the database
    alias for which to check the number of queries. If you wish to call a
    function with a ``using`` parameter you can do it by wrapping the call
    with a ``lambda`` to add an extra parameter::

        self.assertNumQueries(7, lambda: my_function(using=7))

    You can also use this as a context manager::

        with self.assertNumQueries(2):
            Person.objects.create(name="Aaron")
            Person.objects.create(name="Daniel")

.. _topics-testing-email:

Email services
--------------

If any of your Django views send email using :doc:`Django's email
functionality </topics/email>`, you probably don't want to send email each
time you run a test using that view. For this reason, Django's test runner
automatically redirects all Django-sent email to a dummy outbox. This lets
you test every aspect of sending email -- from the number of messages sent to
the contents of each message -- without actually sending the messages.

The test runner accomplishes this by transparently replacing the normal email
backend with a testing backend. (Don't worry -- this has no effect on any
other email senders outside of Django, such as your machine's mail server, if
you're running one.)

.. currentmodule:: django.core.mail

.. data:: django.core.mail.outbox

    During test running, each outgoing email is saved in
    ``django.core.mail.outbox``. This is a simple list of all
    :class:`~django.core.mail.EmailMessage` instances that have been sent.
    The ``outbox`` attribute is a special attribute that is created *only*
    when the ``locmem`` email backend is used. It doesn't normally exist as
    part of the :mod:`django.core.mail` module and you can't import it
    directly. The code below shows how to access this attribute correctly.

Here's an example test that examines ``django.core.mail.outbox`` for length
and contents::

    from django.core import mail
    from django.test import TestCase

    class EmailTest(TestCase):
        def test_send_email(self):
            # Send message.
            mail.send_mail('Subject here', 'Here is the message.',
                'from@example.com', ['to@example.com'],
                fail_silently=False)

            # Test that one message has been sent.
            self.assertEqual(len(mail.outbox), 1)

            # Verify that the subject of the first message is correct.
            self.assertEqual(mail.outbox[0].subject, 'Subject here')

As noted previously, the test outbox is emptied at the start of every test in
a Django ``*TestCase``. To empty the outbox manually, assign the empty list
to ``mail.outbox``::

    from django.core import mail

    # Empty the test outbox
    mail.outbox = []

.. _skipping-tests:

Skipping tests
--------------

.. currentmodule:: django.test

The :mod:`unittest` library provides the :func:`@skipIf <unittest.skipIf>`
and :func:`@skipUnless <unittest.skipUnless>` decorators to allow you to skip
tests if you know ahead of time that those tests are going to fail under
certain conditions.

For example, if your test requires a particular optional library in order to
succeed, you could decorate the test case with :func:`@skipIf
<unittest.skipIf>`. Then, the test runner will report that the test wasn't
executed and why, instead of failing the test or omitting the test
altogether.

To supplement these test skipping behaviors, Django provides two additional
skip decorators. Instead of testing a generic boolean, these decorators check
the capabilities of the database, and skip the test if the database doesn't
support a specific named feature.

The decorators use a string identifier to describe database features. This
string corresponds to attributes of the database connection features class.
See the ``django.db.backends.BaseDatabaseFeatures`` class for a full list of
database features that can be used as a basis for skipping tests.

.. function:: skipIfDBFeature(feature_name_string)

    Skip the decorated test or ``TestCase`` if the named database feature is
    supported.

    For example, the following test will not be executed if the database
    supports transactions (e.g., it would *not* run under PostgreSQL, but it
    would under MySQL with MyISAM tables)::

        class MyTests(TestCase):
            @skipIfDBFeature('supports_transactions')
            def test_transaction_behavior(self):
                # ... conditional test code
                pass

    .. versionchanged:: 1.7

        ``skipIfDBFeature`` can now be used to decorate a ``TestCase`` class.

.. function:: skipUnlessDBFeature(feature_name_string)

    Skip the decorated test or ``TestCase`` if the named database feature is
    *not* supported.

    For example, the following test will only be executed if the database
    supports transactions (e.g., it would run under PostgreSQL, but *not*
    under MySQL with MyISAM tables)::

        class MyTests(TestCase):
            @skipUnlessDBFeature('supports_transactions')
            def test_transaction_behavior(self):
                # ... conditional test code
                pass

    .. versionchanged:: 1.7

        ``skipUnlessDBFeature`` can now be used to decorate a ``TestCase``
        class.

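    As the note above indicates, the decorator can also be applied to a whole
    ``TestCase`` class. Here is a minimal sketch (the class and method names
    are hypothetical)::

        from django.test import TestCase, skipUnlessDBFeature

        @skipUnlessDBFeature('supports_transactions')
        class TransactionBehaviorTests(TestCase):
            # Every test in this class is skipped if the database
            # doesn't support transactions.
            def test_rollback(self):
                pass  # ... conditional test code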