Unit tests

Django comes with a test suite of its own, in the tests directory of the code base. It’s our policy to make sure all tests pass at all times.

We appreciate any and all contributions to the test suite!

The Django tests all use the testing infrastructure that ships with Django for testing applications. See Writing and running tests for an explanation of how to write new tests.

Running the unit tests

Quickstart

First, fork Django on GitHub.

Second, create and activate a virtual environment. If you’re not familiar with how to do that, read our contributing tutorial.

Next, clone your fork, install some requirements, and run the tests:

$ git clone https://github.com/YourGitHubName/django.git django-repo
$ cd django-repo/tests
$ python -m pip install -e ..
$ python -m pip install -r requirements/py3.txt
$ ./runtests.py
...\> git clone https://github.com/YourGitHubName/django.git django-repo
...\> cd django-repo\tests
...\> py -m pip install -e ..
...\> py -m pip install -r requirements\py3.txt
...\> runtests.py 

Installing the requirements will likely require some operating system packages that your computer doesn’t have installed. You can usually figure out which package to install by doing a web search for the last line or so of the error message. Try adding your operating system to the search query if needed.

If you have trouble installing the requirements, you can skip that step. See Running all the tests for details on installing the optional test dependencies. If you don’t have an optional dependency installed, the tests that require it will be skipped.

Running the tests requires a Django settings module that defines the databases to use. To help you get started, Django provides and uses a sample settings module that uses the SQLite database. See Using another settings module to learn how to use a different settings module to run the tests with a different database.

Having problems? See Troubleshooting for some common issues.

Running tests using tox

Tox is a tool for running tests in different virtual environments. Django includes a basic tox.ini that automates some checks that our build server performs on pull requests. To run the unit tests and other checks (such as import sorting, the documentation spelling checker, and code formatting), install and run the tox command from any place in the Django source tree:

$ python -m pip install tox
$ tox
...\> py -m pip install tox
...\> tox

By default, tox runs the test suite with the bundled test settings file for SQLite, black, blacken-docs, flake8, isort, and the documentation spelling checker. In addition to the system dependencies noted elsewhere in this documentation, the command python3 must be on your path and linked to the appropriate version of Python. A list of default environments can be seen as follows:

$ tox -l
py3
black
blacken-docs
flake8>=3.7.0
docs
isort>=5.1.0
...\> tox -l
py3
black
blacken-docs
flake8>=3.7.0
docs
isort>=5.1.0

Testing other Python versions and database backends

In addition to the default environments, tox supports running unit tests for other versions of Python and other database backends. Since Django’s test suite doesn’t bundle a settings file for database backends other than SQLite, however, you must create and provide your own test settings. For example, to run the tests on Python 3.10 using PostgreSQL:

$ tox -e py310-postgres -- --settings=my_postgres_settings
...\> tox -e py310-postgres -- --settings=my_postgres_settings

This command sets up a Python 3.10 virtual environment, installs Django’s test suite dependencies (including those for PostgreSQL), and calls runtests.py with the supplied arguments (in this case, --settings=my_postgres_settings).

The remainder of this documentation shows commands for running tests without tox; however, any option passed to runtests.py can also be passed to tox by prefixing the argument list with --, as above.

Tox also respects the DJANGO_SETTINGS_MODULE environment variable, if set. For example, the following is equivalent to the command above:

$ DJANGO_SETTINGS_MODULE=my_postgres_settings tox -e py310-postgres

Windows users should use:

...\> set DJANGO_SETTINGS_MODULE=my_postgres_settings
...\> tox -e py310-postgres

Running the JavaScript tests

Django includes a set of JavaScript unit tests for functions in certain contrib apps. The JavaScript tests aren’t run by default using tox because they require Node.js to be installed and aren’t necessary for the majority of patches. To run the JavaScript tests using tox:

$ tox -e javascript
...\> tox -e javascript

This command runs npm install to ensure the test requirements are up to date and then runs npm test.

Running tests using django-docker-box

django-docker-box allows you to run Django's test suite across all supported databases and Python versions. See the django-docker-box project page for installation and usage instructions.

Using another settings module

The included settings module (tests/test_sqlite.py) allows you to run the test suite using SQLite. If you want to run the tests using a different database, you’ll need to define your own settings file. Some tests, such as those for contrib.postgres, are specific to a particular database backend and will be skipped if run with a different backend. Some tests are skipped or expected failures on a particular database backend (see DatabaseFeatures.django_test_skips and DatabaseFeatures.django_test_expected_failures on each backend).

To run the tests with different settings, ensure that the module is on your PYTHONPATH and pass the module with --settings.
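For example, assuming a settings module named my_postgres_settings.py (a placeholder name, as above) in the current directory, an invocation along these lines should work:

$ PYTHONPATH=. ./runtests.py --settings=my_postgres_settings
...\> set PYTHONPATH=.
...\> runtests.py --settings=my_postgres_settings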

The DATABASES setting in any test settings module needs to define two databases:

  • A default database. This database should use the backend that you want to use for primary testing.

  • A database with the alias other. The other database is used to test that queries can be directed to different databases. This database should use the same backend as the default, and it must have a different name.

If you’re using a backend that isn’t SQLite, you will need to provide other details for each database:

  • The USER option needs to specify an existing user account for the database. That user needs permission to execute CREATE DATABASE so that the test database can be created.

  • The PASSWORD option needs to provide the password for the USER that has been specified.

Test databases get their names by prepending test_ to the value of the NAME settings for the databases defined in DATABASES. These test databases are deleted when the tests are finished.

You will also need to ensure that your database uses UTF-8 as the default character set. If your database server doesn’t use UTF-8 as a default charset, you will need to include a value for CHARSET in the test settings dictionary for the applicable database.
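Putting this together, a minimal PostgreSQL test settings module, modeled on the bundled tests/test_sqlite.py, might look like the following sketch (the user, password, and database names are placeholders for your own setup):

# my_postgres_settings.py -- a sketch of a custom test settings module.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # The test run will actually create and use test_django.
        "NAME": "django",
        # This account needs permission to execute CREATE DATABASE.
        "USER": "someuser",
        "PASSWORD": "somepassword",
        "HOST": "localhost",
    },
    "other": {
        # Same backend as default, but a different name.
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "django_other",
        "USER": "someuser",
        "PASSWORD": "somepassword",
        "HOST": "localhost",
    },
}

# The test suite also expects these to be defined.
SECRET_KEY = "django_tests_secret_key"
USE_TZ = False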

Running only some of the tests

Running the whole test suite takes a while, and running every single test could be redundant if, say, you just added a test to Django that you want to run quickly without running everything else. You can run a subset of the unit tests by appending the names of the test modules to runtests.py on the command line.

For example, if you’d like to run tests only for generic relations and internationalization, type:

$ ./runtests.py --settings=path.to.settings generic_relations i18n
...\> runtests.py --settings=path.to.settings generic_relations i18n

How do you find out the names of individual tests? Look in tests/ — each directory name there is the name of a test.

If you want to run only a particular class of tests, you can specify a list of paths to individual test classes. For example, to run the TranslationTests of the i18n module, type:

$ ./runtests.py --settings=path.to.settings i18n.tests.TranslationTests
...\> runtests.py --settings=path.to.settings i18n.tests.TranslationTests

Going beyond that, you can specify an individual test method like this:

$ ./runtests.py --settings=path.to.settings i18n.tests.TranslationTests.test_lazy_objects
...\> runtests.py --settings=path.to.settings i18n.tests.TranslationTests.test_lazy_objects

You can run tests starting at a specified top-level module with --start-at option. For example:

$ ./runtests.py --start-at=wsgi
...\> runtests.py --start-at=wsgi

You can also run tests starting after a specified top-level module with --start-after option. For example:

$ ./runtests.py --start-after=wsgi
...\> runtests.py --start-after=wsgi

Note that the --reverse option doesn't affect the --start-at or --start-after options. Moreover, these options cannot be used with test labels.

Running the Selenium tests

Some tests require Selenium and a web browser. To run these tests, you must install the selenium package and run the tests with the --selenium=<BROWSERS> option. For example, if you have Firefox and Google Chrome installed:

$ ./runtests.py --selenium=firefox,chrome
...\> runtests.py --selenium=firefox,chrome

See the selenium.webdriver package for the list of available browsers.

Specifying --selenium automatically sets --tags=selenium to run only the tests that require selenium.

Some browsers (e.g. Chrome or Firefox) support headless testing, which can be faster and more stable. Add the --headless option to enable this mode.
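For instance, to run the Selenium tests headlessly with Chrome:

$ ./runtests.py --selenium=chrome --headless
...\> runtests.py --selenium=chrome --headless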

To test admin UI changes, the selenium tests can be run with the --screenshots option enabled. Screenshots are saved in the tests/screenshots/ directory.
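For example, to capture screenshots while running the admin tests with Chrome (the admin_views test label here is just an illustration):

$ ./runtests.py --selenium=chrome --screenshots admin_views
...\> runtests.py --selenium=chrome --screenshots admin_views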

To define when screenshots should be taken during a selenium test, the test class must use the @django.test.selenium.screenshot_cases decorator with a list of supported screenshot types ("desktop_size", "mobile_size", "small_screen_size", "rtl", "dark", and "high_contrast"). It can then call self.take_screenshot("unique-screenshot-name") at the desired point to generate the screenshots. For example:

from django.test.selenium import SeleniumTestCase, screenshot_cases
from django.urls import reverse


class SeleniumTests(SeleniumTestCase):
    @screenshot_cases(["desktop_size", "mobile_size", "rtl", "dark", "high_contrast"])
    def test_login_button_centered(self):
        self.selenium.get(self.live_server_url + reverse("admin:login"))
        self.take_screenshot("login")
        ...

This generates multiple screenshots of the login page: one for a desktop-sized screen, one for a mobile-sized screen, one for right-to-left languages on desktop, one for dark mode on desktop, and one for high contrast mode on desktop (when using Chrome).

Changed in Django 5.1:

The --screenshots option and the @screenshot_cases decorator were added.

Running all the tests

If you want to run the full suite of tests, you'll need to install a number of optional dependencies. You can find these dependencies in pip requirements files inside the tests/requirements directory of the Django source tree and install them like so:

$ python -m pip install -r tests/requirements/py3.txt
...\> py -m pip install -r tests\requirements\py3.txt

If you encounter an error during installation, your system is probably missing a dependency for one or more of the Python packages. Consult the failing package's documentation or search the web for the error message that you encounter.

You can also install the database adapter(s) of your choice using oracle.txt, mysql.txt, or postgres.txt.

If you want to test the memcached or Redis cache backends, you’ll also need to define a CACHES setting that points at your memcached or Redis instance respectively.
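For example, a CACHES entry along these lines could be added to your test settings module (the LOCATION values are placeholders for your own instances):

CACHES = {
    # Pointing at a local memcached instance via pymemcache...
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    },
    # ...or, alternatively, at a local Redis instance:
    # "default": {
    #     "BACKEND": "django.core.cache.backends.redis.RedisCache",
    #     "LOCATION": "redis://127.0.0.1:6379",
    # },
}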

To run the GeoDjango tests, you will need to set up a spatial database and install the Geospatial libraries.

Each of these dependencies is optional. If you’re missing any of them, the associated tests will be skipped.

To run some of the autoreload tests, you’ll need to install the Watchman service.

Code coverage

Contributors are encouraged to run coverage on the test suite to identify areas that need additional tests. The coverage tool installation and use is described in testing code coverage.

To run coverage on the Django test suite using the standard test settings:

$ coverage run ./runtests.py --settings=test_sqlite
...\> coverage run runtests.py --settings=test_sqlite

After running coverage, combine all coverage statistics by running:

$ coverage combine
...\> coverage combine

After that generate the html report by running:

$ coverage html
...\> coverage html

When running coverage for the Django tests, the included .coveragerc settings file defines coverage_html as the output directory for the report and also excludes several directories not relevant to the results (test code or external code included in Django).

Contrib apps

Tests for contrib apps can be found in the tests/ directory, typically under <app_name>_tests. For example, tests for contrib.auth are located in tests/auth_tests.
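For example, to run only the contrib.auth tests:

$ ./runtests.py --settings=path.to.settings auth_tests
...\> runtests.py --settings=path.to.settings auth_tests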

Troubleshooting

Test suite hangs or shows failures on main branch

Ensure you have the latest point release of a supported Python version, since there are often bugs in earlier versions that may cause the test suite to fail or hang.

On macOS (High Sierra and newer versions), you might see this message logged, after which the tests hang:

objc[42074]: +[__NSPlaceholderDate initialize] may have been in progress in
another thread when fork() was called.

To avoid this, set an OBJC_DISABLE_INITIALIZE_FORK_SAFETY environment variable, for example:

$ OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ./runtests.py

Or add export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES to your shell’s startup file (e.g. ~/.profile).

Many test failures with UnicodeEncodeError

If the locales package is not installed, some tests will fail with a UnicodeEncodeError.

You can resolve this on Debian-based systems, for example, by running:

$ apt-get install locales
$ dpkg-reconfigure locales

You can resolve this for macOS systems by configuring your shell’s locale:

$ export LANG="en_US.UTF-8"
$ export LC_ALL="en_US.UTF-8"

Run the locale command to confirm the change. Optionally, add those export commands to your shell’s startup file (e.g. ~/.bashrc for Bash) to avoid having to retype them.

Tests that only fail in combination

In case a test passes when run in isolation but fails within the whole suite, we have some tools to help analyze the problem.

The --bisect option of runtests.py will run the failing test while halving the test set it is run together with on each iteration, often making it possible to identify a small number of tests that may be related to the failure.

For example, suppose that the failing test that works on its own is ModelTest.test_eq, then using:

$ ./runtests.py --bisect basic.tests.ModelTest.test_eq
...\> runtests.py --bisect basic.tests.ModelTest.test_eq

will try to determine a test that interferes with the given one. First, the test is run with the first half of the test suite. If a failure occurs, the first half of the test suite is split in two groups and each group is then run with the specified test. If there is no failure with the first half of the test suite, the second half of the test suite is run with the specified test and split appropriately as described earlier. The process repeats until the set of failing tests is minimized.

The --pair option runs the given test alongside every other test from the suite, letting you check if another test has side-effects that cause the failure. So:

$ ./runtests.py --pair basic.tests.ModelTest.test_eq
...\> runtests.py --pair basic.tests.ModelTest.test_eq

will pair test_eq with every test label.

With both --bisect and --pair, if you already suspect which cases might be responsible for the failure, you may limit tests to be cross-analyzed by specifying further test labels after the first one:

$ ./runtests.py --pair basic.tests.ModelTest.test_eq queries transactions
...\> runtests.py --pair basic.tests.ModelTest.test_eq queries transactions

You can also try running any set of tests in a random or reverse order using the --shuffle and --reverse options. This can help verify that executing tests in a different order does not cause any trouble:

$ ./runtests.py basic --shuffle
$ ./runtests.py basic --reverse
...\> runtests.py basic --shuffle
...\> runtests.py basic --reverse

Seeing the SQL queries run during a test

If you wish to examine the SQL being run in failing tests, you can turn on SQL logging using the --debug-sql option. If you combine this with --verbosity=2, all SQL queries will be output:

$ ./runtests.py basic --debug-sql
...\> runtests.py basic --debug-sql

Seeing the full traceback of a test failure

By default, tests are run in parallel with one process per core. When the tests are run in parallel, however, you'll only see a truncated traceback for any test failures. You can adjust this behavior with the --parallel option:

$ ./runtests.py basic --parallel=1
...\> runtests.py basic --parallel=1

You can also use the DJANGO_TEST_PROCESSES environment variable for this purpose.
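For example, the following is equivalent to the --parallel=1 command above:

$ DJANGO_TEST_PROCESSES=1 ./runtests.py basic
...\> set DJANGO_TEST_PROCESSES=1
...\> runtests.py basic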

Tips for writing tests

Isolating model registration

To avoid polluting the global apps registry and prevent unnecessary table creation, models defined in a test method should be bound to a temporary Apps instance. To do this, use the isolate_apps() decorator:

from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


class TestModelDefinition(SimpleTestCase):
    @isolate_apps("app_label")
    def test_model_definition(self):
        class TestModel(models.Model):
            pass

        ...

Setting app_label

Models defined in a test method with no explicit app_label are automatically assigned the label of the app in which their test class is located.

In order to make sure the models defined within the context of isolate_apps() instances are correctly installed, you should pass the set of targeted app_label as arguments:

tests/app_label/tests.py
from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


class TestModelDefinition(SimpleTestCase):
    @isolate_apps("app_label", "other_app_label")
    def test_model_definition(self):
        # This model automatically receives app_label='app_label'
        class TestModel(models.Model):
            pass

        class OtherAppModel(models.Model):
            class Meta:
                app_label = "other_app_label"

        ...