Advanced testing topics

The request factory

class RequestFactory

The RequestFactory shares the same API as the test client. However, instead of behaving like a browser, the RequestFactory provides a way to generate a request instance that can be used as the first argument to any view. This means you can test a view function the same way as you would test any other function -- as a black box, with exactly known inputs, testing for specific outputs.

The API for the RequestFactory is a slightly restricted subset of the test client API:

  • It only has access to the HTTP methods get(), post(), put(), delete(), head(), options(), and trace().
  • These methods accept all the same arguments except for follow. Since this is just a factory for producing requests, it's up to you to handle the response.
  • It does not support middleware. Session and authentication attributes must be supplied by the test itself if required for the view to function properly.

Example

The following is a unit test using the request factory:

from django.contrib.auth.models import AnonymousUser, User
from django.test import RequestFactory, TestCase

from .views import MyView, my_view

class SimpleTest(TestCase):
    def setUp(self):
        # Every test needs access to the request factory.
        self.factory = RequestFactory()
        self.user = User.objects.create_user(
            username='jacob', email='jacob@…', password='top_secret')

    def test_details(self):
        # Create an instance of a GET request.
        request = self.factory.get('/customer/details')

        # Recall that middleware are not supported. You can simulate a
        # logged-in user by setting request.user manually.
        request.user = self.user

        # Or you can simulate an anonymous user by setting request.user to
        # an AnonymousUser instance.
        request.user = AnonymousUser()

        # Test my_view() as if it were deployed at /customer/details
        response = my_view(request)
        # Use this syntax for class-based views.
        response = MyView.as_view()(request)
        self.assertEqual(response.status_code, 200)

AsyncRequestFactory

RequestFactory creates WSGI-like requests. If you want to create ASGI-like requests, including having a correct ASGI scope, you can instead use django.test.AsyncRequestFactory.

This class is directly API-compatible with RequestFactory, with the only difference being that it returns ASGIRequest instances rather than WSGIRequest instances. All of its methods are still synchronous callables.
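
For example, a minimal sketch of testing an asynchronous view with it might look like the following (my_async_view is a hypothetical coroutine-based view, not part of the example above):

from django.test import AsyncRequestFactory, TestCase

from .views import my_async_view  # hypothetical async view


class AsyncSimpleTest(TestCase):
    async def test_details(self):
        # The factory methods themselves are synchronous; only the view
        # call is awaited.
        request = AsyncRequestFactory().get('/customer/details')
        response = await my_async_view(request)
        self.assertEqual(response.status_code, 200)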

Testing class-based views

In order to test class-based views outside of the request/response cycle you must ensure that they are configured correctly, by calling setup() after instantiation.

For example, assuming the following class-based view:

views.py
from django.views.generic import TemplateView


class HomeView(TemplateView):
    template_name = 'myapp/home.html'

    def get_context_data(self, **kwargs):
        kwargs['environment'] = 'Production'
        return super().get_context_data(**kwargs)

You may directly test the get_context_data() method by first instantiating the view, then passing a request to setup(), before proceeding with your test's code:

tests.py
from django.test import RequestFactory, TestCase
from .views import HomeView


class HomePageTest(TestCase):
    def test_environment_set_in_context(self):
        request = RequestFactory().get('/')
        view = HomeView()
        view.setup(request)

        context = view.get_context_data()
        self.assertIn('environment', context)

Tests and multiple host names

The ALLOWED_HOSTS setting is validated when running tests. This allows the test client to differentiate between internal and external URLs.

Projects that support multitenancy or otherwise alter business logic based on the request's host and use custom host names in tests must include those hosts in ALLOWED_HOSTS.

The first option to do so is to add the hosts to your settings file. For example, the test suite for docs.djangoproject.com includes the following:

from django.test import TestCase

class SearchFormTestCase(TestCase):
    def test_empty_get(self):
        response = self.client.get('/en/dev/search/', HTTP_HOST='docs.djangoproject.dev:8000')
        self.assertEqual(response.status_code, 200)

and the settings file includes a list of the domains supported by the project:

ALLOWED_HOSTS = [
    'www.djangoproject.dev',
    'docs.djangoproject.dev',
    ...
]

Another option is to add the required hosts to ALLOWED_HOSTS using override_settings() or modify_settings(). This option may be preferable in standalone apps that can't package their own settings file or for projects where the list of domains is not static (e.g., subdomains for multitenancy). For example, you could write a test for the domain http://otherserver/ as follows:

from django.test import TestCase, override_settings

class MultiDomainTestCase(TestCase):
    @override_settings(ALLOWED_HOSTS=['otherserver'])
    def test_other_domain(self):
        response = self.client.get('http://otherserver/foo/bar/')
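
If you only need to append a host to the existing list, modify_settings() avoids restating the other entries; a sketch using the same illustrative otherserver host:

from django.test import TestCase, modify_settings

class MultiDomainTestCase(TestCase):
    @modify_settings(ALLOWED_HOSTS={'append': 'otherserver'})
    def test_other_domain(self):
        response = self.client.get('http://otherserver/foo/bar/')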

Disabling ALLOWED_HOSTS checking (ALLOWED_HOSTS = ['*']) when running tests prevents the test client from raising a helpful error message if you follow a redirect to an external URL.

Tests and multiple databases

Testing primary/replica configurations

If you're testing a multiple database configuration with primary/replica (referred to as master/slave by some databases) replication, this strategy of creating test databases poses a problem. When the test databases are created, there won't be any replication, and as a result, data created on the primary won't be seen on the replica.

To compensate for this, Django allows you to define that a database is a test mirror. Consider the following (simplified) example database configuration:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'HOST': 'dbprimary',
         # ... plus some other settings
    },
    'replica': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'HOST': 'dbreplica',
        'TEST': {
            'MIRROR': 'default',
        },
        # ... plus some other settings
    }
}

In this setup, we have two database servers: dbprimary, described by the database alias default, and dbreplica described by the alias replica. As you might expect, dbreplica has been configured by the database administrator as a read replica of dbprimary, so in normal activity, any write to default will appear on replica.

If Django created two independent test databases, this would break any tests that expected replication to occur. However, the replica database has been configured as a test mirror (using the MIRROR test setting), indicating that under testing, replica should be treated as a mirror of default.

When the test environment is configured, a test version of replica will not be created. Instead the connection to replica will be redirected to point at default. As a result, writes to default will appear on replica -- but because they are actually the same database, not because there is data replication between the two databases.
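
As a sketch of what a test relying on this behavior might look like (Person is a hypothetical model), a write through default is visible through replica because both aliases point at the same test database:

from django.test import TestCase

from .models import Person  # hypothetical model


class ReplicaMirrorTest(TestCase):
    databases = {'default', 'replica'}

    def test_replica_sees_primary_writes(self):
        Person.objects.using('default').create(name='Alice')
        # Both aliases hit the same test database, so the row is visible
        # via the replica alias without any actual replication.
        self.assertTrue(
            Person.objects.using('replica').filter(name='Alice').exists()
        )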

Controlling creation order for test databases

By default, Django will assume all databases depend on the default database and therefore always create the default database first. However, no guarantees are made on the creation order of any other databases in your test setup.

If your database configuration requires a specific creation order, you can specify the dependencies that exist using the DEPENDENCIES test setting. Consider the following (simplified) example database configuration:

DATABASES = {
    'default': {
        # ... db settings
        'TEST': {
            'DEPENDENCIES': ['diamonds'],
        },
    },
    'diamonds': {
        # ... db settings
        'TEST': {
            'DEPENDENCIES': [],
        },
    },
    'clubs': {
        # ... db settings
        'TEST': {
            'DEPENDENCIES': ['diamonds'],
        },
    },
    'spades': {
        # ... db settings
        'TEST': {
            'DEPENDENCIES': ['diamonds', 'hearts'],
        },
    },
    'hearts': {
        # ... db settings
        'TEST': {
            'DEPENDENCIES': ['diamonds', 'clubs'],
        },
    }
}

Under this configuration, the diamonds database will be created first, as it is the only database alias without dependencies. The default and clubs aliases will be created next (although the order of creation of this pair is not guaranteed), then hearts, and finally spades.

If there are any circular dependencies in the DEPENDENCIES definition, an ImproperlyConfigured exception will be raised.

Advanced features of TransactionTestCase

TransactionTestCase.available_apps

Warning

This attribute is a private API. It may be changed or removed without a deprecation period in the future, for instance to accommodate changes in application loading.

It's used to optimize Django's own test suite, which contains hundreds of models but no relations between models in different applications.

By default, available_apps is set to None. After each test, Django calls flush to reset the database state. This empties all tables and emits the post_migrate signal, which recreates one content type and four permissions for each model. This operation gets expensive proportionally to the number of models.

Setting available_apps to a list of applications instructs Django to behave as if only the models from these applications were available. The behavior of TransactionTestCase changes as follows:

  • post_migrate is fired before each test to create the content types and permissions for each model in available apps, in case they're missing.
  • After each test, Django empties only the tables corresponding to models in available apps. However, at the database level, truncation may cascade to related models in unavailable apps. Furthermore post_migrate isn't fired; it will be fired by the next TransactionTestCase, after the correct set of applications is selected.

Since the database isn't fully flushed, if a test creates instances of models not included in available_apps, they will leak and they may cause unrelated tests to fail. Be careful with tests that use sessions; the default session engine stores them in the database.

Since post_migrate isn't emitted after flushing the database, its state after a TransactionTestCase isn't the same as after a TestCase: it's missing the rows created by listeners to post_migrate. Considering the order in which tests are executed, this isn't an issue, provided either all TransactionTestCase in a given test suite declare available_apps, or none of them.

available_apps is mandatory in Django's own test suite.
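
A minimal sketch of declaring it (remembering that this is a private API; myapp is a placeholder app label):

from django.test import TransactionTestCase


class FastFlushTests(TransactionTestCase):
    # Only tables for models in these apps are flushed after each test.
    available_apps = [
        'myapp',  # placeholder app label
        'django.contrib.auth',
        'django.contrib.contenttypes',
    ]

    def test_something(self):
        ...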

TransactionTestCase.reset_sequences

Setting reset_sequences = True on a TransactionTestCase will make sure sequences are always reset before the test run:

class TestsThatDependsOnPrimaryKeySequences(TransactionTestCase):
    reset_sequences = True

    def test_animal_pk(self):
        lion = Animal.objects.create(name="lion", sound="roar")
        # lion.pk is guaranteed to always be 1
        self.assertEqual(lion.pk, 1)

Unless you are explicitly testing primary key sequence numbers, it is recommended that you do not hard code primary key values in tests.

Using reset_sequences = True will slow down the test, since the primary key reset is a relatively expensive database operation.

Enforce running test classes sequentially

If you have test classes that cannot be run in parallel (e.g. because they share a common resource), you can use django.test.testcases.SerializeMixin to run them sequentially. This mixin uses a filesystem lockfile.

For example, you can use __file__ to determine that all test classes in the same file that inherit from SerializeMixin will run sequentially:

import os

from django.test import TestCase
from django.test.testcases import SerializeMixin

class ImageTestCaseMixin(SerializeMixin):
    lockfile = __file__

    def setUp(self):
        self.filename = os.path.join(temp_storage_dir, 'my_file.png')
        self.file = create_file(self.filename)

class RemoveImageTests(ImageTestCaseMixin, TestCase):
    def test_remove_image(self):
        os.remove(self.filename)
        self.assertFalse(os.path.exists(self.filename))

class ResizeImageTests(ImageTestCaseMixin, TestCase):
    def test_resize_image(self):
        resize_image(self.file, (48, 48))
        self.assertEqual(get_image_size(self.file), (48, 48))

Using the Django test runner to test reusable applications

If you are writing a reusable application you may want to use the Django test runner to run your own test suite and thus benefit from the Django testing infrastructure.

A common practice is a tests directory next to the application code, with the following structure:

runtests.py
polls/
    __init__.py
    models.py
    ...
tests/
    __init__.py
    models.py
    test_settings.py
    tests.py

Let's take a look inside a couple of those files:

runtests.py
#!/usr/bin/env python
import os
import sys

import django
from django.conf import settings
from django.test.utils import get_runner

if __name__ == "__main__":
    os.environ['DJANGO_SETTINGS_MODULE'] = 'tests.test_settings'
    django.setup()
    TestRunner = get_runner(settings)
    test_runner = TestRunner()
    failures = test_runner.run_tests(["tests"])
    sys.exit(bool(failures))

This is the script that you invoke to run the test suite. It sets up the Django environment, creates the test database, and runs the tests.

For the sake of clarity, this example contains only the bare minimum necessary to use the Django test runner. You may want to add command-line options for controlling verbosity, passing in specific test labels to run, etc.

tests/test_settings.py
SECRET_KEY = 'fake-key'
INSTALLED_APPS = [
    "tests",
]

This file contains the Django settings required to run your app's tests.

Again, this is a minimal example; your tests may need additional settings to run.

Since the tests package is included in INSTALLED_APPS when running your tests, you can define test-only models in its models.py file.
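
For example, tests/models.py could hold a hypothetical model used only by the test suite:

tests/models.py
from django.db import models


class Thing(models.Model):
    # Only migrated into the test database, since "tests" is listed in
    # INSTALLED_APPS only in tests/test_settings.py.
    name = models.CharField(max_length=100)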

Using different testing frameworks

Clearly, unittest is not the only Python testing framework. While Django doesn't provide explicit support for alternative frameworks, it does provide a way to invoke tests constructed for an alternative framework as if they were normal Django tests.

When you run ./manage.py test, Django looks at the TEST_RUNNER setting to determine what to do. By default, TEST_RUNNER points to 'django.test.runner.DiscoverRunner'. This class defines the default Django testing behavior. This behavior involves:

  1. Performing global pre-test setup.
  2. Looking for tests in any file below the current directory whose name matches the pattern test*.py.
  3. Creating the test databases.
  4. Running migrate to install models and initial data into the test databases.
  5. Running the system checks.
  6. Running the tests that were found.
  7. Destroying the test databases.
  8. Performing global post-test teardown.

If you define your own test runner class and point TEST_RUNNER at that class, Django will execute your test runner whenever you run ./manage.py test. In this way, it is possible to use any test framework that can be executed from Python code, or to modify the Django test execution process to satisfy whatever testing requirements you may have.
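
As an illustrative sketch (not tied to any particular framework), a custom runner might subclass DiscoverRunner, override one step, and be referenced from the settings; the module path myproject.test_runner is a placeholder:

# myproject/test_runner.py
from django.test.runner import DiscoverRunner


class NoDatabaseTestRunner(DiscoverRunner):
    """Sketch of a runner that skips test database creation entirely."""

    def setup_databases(self, **kwargs):
        # Nothing to set up; return an empty config for teardown.
        return []

    def teardown_databases(self, old_config, **kwargs):
        pass

Setting TEST_RUNNER = 'myproject.test_runner.NoDatabaseTestRunner' in the settings file would then make ./manage.py test use this runner.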

Defining a test runner

A test runner is a class defining a run_tests() method. Django ships with a DiscoverRunner class that defines the default Django testing behavior. This class defines the run_tests() entry point, plus a selection of other methods that are used by run_tests() to set up, execute and tear down the test suite.

class DiscoverRunner(pattern='test*.py', top_level=None, verbosity=1, interactive=True, failfast=False, keepdb=False, reverse=False, debug_mode=False, debug_sql=False, parallel=0, tags=None, exclude_tags=None, test_name_patterns=None, pdb=False, buffer=False, enable_faulthandler=True, timing=True, shuffle=False, logger=None, **kwargs)

DiscoverRunner will search for tests in any file matching pattern.

top_level can be used to specify the directory containing your top-level Python modules. Usually Django can figure this out automatically, so it's not necessary to specify this option. If specified, it should generally be the directory containing your manage.py file.

verbosity determines the amount of notification and debug information that will be printed to the console; 0 is no output, 1 is normal output, and 2 is verbose output.

If interactive is True, the test suite has permission to ask the user for instructions when the test suite is executed. An example of this behavior would be asking for permission to delete an existing test database. If interactive is False, the test suite must be able to run without any manual intervention.

If failfast is True, the test suite will stop running after the first test failure is detected.

If keepdb is True, the test suite will use the existing database, or create one if necessary. If False, a new database will be created, prompting the user to remove the existing one, if present.

If reverse is True, test cases will be executed in the opposite order. This could be useful to debug tests that aren't properly isolated and have side effects. Grouping by test class is preserved when using this option. This option can be used in conjunction with --shuffle to reverse the order for a particular random seed.

debug_mode specifies what the DEBUG setting should be set to prior to running tests.

parallel specifies the number of processes. If parallel is greater than 1, the test suite will run in parallel processes. If there are fewer test cases than configured processes, Django will reduce the number of processes accordingly. Each process gets its own database. This option requires the third-party tblib package to display tracebacks correctly.

tags can be used to specify a set of tags for filtering tests. May be combined with exclude_tags.

exclude_tags can be used to specify a set of tags for excluding tests. May be combined with tags.

If debug_sql is True, failing test cases will output SQL queries logged to the django.db.backends logger as well as the traceback. If verbosity is 2, then queries in all tests are output.

test_name_patterns can be used to specify a set of patterns for filtering test methods and classes by their names.

If pdb is True, a debugger (pdb or ipdb) will be spawned at each test error or failure.

If buffer is True, outputs from passing tests will be discarded.

If enable_faulthandler is True, faulthandler will be enabled.

If timing is True, test timings, including database setup and total run time, will be shown.

If shuffle is an integer, test cases will be shuffled in a random order prior to execution, using the integer as a random seed. If shuffle is None, the seed will be generated randomly. In both cases, the seed will be logged and set to self.shuffle_seed prior to running tests. This option can be used to help detect tests that aren't properly isolated. Grouping by test class is preserved when using this option.

logger can be used to pass a Python Logger object. If provided, the logger will be used to log messages instead of printing to the console. The logger object will respect its logging level rather than the verbosity.

Django may, from time to time, extend the capabilities of the test runner by adding new arguments. The **kwargs declaration allows for this expansion. If you subclass DiscoverRunner or write your own test runner, ensure it accepts **kwargs.

Your test runner may also define additional command-line options. Create or override an add_arguments(cls, parser) class method and add custom arguments by calling parser.add_argument() inside the method, so that the test command will be able to use those arguments.
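
For instance, a runner sketch that accepts a hypothetical --exclude-slow option (the option name and attribute are illustrative; parsed options are passed to the runner's constructor as keyword arguments):

from django.test.runner import DiscoverRunner


class MyTestRunner(DiscoverRunner):
    def __init__(self, *args, exclude_slow=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.exclude_slow = exclude_slow

    @classmethod
    def add_arguments(cls, parser):
        super().add_arguments(parser)
        parser.add_argument(
            '--exclude-slow',
            action='store_true',
            help='Skip tests tagged as slow.',
        )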

New in Django 3.2:

The enable_faulthandler and timing arguments were added.

New in Django 4.0:

The logger and shuffle arguments were added.

Attributes

DiscoverRunner.test_suite

The class used to build the test suite. By default it is set to unittest.TestSuite. This can be overridden if you wish to implement different logic for collecting tests.

DiscoverRunner.test_runner

This is the class of the low-level test runner which is used to execute the individual tests and format the results. By default it is set to unittest.TextTestRunner. Despite the unfortunate similarity in naming conventions, this is not the same type of class as DiscoverRunner, which covers a broader set of responsibilities. You can override this attribute to modify the way tests are run and reported.

DiscoverRunner.test_loader

This is the class that loads tests, whether from TestCases or modules or otherwise and bundles them into test suites for the runner to execute. By default it is set to unittest.defaultTestLoader. You can override this attribute if your tests are going to be loaded in unusual ways.

Methods

DiscoverRunner.run_tests(test_labels, **kwargs)

Runs the test suite.

test_labels allows you to specify which tests to run and supports several formats (see DiscoverRunner.build_suite() for a list of supported formats).

Deprecated since version 4.0: extra_tests is a list of extra TestCase instances to add to the suite that is executed by the test runner. These extra tests are run in addition to those discovered in the modules listed in test_labels.

This method should return the number of tests that failed.

classmethod DiscoverRunner.add_arguments(parser)

Override this class method to add custom arguments accepted by the test management command. See argparse.ArgumentParser.add_argument() for details about adding arguments to a parser.

DiscoverRunner.setup_test_environment(**kwargs)

Sets up the test environment by calling setup_test_environment() and setting DEBUG to self.debug_mode (defaults to False).

DiscoverRunner.build_suite(test_labels=None, **kwargs)

Constructs a test suite that matches the test labels provided.

test_labels is a list of strings describing the tests to be run. A test label can take one of four forms:

  • path.to.test_module.TestCase.test_method -- Run a single test method in a test case.
  • path.to.test_module.TestCase -- Run all the test methods in a test case.
  • path.to.module -- Search for and run all tests in the named Python package or module.
  • path/to/directory -- Search for and run all tests below the named directory.

If test_labels has a value of None, the test runner will search for tests in all files below the current directory whose names match its pattern (see above).

Deprecated since version 4.0: extra_tests is a list of extra TestCase instances to add to the suite that is executed by the test runner. These extra tests are run in addition to those discovered in the modules listed in test_labels.

Returns a TestSuite instance ready to be run.
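
For example, combined with a setup like the runtests.py script shown earlier, labels can be passed straight to run_tests(); the dotted paths below are placeholders:

from django.test.runner import DiscoverRunner

# Assumes django.setup() has already been called, as in runtests.py above.
runner = DiscoverRunner(verbosity=2)
failures = runner.run_tests([
    'path.to.test_module.TestCase.test_method',
    'path/to/directory',
])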

DiscoverRunner.setup_databases(**kwargs)

Creates the test databases by calling setup_databases().

DiscoverRunner.run_checks(databases)

Runs the system checks on the test databases.

DiscoverRunner.run_suite(suite, **kwargs)

Runs the test suite.

Returns the result produced by the running of the test suite.

DiscoverRunner.get_test_runner_kwargs()

Returns the keyword arguments to instantiate the DiscoverRunner.test_runner with.

DiscoverRunner.teardown_databases(old_config, **kwargs)

Destroys the test databases, restoring pre-test conditions by calling teardown_databases().

DiscoverRunner.teardown_test_environment(**kwargs)

Restores the pre-test environment.

DiscoverRunner.suite_result(suite, result, **kwargs)

Computes and returns a return code based on a test suite, and the result from that test suite.

DiscoverRunner.log(msg, level=None)
New in Django 4.0.

If a logger is set, logs the message at the given integer logging level (e.g. logging.DEBUG, logging.INFO, or logging.WARNING). Otherwise, the message is printed to the console, respecting the current verbosity. For example, no message will be printed if the verbosity is 0, INFO and above will be printed if the verbosity is at least 1, and DEBUG will be printed if it is at least 2. The level defaults to logging.INFO.

Testing utilities

django.test.utils

To assist in the creation of your own test runner, Django provides a number of utility methods in the django.test.utils module.

setup_test_environment(debug=None)

Performs global pre-test setup, such as installing instrumentation for the template rendering system and setting up the dummy email outbox.

If debug isn't None, the DEBUG setting is updated to its value.

teardown_test_environment()

Performs global post-test teardown, such as removing instrumentation from the template system and restoring normal email services.

setup_databases(verbosity, interactive, *, time_keeper=None, keepdb=False, debug_sql=False, parallel=0, aliases=None, serialized_aliases=None, **kwargs)

Creates the test databases.

Returns a data structure that provides enough detail to undo the changes that have been made. This data will be provided to the teardown_databases() function at the conclusion of testing.

The aliases argument determines which DATABASES aliases test databases should be set up for. If it's not provided, it defaults to all of DATABASES aliases.

The serialized_aliases argument determines what subset of aliases test databases should have their state serialized to allow usage of the serialized_rollback feature. If it's not provided, it defaults to aliases.

Changed in Django 3.2:

The time_keeper kwarg was added, and all kwargs were made keyword-only.

Changed in Django 4.0:

The serialized_aliases kwarg was added.

teardown_databases(old_config, parallel=0, keepdb=False)

Destroys the test databases, restoring pre-test conditions.

old_config is a data structure defining the changes in the database configuration that need to be reversed. It's the return value of the setup_databases() method.
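
Taken together, a custom script might use these utilities roughly as follows (a sketch only, with error handling and the actual test execution elided):

from django.test.utils import (
    setup_databases, setup_test_environment,
    teardown_databases, teardown_test_environment,
)

setup_test_environment()
old_config = setup_databases(verbosity=1, interactive=False)
try:
    ...  # run tests against the test databases here
finally:
    # verbosity passed explicitly; the exact signature may vary across
    # Django versions.
    teardown_databases(old_config, verbosity=1)
    teardown_test_environment()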

django.db.connection.creation

The creation module of the database backend also provides some utilities that can be useful during testing.

create_test_db(verbosity=1, autoclobber=False, serialize=True, keepdb=False)

Creates a new test database and runs migrate against it.

verbosity has the same behavior as in run_tests().

autoclobber describes the behavior that will occur if a database with the same name as the test database is discovered:

  • If autoclobber is False, the user will be asked to approve destroying the existing database. sys.exit is called if the user does not approve.
  • If autoclobber is True, the database will be destroyed without consulting the user.

serialize determines if Django serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don't have transactions). You can set this to False to speed up creation time if you don't have any test classes with serialized_rollback=True.

If you are using the default test runner, you can control this with the SERIALIZE entry in the TEST dictionary.

keepdb determines if the test run should use an existing database, or create a new one. If True, the existing database will be used, or created if not present. If False, a new database will be created, prompting the user to remove the existing one, if present.

Returns the name of the test database that it created.

create_test_db() has the side effect of modifying the value of NAME in DATABASES to match the name of the test database.

destroy_test_db(old_database_name, verbosity=1, keepdb=False)

Destroys the database whose name is the value of NAME in DATABASES, and sets NAME to the value of old_database_name.

The verbosity argument has the same behavior as for DiscoverRunner.

If the keepdb argument is True, then the connection to the database will be closed, but the database will not be destroyed.
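
A sketch of driving these two methods directly from an already configured Django environment:

from django.db import connection

# create_test_db() changes NAME in DATABASES, so remember the original
# name in order to restore it later.
old_name = connection.settings_dict['NAME']
connection.creation.create_test_db(verbosity=1, autoclobber=True)

# ... exercise code against the test database here ...

connection.creation.destroy_test_db(old_name, verbosity=1)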

Integration with coverage.py

Code coverage describes how much source code has been tested. It shows which parts of your code are being exercised by tests and which are not. It's an important part of testing applications, so it's strongly recommended to check the coverage of your tests.

Django can be easily integrated with coverage.py, a tool for measuring code coverage of Python programs. First, install coverage.py. Next, run the following from your project folder containing manage.py:

coverage run --source='.' manage.py test myapp

This runs your tests and collects coverage data of the executed files in your project. You can see a report of this data by typing the following command:

coverage report

Note that some Django code was executed while running the tests, but it is not listed here because of the source flag passed to the previous command.

For more options like annotated HTML listings detailing missed lines, see the coverage.py docs.