==========================
``QuerySet`` API reference
==========================

.. currentmodule:: django.db.models.query

This document describes the details of the ``QuerySet`` API. It builds on the
material presented in the :doc:`model </topics/db/models>` and :doc:`database
query </topics/db/queries>` guides, so you'll probably want to read and
understand those documents before reading this one.

Throughout this reference we'll use the :ref:`example Weblog models
<queryset-model-example>` presented in the :doc:`database query guide
</topics/db/queries>`.

.. _when-querysets-are-evaluated:

When ``QuerySet``\s are evaluated
=================================

Internally, a ``QuerySet`` can be constructed, filtered, sliced, and generally
passed around without actually hitting the database. No database activity
actually occurs until you do something to evaluate the queryset.

You can evaluate a ``QuerySet`` in the following ways:

* **Iteration.** A ``QuerySet`` is iterable, and it executes its database
  query the first time you iterate over it. For example, this will print
  the headline of all entries in the database::

      for e in Entry.objects.all():
          print(e.headline)

  Note: Don't use this if all you want to do is determine if at least one
  result exists. It's more efficient to use :meth:`~QuerySet.exists`.

* **Slicing.** As explained in :ref:`limiting-querysets`, a ``QuerySet`` can
  be sliced, using Python's array-slicing syntax. Slicing an unevaluated
  ``QuerySet`` usually returns another unevaluated ``QuerySet``, but Django
  will execute the database query if you use the "step" parameter of slice
  syntax, and will return a list. Slicing a ``QuerySet`` that has been
  evaluated also returns a list (see the sketch after this list).

  Also note that even though slicing an unevaluated ``QuerySet`` returns
  another unevaluated ``QuerySet``, modifying it further (e.g., adding
  more filters, or modifying ordering) is not allowed, since that does not
  translate well into SQL and it would not have a clear meaning either.

* **Pickling/Caching.** See the following section for details of what
  is involved when `pickling QuerySets`_. The important thing for the
  purposes of this section is that the results are read from the database.

* **repr().** A ``QuerySet`` is evaluated when you call ``repr()`` on it.
  This is for convenience in the Python interactive interpreter, so you can
  immediately see your results when using the API interactively.

* **len().** A ``QuerySet`` is evaluated when you call ``len()`` on it.
  This, as you might expect, returns the length of the result list.

  Note: If you only need to determine the number of records in the set (and
  don't need the actual objects), it's much more efficient to handle a count
  at the database level using SQL's ``SELECT COUNT(*)``. Django provides a
  :meth:`~QuerySet.count` method for precisely this reason.

* **list().** Force evaluation of a ``QuerySet`` by calling ``list()`` on
  it. For example::

      entry_list = list(Entry.objects.all())

* **bool().** Testing a ``QuerySet`` in a boolean context, such as using
  ``bool()``, ``or``, ``and`` or an ``if`` statement, will cause the query
  to be executed. If there is at least one result, the ``QuerySet`` is
  ``True``, otherwise ``False``. For example::

      if Entry.objects.filter(headline="Test"):
          print("There is at least one Entry with the headline Test")

  Note: If you only want to determine if at least one result exists (and don't
  need the actual objects), it's more efficient to use :meth:`~QuerySet.exists`.
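
To illustrate a few of these behaviors interactively, here is a short sketch
using the Weblog models (the objects and counts shown are illustrative)::

    >>> qs = Entry.objects.all()
    >>> qs[::2]   # The "step" parameter forces evaluation and returns a list.
    [<Entry: First entry>, <Entry: Third entry>]
    >>> len(qs)   # Evaluates the queryset; prefer count() if the number is all you need.
    5
    >>> Entry.objects.filter(headline="Test").exists()   # Cheaper than bool() for existence checks.
    False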

.. _pickling QuerySets:

Pickling ``QuerySet``\s
-----------------------

If you :mod:`pickle` a ``QuerySet``, this will force all the results to be loaded
into memory prior to pickling. Pickling is usually used as a precursor to
caching and when the cached queryset is reloaded, you want the results to
already be present and ready for use (reading from the database can take some
time, defeating the purpose of caching). This means that when you unpickle a
``QuerySet``, it contains the results at the moment it was pickled, rather
than the results that are currently in the database.

If you only want to pickle the necessary information to recreate the
``QuerySet`` from the database at a later time, pickle the ``query`` attribute
of the ``QuerySet``. You can then recreate the original ``QuerySet`` (without
any results loaded) using some code like this::

    >>> import pickle
    >>> query = pickle.loads(s)     # Assuming 's' is the pickled string.
    >>> qs = MyModel.objects.all()
    >>> qs.query = query            # Restore the original 'query'.

The ``query`` attribute is an opaque object. It represents the internals of
the query construction and is not part of the public API. However, it is safe
(and fully supported) to pickle and unpickle the attribute's contents as
described here.

.. admonition:: You can't share pickles between versions

    Pickles of ``QuerySets`` are only valid for the version of Django that
    was used to generate them. If you generate a pickle using Django
    version N, there is no guarantee that pickle will be readable with
    Django version N+1. Pickles should not be used as part of a long-term
    archival strategy.

    Since pickle compatibility errors can be difficult to diagnose, such as
    silently corrupted objects, a ``RuntimeWarning`` is raised when you try to
    unpickle a queryset in a Django version that is different than the one in
    which it was pickled.

.. _queryset-api:

``QuerySet`` API
================

Here's the formal declaration of a ``QuerySet``:

.. class:: QuerySet(model=None, query=None, using=None, hints=None)

    Usually when you interact with a ``QuerySet`` you'll use it by
    :ref:`chaining filters <chaining-filters>`. To make this work, most
    ``QuerySet`` methods return new querysets. These methods are covered in
    detail later in this section.

    The ``QuerySet`` class has two public attributes you can use for
    introspection:

    .. attribute:: ordered

        ``True`` if the ``QuerySet`` is ordered — i.e. has an
        :meth:`order_by()` clause or a default ordering on the model.
        ``False`` otherwise.

    .. attribute:: db

        The database that will be used if this query is executed now.

    .. note::

        The ``query`` parameter to :class:`QuerySet` exists so that specialized
        query subclasses can reconstruct internal query state. The value of the
        parameter is an opaque representation of that query state and is not
        part of a public API. To put it another way: if you need to ask, you
        don't need to use it.

.. currentmodule:: django.db.models.query.QuerySet

Methods that return new ``QuerySet``\s
--------------------------------------

Django provides a range of ``QuerySet`` refinement methods that modify either
the types of results returned by the ``QuerySet`` or the way its SQL query is
executed.

``filter()``
~~~~~~~~~~~~

.. method:: filter(**kwargs)

Returns a new ``QuerySet`` containing objects that match the given lookup
parameters.

The lookup parameters (``**kwargs``) should be in the format described in
`Field lookups`_ below. Multiple parameters are joined via ``AND`` in the
underlying SQL statement.

If you need to execute more complex queries (for example, queries with ``OR``
statements), you can use :class:`Q objects <django.db.models.Q>`.
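
For example, the following sketch (based on the Weblog models) first combines
two lookups with ``AND``, then uses a ``Q`` object to express an ``OR``::

    >>> from django.db.models import Q
    >>> Entry.objects.filter(pub_date__year=2005, headline__startswith='What')
    >>> Entry.objects.filter(Q(headline__startswith='What') | Q(pub_date__year=2005))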

``exclude()``
~~~~~~~~~~~~~

.. method:: exclude(**kwargs)

Returns a new ``QuerySet`` containing objects that do *not* match the given
lookup parameters.

The lookup parameters (``**kwargs``) should be in the format described in
`Field lookups`_ below. Multiple parameters are joined via ``AND`` in the
underlying SQL statement, and the whole thing is enclosed in a ``NOT()``.

This example excludes all entries whose ``pub_date`` is later than 2005-1-3
AND whose ``headline`` is "Hello"::

    Entry.objects.exclude(pub_date__gt=datetime.date(2005, 1, 3), headline='Hello')

In SQL terms, that evaluates to:

.. code-block:: sql

    SELECT ...
    WHERE NOT (pub_date > '2005-1-3' AND headline = 'Hello')

This example excludes all entries whose ``pub_date`` is later than 2005-1-3
OR whose headline is "Hello"::

    Entry.objects.exclude(pub_date__gt=datetime.date(2005, 1, 3)).exclude(headline='Hello')

In SQL terms, that evaluates to:

.. code-block:: sql

    SELECT ...
    WHERE NOT pub_date > '2005-1-3'
    AND NOT headline = 'Hello'

Note the second example is more restrictive.

If you need to execute more complex queries (for example, queries with ``OR``
statements), you can use :class:`Q objects <django.db.models.Q>`.

``annotate()``
~~~~~~~~~~~~~~

.. method:: annotate(*args, **kwargs)

Annotates each object in the ``QuerySet`` with the provided list of :doc:`query
expressions </ref/models/expressions>`. An expression may be a simple value, a
reference to a field on the model (or any related models), or an aggregate
expression (averages, sums, etc.) that has been computed over the objects that
are related to the objects in the ``QuerySet``.

Each argument to ``annotate()`` is an annotation that will be added
to each object in the ``QuerySet`` that is returned.

The aggregation functions that are provided by Django are described
in `Aggregation Functions`_ below.

Annotations specified using keyword arguments will use the keyword as
the alias for the annotation. Anonymous arguments will have an alias
generated for them based upon the name of the aggregate function and
the model field that is being aggregated. Only aggregate expressions
that reference a single field can be anonymous arguments. Everything
else must be a keyword argument.

For example, if you were manipulating a list of blogs, you may want
to determine how many entries have been made in each blog::

    >>> from django.db.models import Count
    >>> q = Blog.objects.annotate(Count('entry'))
    # The name of the first blog
    >>> q[0].name
    'Blogasaurus'
    # The number of entries on the first blog
    >>> q[0].entry__count
    42

The ``Blog`` model doesn't define an ``entry__count`` attribute by itself,
but by using a keyword argument to specify the aggregate function, you can
control the name of the annotation::

    >>> q = Blog.objects.annotate(number_of_entries=Count('entry'))
    # The number of entries on the first blog, using the name provided
    >>> q[0].number_of_entries
    42

For an in-depth discussion of aggregation, see :doc:`the topic guide on
Aggregation </topics/db/aggregation>`.
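
Annotations combine naturally with other ``QuerySet`` methods. For instance,
you can filter on the annotation (a sketch; the threshold of five entries is
arbitrary)::

    >>> from django.db.models import Count
    >>> Blog.objects.annotate(entry_count=Count('entry')).filter(entry_count__gt=5)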

``order_by()``
~~~~~~~~~~~~~~

.. method:: order_by(*fields)

By default, results returned by a ``QuerySet`` are ordered by the ordering
tuple given by the ``ordering`` option in the model's ``Meta``. You can
override this on a per-``QuerySet`` basis by using the ``order_by`` method.

Example::

    Entry.objects.filter(pub_date__year=2005).order_by('-pub_date', 'headline')

The result above will be ordered by ``pub_date`` descending, then by
``headline`` ascending. The negative sign in front of ``"-pub_date"`` indicates
*descending* order. Ascending order is implied. To order randomly, use ``"?"``,
like so::

    Entry.objects.order_by('?')

Note: ``order_by('?')`` queries may be expensive and slow, depending on the
database backend you're using.

To order by a field in a different model, use the same syntax as when you are
querying across model relations. That is, the name of the field, followed by a
double underscore (``__``), followed by the name of the field in the new model,
and so on for as many models as you want to join. For example::

    Entry.objects.order_by('blog__name', 'headline')

If you try to order by a field that is a relation to another model, Django will
use the default ordering on the related model, or order by the related model's
primary key if there is no :attr:`Meta.ordering
<django.db.models.Options.ordering>` specified. For example, since the ``Blog``
model has no default ordering specified::

    Entry.objects.order_by('blog')

...is identical to::

    Entry.objects.order_by('blog__id')

If ``Blog`` had ``ordering = ['name']``, then the first queryset would be
identical to::

    Entry.objects.order_by('blog__name')

You can also order by :doc:`query expressions </ref/models/expressions>` by
calling :meth:`~.Expression.asc` or :meth:`~.Expression.desc` on the
expression::

    Entry.objects.order_by(Coalesce('summary', 'headline').desc())

:meth:`~.Expression.asc` and :meth:`~.Expression.desc` have arguments
(``nulls_first`` and ``nulls_last``) that control how null values are sorted.
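
For example, the following sketch keeps descending date order while pushing
rows with a missing value to the end (this assumes the field can actually be
``NULL``)::

    >>> from django.db.models import F
    >>> Entry.objects.order_by(F('pub_date').desc(nulls_last=True))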

Be cautious when ordering by fields in related models if you are also using
:meth:`distinct()`. See the note in :meth:`distinct` for an explanation of how
related model ordering can change the expected results.

.. note::
    It is permissible to specify a multi-valued field to order the results by
    (for example, a :class:`~django.db.models.ManyToManyField` field, or the
    reverse relation of a :class:`~django.db.models.ForeignKey` field).

    Consider this case::

        class Event(Model):
            parent = models.ForeignKey(
                'self',
                on_delete=models.CASCADE,
                related_name='children',
            )
            date = models.DateField()

        Event.objects.order_by('children__date')

    Here, there could potentially be multiple ordering data for each ``Event``;
    each ``Event`` with multiple ``children`` will be returned multiple times
    into the new ``QuerySet`` that ``order_by()`` creates. In other words,
    using ``order_by()`` on the ``QuerySet`` could return more items than you
    were working on to begin with - which is probably neither expected nor
    useful.

    Thus, take care when using a multi-valued field to order the results. **If**
    you can be sure that there will only be one ordering piece of data for each
    of the items you're ordering, this approach should not present problems. If
    not, make sure the results are what you expect.

There's no way to specify whether ordering should be case sensitive. With
respect to case-sensitivity, Django will order results however your database
backend normally orders them.

You can order by a field converted to lowercase with
:class:`~django.db.models.functions.Lower` which will achieve case-consistent
ordering::

    Entry.objects.order_by(Lower('headline').desc())

If you don't want any ordering to be applied to a query, not even the default
ordering, call :meth:`order_by()` with no parameters.

You can tell if a query is ordered or not by checking the
:attr:`.QuerySet.ordered` attribute, which will be ``True`` if the
``QuerySet`` has been ordered in any way.
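
For example (a sketch; the second value depends on whether ``Entry`` declares
a default ordering in its ``Meta``)::

    >>> Entry.objects.order_by('pub_date').ordered
    True
    >>> Entry.objects.all().ordered
    False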

Each ``order_by()`` call will clear any previous ordering. For example, this
query will be ordered by ``pub_date`` and not ``headline``::

    Entry.objects.order_by('headline').order_by('pub_date')

.. warning::

    Ordering is not a free operation. Each field you add to the ordering
    incurs a cost to your database. Each foreign key you add will
    implicitly include all of its default orderings as well.

    If a query doesn't have an ordering specified, results are returned from
    the database in an unspecified order. A particular ordering is guaranteed
    only when ordering by a set of fields that uniquely identify each object in
    the results. For example, if a ``name`` field isn't unique, ordering by it
    won't guarantee objects with the same name always appear in the same order.

``reverse()``
~~~~~~~~~~~~~

.. method:: reverse()

Use the ``reverse()`` method to reverse the order in which a queryset's
elements are returned. Calling ``reverse()`` a second time restores the
ordering back to the normal direction.

To retrieve the "last" five items in a queryset, you could do this::

    my_queryset.reverse()[:5]

Note that this is not quite the same as slicing from the end of a sequence in
Python. The above example will return the last item first, then the
penultimate item and so on. If we had a Python sequence and looked at
``seq[-5:]``, we would see the fifth-last item first. Django doesn't support
that mode of access (slicing from the end), because it's not possible to do it
efficiently in SQL.

Also, note that ``reverse()`` should generally only be called on a ``QuerySet``
which has a defined ordering (e.g., when querying against a model which defines
a default ordering, or when using :meth:`order_by()`). If no such ordering is
defined for a given ``QuerySet``, calling ``reverse()`` on it has no real
effect (the ordering was undefined prior to calling ``reverse()``, and will
remain undefined afterward).

``distinct()``
~~~~~~~~~~~~~~

.. method:: distinct(*fields)

Returns a new ``QuerySet`` that uses ``SELECT DISTINCT`` in its SQL query. This
eliminates duplicate rows from the query results.

By default, a ``QuerySet`` will not eliminate duplicate rows. In practice, this
is rarely a problem, because simple queries such as ``Blog.objects.all()``
don't introduce the possibility of duplicate result rows. However, if your
query spans multiple tables, it's possible to get duplicate results when a
``QuerySet`` is evaluated. That's when you'd use ``distinct()``.

.. note::
    Any fields used in an :meth:`order_by` call are included in the SQL
    ``SELECT`` columns. This can sometimes lead to unexpected results when used
    in conjunction with ``distinct()``. If you order by fields from a related
    model, those fields will be added to the selected columns and they may make
    otherwise duplicate rows appear to be distinct. Since the extra columns
    don't appear in the returned results (they are only there to support
    ordering), it sometimes looks like non-distinct results are being returned.

    Similarly, if you use a :meth:`values()` query to restrict the columns
    selected, the columns used in any :meth:`order_by()` (or default model
    ordering) will still be involved and may affect uniqueness of the results.

    The moral here is that if you are using ``distinct()`` be careful about
    ordering by related models. Similarly, when using ``distinct()`` and
    :meth:`values()` together, be careful when ordering by fields not in the
    :meth:`values()` call.

On PostgreSQL only, you can pass positional arguments (``*fields``) in order to
specify the names of fields to which the ``DISTINCT`` should apply. This
translates to a ``SELECT DISTINCT ON`` SQL query. Here's the difference. For a
normal ``distinct()`` call, the database compares *each* field in each row when
determining which rows are distinct. For a ``distinct()`` call with specified
field names, the database will only compare the specified field names.

.. note::
    When you specify field names, you *must* provide an ``order_by()`` in the
    ``QuerySet``, and the fields in ``order_by()`` must start with the fields in
    ``distinct()``, in the same order.

    For example, ``SELECT DISTINCT ON (a)`` gives you the first row for each
    value in column ``a``. If you don't specify an order, you'll get some
    arbitrary row.

Examples (those after the first will only work on PostgreSQL)::

    >>> Author.objects.distinct()
    [...]

    >>> Entry.objects.order_by('pub_date').distinct('pub_date')
    [...]

    >>> Entry.objects.order_by('blog').distinct('blog')
    [...]

    >>> Entry.objects.order_by('author', 'pub_date').distinct('author', 'pub_date')
    [...]

    >>> Entry.objects.order_by('blog__name', 'mod_date').distinct('blog__name', 'mod_date')
    [...]

    >>> Entry.objects.order_by('author', 'pub_date').distinct('author')
    [...]

.. note::
    Keep in mind that :meth:`order_by` uses any default related model ordering
    that has been defined. You might have to explicitly order by the relation
    ``_id`` or referenced field to make sure the ``DISTINCT ON`` expressions
    match those at the beginning of the ``ORDER BY`` clause. For example, if
    the ``Blog`` model defined an :attr:`~django.db.models.Options.ordering` by
    ``name``::

        Entry.objects.order_by('blog').distinct('blog')

    ...wouldn't work because the query would be ordered by ``blog__name`` thus
    mismatching the ``DISTINCT ON`` expression. You'd have to explicitly order
    by the relation ``_id`` field (``blog_id`` in this case) or the referenced
    one (``blog__pk``) to make sure both expressions match.

``values()``
~~~~~~~~~~~~

.. method:: values(*fields, **expressions)

Returns a ``QuerySet`` that returns dictionaries, rather than model instances,
when used as an iterable.

Each of those dictionaries represents an object, with the keys corresponding to
the attribute names of model objects.

This example compares the dictionaries of ``values()`` with the normal model
objects::

    # This list contains a Blog object.
    >>> Blog.objects.filter(name__startswith='Beatles')
    <QuerySet [<Blog: Beatles Blog>]>

    # This list contains a dictionary.
    >>> Blog.objects.filter(name__startswith='Beatles').values()
    <QuerySet [{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}]>

The ``values()`` method takes optional positional arguments, ``*fields``, which
specify field names to which the ``SELECT`` should be limited. If you specify
the fields, each dictionary will contain only the field keys/values for the
fields you specify. If you don't specify the fields, each dictionary will
contain a key and value for every field in the database table.

Example::

    >>> Blog.objects.values()
    <QuerySet [{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}]>
    >>> Blog.objects.values('id', 'name')
    <QuerySet [{'id': 1, 'name': 'Beatles Blog'}]>

The ``values()`` method also takes optional keyword arguments,
``**expressions``, which are passed through to :meth:`annotate`::

    >>> from django.db.models.functions import Lower
    >>> Blog.objects.values(lower_name=Lower('name'))
    <QuerySet [{'lower_name': 'beatles blog'}]>

You can use built-in and :doc:`custom lookups </howto/custom-lookups>` in
ordering. For example::

    >>> from django.db.models import CharField
    >>> from django.db.models.functions import Lower
    >>> CharField.register_lookup(Lower)
    >>> Blog.objects.values('name__lower')
    <QuerySet [{'name__lower': 'beatles blog'}]>

An aggregate within a ``values()`` clause is applied before other arguments
within the same ``values()`` clause. If you need to group by another value,
add it to an earlier ``values()`` clause instead. For example::

    >>> from django.db.models import Count
    >>> Blog.objects.values('entry__authors', entries=Count('entry'))
    <QuerySet [{'entry__authors': 1, 'entries': 20}, {'entry__authors': 1, 'entries': 13}]>
    >>> Blog.objects.values('entry__authors').annotate(entries=Count('entry'))
    <QuerySet [{'entry__authors': 1, 'entries': 33}]>

A few subtleties that are worth mentioning:

* If you have a field called ``foo`` that is a
  :class:`~django.db.models.ForeignKey`, the default ``values()`` call
  will return a dictionary key called ``foo_id``, since this is the name
  of the hidden model attribute that stores the actual value (the ``foo``
  attribute refers to the related model). When you are calling
  ``values()`` and passing in field names, you can pass in either ``foo``
  or ``foo_id`` and you will get back the same thing (the dictionary key
  will match the field name you passed in).

  For example::

      >>> Entry.objects.values()
      <QuerySet [{'blog_id': 1, 'headline': 'First Entry', ...}, ...]>

      >>> Entry.objects.values('blog')
      <QuerySet [{'blog': 1}, ...]>

      >>> Entry.objects.values('blog_id')
      <QuerySet [{'blog_id': 1}, ...]>

* When using ``values()`` together with :meth:`distinct()`, be aware that
  ordering can affect the results. See the note in :meth:`distinct` for
  details.

* If you use a ``values()`` clause after an :meth:`extra()` call,
  any fields defined by a ``select`` argument in the :meth:`extra()` must
  be explicitly included in the ``values()`` call. Any :meth:`extra()` call
  made after a ``values()`` call will have its extra selected fields
  ignored.

* Calling :meth:`only()` and :meth:`defer()` after ``values()`` doesn't make
  sense, so doing so will raise a ``NotImplementedError``.

* Combining transforms and aggregates requires the use of two :meth:`annotate`
  calls, either explicitly or as keyword arguments to :meth:`values`. As above,
  if the transform has been registered on the relevant field type the first
  :meth:`annotate` can be omitted, thus the following examples are equivalent::

      >>> from django.db.models import CharField, Count
      >>> from django.db.models.functions import Lower
      >>> CharField.register_lookup(Lower)
      >>> Blog.objects.values('entry__authors__name__lower').annotate(entries=Count('entry'))
      <QuerySet [{'entry__authors__name__lower': 'test author', 'entries': 33}]>
      >>> Blog.objects.values(
      ...     entry__authors__name__lower=Lower('entry__authors__name')
      ... ).annotate(entries=Count('entry'))
      <QuerySet [{'entry__authors__name__lower': 'test author', 'entries': 33}]>
      >>> Blog.objects.annotate(
      ...     entry__authors__name__lower=Lower('entry__authors__name')
      ... ).values('entry__authors__name__lower').annotate(entries=Count('entry'))
      <QuerySet [{'entry__authors__name__lower': 'test author', 'entries': 33}]>

``values()`` is useful when you know you're only going to need values from a
small number of the available fields and you won't need the functionality of a
model instance object. It's more efficient to select only the fields you need
to use.

Finally, note that you can call ``filter()``, ``order_by()``, etc. after the
``values()`` call; that means that these two calls are identical::

    Blog.objects.values().order_by('id')
    Blog.objects.order_by('id').values()

The people who made Django prefer to put all the SQL-affecting methods first,
followed (optionally) by any output-affecting methods (such as ``values()``),
but it doesn't really matter. This is your chance to really flaunt your
individualism.

You can also refer to fields on related models with reverse relations through
``OneToOneField``, ``ForeignKey`` and ``ManyToManyField`` attributes::

    >>> Blog.objects.values('name', 'entry__headline')
    <QuerySet [{'name': 'My blog', 'entry__headline': 'An entry'},
         {'name': 'My blog', 'entry__headline': 'Another entry'}, ...]>

.. warning::

   Because :class:`~django.db.models.ManyToManyField` attributes and reverse
   relations can have multiple related rows, including these can have a
   multiplier effect on the size of your result set. This will be especially
   pronounced if you include multiple such fields in your ``values()`` query,
   in which case all possible combinations will be returned.

``values_list()``
~~~~~~~~~~~~~~~~~

.. method:: values_list(*fields, flat=False, named=False)

This is similar to ``values()`` except that instead of returning dictionaries,
it returns tuples when iterated over. Each tuple contains the value from the
respective field or expression passed into the ``values_list()`` call — so the
first item is the first field, etc. For example::

    >>> Entry.objects.values_list('id', 'headline')
    <QuerySet [(1, 'First entry'), ...]>
    >>> from django.db.models.functions import Lower
    >>> Entry.objects.values_list('id', Lower('headline'))
    <QuerySet [(1, 'first entry'), ...]>

If you only pass in a single field, you can also pass in the ``flat``
parameter. If ``True``, this will mean the returned results are single values,
rather than one-tuples. An example should make the difference clearer::

    >>> Entry.objects.values_list('id').order_by('id')
    <QuerySet [(1,), (2,), (3,), ...]>

    >>> Entry.objects.values_list('id', flat=True).order_by('id')
    <QuerySet [1, 2, 3, ...]>

It is an error to pass in ``flat`` when there is more than one field.

You can pass ``named=True`` to get results as a
:func:`~python:collections.namedtuple`::

    >>> Entry.objects.values_list('id', 'headline', named=True)
    <QuerySet [Row(id=1, headline='First entry'), ...]>

Using a named tuple may make the results more readable, at the expense
of a small performance penalty for transforming the results into a named tuple.

If you don't pass any values to ``values_list()``, it will return all the
fields in the model, in the order they were declared.

A common need is to get a specific field value of a certain model instance. To
achieve that, use ``values_list()`` followed by a ``get()`` call::

    >>> Entry.objects.values_list('headline', flat=True).get(pk=1)
    'First entry'

``values()`` and ``values_list()`` are both intended as optimizations for a
specific use case: retrieving a subset of data without the overhead of creating
a model instance. This metaphor falls apart when dealing with many-to-many and
other multivalued relations (such as the one-to-many relation of a reverse
foreign key) because the "one row, one object" assumption doesn't hold.

For example, notice the behavior when querying across a
:class:`~django.db.models.ManyToManyField`::

    >>> Author.objects.values_list('name', 'entry__headline')
    <QuerySet [('Noam Chomsky', 'Impressions of Gaza'),
     ('George Orwell', 'Why Socialists Do Not Believe in Fun'),
     ('George Orwell', 'In Defence of English Cooking'),
     ('Don Quixote', None)]>

Authors with multiple entries appear multiple times and authors without any
entries have ``None`` for the entry headline.

Similarly, when querying a reverse foreign key, ``None`` appears for entries
not having any author::

    >>> Entry.objects.values_list('authors')
    <QuerySet [('Noam Chomsky',), ('George Orwell',), (None,)]>

``dates()``
~~~~~~~~~~~

.. method:: dates(field, kind, order='ASC')

Returns a ``QuerySet`` that evaluates to a list of :class:`datetime.date`
objects representing all available dates of a particular kind within the
contents of the ``QuerySet``.

``field`` should be the name of a ``DateField`` of your model.
``kind`` should be either ``"year"``, ``"month"``, ``"week"``, or ``"day"``.
Each :class:`datetime.date` object in the result list is "truncated" to the
given ``kind``.

* ``"year"`` returns a list of all distinct year values for the field.
* ``"month"`` returns a list of all distinct year/month values for the
  field.
* ``"week"`` returns a list of all distinct year/week values for the field. All
  dates will be a Monday.
* ``"day"`` returns a list of all distinct year/month/day values for the
  field.

``order``, which defaults to ``'ASC'``, should be either ``'ASC'`` or
``'DESC'``. This specifies how to order the results.

Examples::

    >>> Entry.objects.dates('pub_date', 'year')
    [datetime.date(2005, 1, 1)]
    >>> Entry.objects.dates('pub_date', 'month')
    [datetime.date(2005, 2, 1), datetime.date(2005, 3, 1)]
    >>> Entry.objects.dates('pub_date', 'week')
    [datetime.date(2005, 2, 14), datetime.date(2005, 3, 14)]
    >>> Entry.objects.dates('pub_date', 'day')
    [datetime.date(2005, 2, 20), datetime.date(2005, 3, 20)]
    >>> Entry.objects.dates('pub_date', 'day', order='DESC')
    [datetime.date(2005, 3, 20), datetime.date(2005, 2, 20)]
    >>> Entry.objects.filter(headline__contains='Lennon').dates('pub_date', 'day')
    [datetime.date(2005, 3, 20)]

``datetimes()``
~~~~~~~~~~~~~~~

.. method:: datetimes(field_name, kind, order='ASC', tzinfo=None, is_dst=None)

Returns a ``QuerySet`` that evaluates to a list of :class:`datetime.datetime`
objects representing all available dates of a particular kind within the
contents of the ``QuerySet``.

``field_name`` should be the name of a ``DateTimeField`` of your model.

``kind`` should be either ``"year"``, ``"month"``, ``"week"``, ``"day"``,
``"hour"``, ``"minute"``, or ``"second"``. Each :class:`datetime.datetime`
object in the result list is "truncated" to the given ``kind``.

``order``, which defaults to ``'ASC'``, should be either ``'ASC'`` or
``'DESC'``. This specifies how to order the results.

``tzinfo`` defines the time zone to which datetimes are converted prior to
truncation. Indeed, a given datetime has different representations depending
on the time zone in use. This parameter must be a :class:`datetime.tzinfo`
object. If it's ``None``, Django uses the :ref:`current time zone
<default-current-time-zone>`. It has no effect when :setting:`USE_TZ` is
``False``.

``is_dst`` indicates whether or not ``pytz`` should interpret nonexistent and
ambiguous datetimes in daylight saving time. By default (when ``is_dst=None``),
``pytz`` raises an exception for such datetimes.

.. versionadded:: 3.1

    The ``is_dst`` parameter was added.
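
Usage mirrors :meth:`dates`. For example, assuming a hypothetical
``DateTimeField`` named ``created`` on ``Entry`` (the return values below are
illustrative and shown without time zone information)::

    >>> Entry.objects.datetimes('created', 'hour')
    [datetime.datetime(2005, 2, 20, 3, 0), datetime.datetime(2005, 3, 20, 14, 0)]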

.. _database-time-zone-definitions:

.. note::

    This function performs time zone conversions directly in the database.
    As a consequence, your database must be able to interpret the value of
    ``tzinfo.tzname(None)``. This translates into the following requirements:

    - SQLite: no requirements. Conversions are performed in Python with pytz_
      (installed when you install Django).
    - PostgreSQL: no requirements (see `Time Zones`_).
    - Oracle: no requirements (see `Choosing a Time Zone File`_).
    - MySQL: load the time zone tables with `mysql_tzinfo_to_sql`_.

    .. _pytz: http://pytz.sourceforge.net/
    .. _Time Zones: https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES
    .. _Choosing a Time Zone File: https://docs.oracle.com/en/database/oracle/
       oracle-database/18/nlspg/datetime-data-types-and-time-zone-support.html
       #GUID-805AB986-DE12-4FEA-AF56-5AABCD2132DF
    .. _mysql_tzinfo_to_sql: https://dev.mysql.com/doc/refman/en/mysql-tzinfo-to-sql.html

``none()``
~~~~~~~~~~

.. method:: none()

Calling ``none()`` will create a queryset that never returns any objects and no
query will be executed when accessing the results. A ``qs.none()`` queryset
is an instance of ``EmptyQuerySet``.

Examples::

    >>> Entry.objects.none()
    <QuerySet []>
    >>> from django.db.models.query import EmptyQuerySet
    >>> isinstance(Entry.objects.none(), EmptyQuerySet)
    True
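
This is handy when a method has to return an empty result without hitting the
database. A minimal sketch, using a hypothetical ``visible_entries()``
helper::

    def visible_entries(user):
        if not user.is_authenticated:
            # Anonymous users see nothing; no query is ever executed.
            return Entry.objects.none()
        return Entry.objects.all()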

``all()``
~~~~~~~~~

.. method:: all()

Returns a *copy* of the current ``QuerySet`` (or ``QuerySet`` subclass). This
can be useful in situations where you might want to pass in either a model
manager or a ``QuerySet`` and do further filtering on the result. After calling
``all()`` on either object, you'll definitely have a ``QuerySet`` to work with.

When a ``QuerySet`` is :ref:`evaluated <when-querysets-are-evaluated>`, it
typically caches its results. If the data in the database might have changed
since a ``QuerySet`` was evaluated, you can get updated results for the same
query by calling ``all()`` on a previously evaluated ``QuerySet``.
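
For example (a sketch; the counts are illustrative and assume a row was added
between the two calls)::

    >>> qs = Entry.objects.all()
    >>> len(qs)          # Evaluates the queryset and caches its results.
    5
    >>> len(qs.all())    # Runs a fresh query instead of reusing the cache.
    6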

``union()``
~~~~~~~~~~~

.. method:: union(*other_qs, all=False)

Uses SQL's ``UNION`` operator to combine the results of two or more
``QuerySet``\s. For example::

    >>> qs1.union(qs2, qs3)

The ``UNION`` operator selects only distinct values by default. To allow
duplicate values, use the ``all=True`` argument.
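
For example, to keep duplicate rows instead of collapsing them (a sketch)::

    >>> qs1.union(qs2, all=True)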

``union()``, ``intersection()``, and ``difference()`` return model instances
of the type of the first ``QuerySet`` even if the arguments are ``QuerySet``\s
of other models. Passing different models works as long as the ``SELECT`` list
is the same in all ``QuerySet``\s (at least the types; the names don't matter
as long as the types are in the same order). In such cases, you must use the
column names from the first ``QuerySet`` in ``QuerySet`` methods applied to
the resulting ``QuerySet``. For example::

    >>> qs1 = Author.objects.values_list('name')
    >>> qs2 = Entry.objects.values_list('headline')
    >>> qs1.union(qs2).order_by('name')

In addition, only ``LIMIT``, ``OFFSET``, ``COUNT(*)``, ``ORDER BY``, and
specifying columns (i.e. slicing, :meth:`count`, :meth:`order_by`, and
:meth:`values()`/:meth:`values_list()`) are allowed on the resulting
``QuerySet``. Further, databases place restrictions on what operations are
allowed in the combined queries. For example, most databases don't allow
``LIMIT`` or ``OFFSET`` in the combined queries.

``intersection()``
~~~~~~~~~~~~~~~~~~

.. method:: intersection(*other_qs)

Uses SQL's ``INTERSECT`` operator to return the shared elements of two or more
``QuerySet``\s. For example::

    >>> qs1.intersection(qs2, qs3)

See :meth:`union` for some restrictions.

``difference()``
~~~~~~~~~~~~~~~~

.. method:: difference(*other_qs)

Uses SQL's ``EXCEPT`` operator to keep only elements present in the
``QuerySet`` but not in some other ``QuerySet``\s. For example::

    >>> qs1.difference(qs2, qs3)

See :meth:`union` for some restrictions.
|
|
|
|
|
|
2016-01-25 05:26:11 +08:00
|
|
|
|
``select_related()``
|
|
|
|
|
~~~~~~~~~~~~~~~~~~~~
|
2008-08-24 06:25:40 +08:00
|
|
|
|
|
2013-10-15 20:15:02 +08:00
|
|
|
|
.. method:: select_related(*fields)
|
2010-05-09 13:48:45 +08:00
|
|
|
|
|
2013-10-15 20:15:02 +08:00
|
|
|
|
Returns a ``QuerySet`` that will "follow" foreign-key relationships, selecting
|
|
|
|
|
additional related-object data when it executes its query. This is a
performance booster which results in a single more complex query but means
later use of foreign-key relationships won't require database queries.

The following examples illustrate the difference between plain lookups and
``select_related()`` lookups. Here's a standard lookup::

    # Hits the database.
    e = Entry.objects.get(id=5)

    # Hits the database again to get the related Blog object.
    b = e.blog

And here's a ``select_related`` lookup::

    # Hits the database.
    e = Entry.objects.select_related('blog').get(id=5)

    # Doesn't hit the database, because e.blog has been prepopulated
    # in the previous query.
    b = e.blog

You can use ``select_related()`` with any queryset of objects::

    from django.utils import timezone

    # Find all the blogs with entries scheduled to be published in the future.
    blogs = set()

    for e in Entry.objects.filter(pub_date__gt=timezone.now()).select_related('blog'):
        # Without select_related(), this would make a database query for each
        # loop iteration in order to fetch the related blog for each entry.
        blogs.add(e.blog)

The order of ``filter()`` and ``select_related()`` chaining isn't important.
These querysets are equivalent::

    Entry.objects.filter(pub_date__gt=timezone.now()).select_related('blog')
    Entry.objects.select_related('blog').filter(pub_date__gt=timezone.now())

You can follow foreign keys in a similar way to querying them. If you have the
following models::

    from django.db import models

    class City(models.Model):
        # ...
        pass

    class Person(models.Model):
        # ...
        hometown = models.ForeignKey(
            City,
            on_delete=models.SET_NULL,
            blank=True,
            null=True,
        )

    class Book(models.Model):
        # ...
        author = models.ForeignKey(Person, on_delete=models.CASCADE)

... then a call to ``Book.objects.select_related('author__hometown').get(id=4)``
will cache the related ``Person`` *and* the related ``City``::

    # Hits the database with joins to the author and hometown tables.
    b = Book.objects.select_related('author__hometown').get(id=4)
    p = b.author         # Doesn't hit the database.
    c = p.hometown       # Doesn't hit the database.

    # Without select_related()...
    b = Book.objects.get(id=4)  # Hits the database.
    p = b.author         # Hits the database.
    c = p.hometown       # Hits the database.

You can refer to any :class:`~django.db.models.ForeignKey` or
:class:`~django.db.models.OneToOneField` relation in the list of fields
passed to ``select_related()``.

You can also refer to the reverse direction of a
:class:`~django.db.models.OneToOneField` in the list of fields passed to
``select_related`` — that is, you can traverse a
:class:`~django.db.models.OneToOneField` back to the object on which the field
is defined. Instead of specifying the field name, use the :attr:`related_name
<django.db.models.ForeignKey.related_name>` for the field on the related object.
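
As a rough illustration (a minimal sketch; the ``Place``/``Restaurant`` models
and the ``related_name='restaurant'`` value here are assumptions made for this
example, not part of the reference models), traversing that reverse direction
might look like this::

    class Place(models.Model):
        name = models.CharField(max_length=50)

    class Restaurant(models.Model):
        place = models.OneToOneField(
            Place,
            on_delete=models.CASCADE,
            related_name='restaurant',
        )

    # Follows the OneToOneField backwards, from Place to Restaurant,
    # using the related_name rather than a field name on Place.
    place = Place.objects.select_related('restaurant').get(id=1)
    r = place.restaurant  # Doesn't hit the database.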

There may be some situations where you wish to call ``select_related()`` with a
lot of related objects, or where you don't know all of the relations. In these
cases it is possible to call ``select_related()`` with no arguments. This will
follow all non-null foreign keys it can find; nullable foreign keys must be
specified explicitly. This is not recommended in most cases as it is likely to
make the underlying query more complex, and return more data, than is actually
needed.

If you need to clear the list of related fields added by past calls of
``select_related`` on a ``QuerySet``, you can pass ``None`` as a parameter::

    >>> without_relations = queryset.select_related(None)

Chaining ``select_related`` calls works in a similar way to other methods:
``select_related('foo', 'bar')`` is equivalent to
``select_related('foo').select_related('bar')``.
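
For instance, with the ``Book`` models above, these two querysets request the
same related data (a small sketch to make the equivalence concrete, not an
additional rule)::

    # Both follow author and author__hometown in the same single query.
    Book.objects.select_related('author', 'author__hometown')
    Book.objects.select_related('author').select_related('author__hometown')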

``prefetch_related()``
~~~~~~~~~~~~~~~~~~~~~~

.. method:: prefetch_related(*lookups)

Returns a ``QuerySet`` that will automatically retrieve, in a single batch,
related objects for each of the specified lookups.

This has a similar purpose to ``select_related``, in that both are designed to
stop the deluge of database queries that is caused by accessing related
objects, but the strategy is quite different.

``select_related`` works by creating an SQL join and including the fields of
the related object in the ``SELECT`` statement. For this reason,
``select_related`` gets the related objects in the same database query.
However, to avoid the much larger result set that would result from joining
across a 'many' relationship, ``select_related`` is limited to single-valued
relationships: foreign key and one-to-one.

``prefetch_related``, on the other hand, does a separate lookup for each
relationship, and does the 'joining' in Python. This allows it to prefetch
many-to-many and many-to-one objects, which cannot be done using
``select_related``, in addition to the foreign key and one-to-one
relationships that are supported by ``select_related``. It also supports
prefetching of :class:`~django.contrib.contenttypes.fields.GenericRelation`
and :class:`~django.contrib.contenttypes.fields.GenericForeignKey`; however,
it must be restricted to a homogeneous set of results. For example,
prefetching objects referenced by a ``GenericForeignKey`` is only supported if
the query is restricted to one ``ContentType``.

For example, suppose you have these models::

    from django.db import models

    class Topping(models.Model):
        name = models.CharField(max_length=30)

    class Pizza(models.Model):
        name = models.CharField(max_length=50)
        toppings = models.ManyToManyField(Topping)

        def __str__(self):
            return "%s (%s)" % (
                self.name,
                ", ".join(topping.name for topping in self.toppings.all()),
            )

and run::

    >>> Pizza.objects.all()
    ["Hawaiian (ham, pineapple)", "Seafood (prawns, smoked salmon)"...

The problem with this is that every time ``Pizza.__str__()`` asks for
``self.toppings.all()`` it has to query the database, so
``Pizza.objects.all()`` will run a query on the Toppings table for **every**
item in the Pizza ``QuerySet``.

We can reduce to just two queries using ``prefetch_related``:

    >>> Pizza.objects.all().prefetch_related('toppings')

This implies a ``self.toppings.all()`` for each ``Pizza``; now each time
``self.toppings.all()`` is called, instead of having to go to the database for
the items, it will find them in a prefetched ``QuerySet`` cache that was
populated in a single query.

That is, all the relevant toppings will have been fetched in a single query,
and used to make ``QuerySets`` that have a pre-filled cache of the relevant
results; these ``QuerySets`` are then used in the ``self.toppings.all()``
calls.

The additional queries in ``prefetch_related()`` are executed after the
``QuerySet`` has begun to be evaluated and the primary query has been
executed.

If you have an iterable of model instances, you can prefetch related
attributes on those instances using the
:func:`~django.db.models.prefetch_related_objects` function.
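
For instance, with the ``Pizza`` model above, a minimal sketch of this might
look like the following (the list of instances is just illustrative)::

    from django.db.models import prefetch_related_objects

    # Plain model instances, perhaps gathered from several earlier queries.
    pizzas = list(Pizza.objects.all())

    # One extra query fetches the toppings for every instance in the list.
    prefetch_related_objects(pizzas, 'toppings')

    # No further queries: each instance now has a prefetched toppings cache.
    first_toppings = list(pizzas[0].toppings.all())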

Note that the result cache of the primary ``QuerySet`` and all specified related
objects will then be fully loaded into memory. This changes the typical
behavior of ``QuerySets``, which normally try to avoid loading all objects into
memory before they are needed, even after a query has been executed in the
database.

.. note::

    Remember that, as always with ``QuerySets``, any subsequent chained methods
    which imply a different database query will ignore previously cached
    results, and retrieve data using a fresh database query. So, if you write
    the following:

        >>> pizzas = Pizza.objects.prefetch_related('toppings')
        >>> [list(pizza.toppings.filter(spicy=True)) for pizza in pizzas]

    ...then the fact that ``pizza.toppings.all()`` has been prefetched will not
    help you. The ``prefetch_related('toppings')`` implied
    ``pizza.toppings.all()``, but ``pizza.toppings.filter()`` is a new and
    different query. The prefetched cache can't help here; in fact it hurts
    performance, since you have done a database query that you haven't used. So
    use this feature with caution!

    Also, if you call the database-altering methods
    :meth:`~django.db.models.fields.related.RelatedManager.add`,
    :meth:`~django.db.models.fields.related.RelatedManager.remove`,
    :meth:`~django.db.models.fields.related.RelatedManager.clear` or
    :meth:`~django.db.models.fields.related.RelatedManager.set`, on
    :class:`related managers <django.db.models.fields.related.RelatedManager>`,
    any prefetched cache for the relation will be cleared.

You can also use the normal join syntax to do related fields of related
fields. Suppose we have an additional model to the example above::

    class Restaurant(models.Model):
        pizzas = models.ManyToManyField(Pizza, related_name='restaurants')
        best_pizza = models.ForeignKey(Pizza, related_name='championed_by', on_delete=models.CASCADE)

The following are all legal:

    >>> Restaurant.objects.prefetch_related('pizzas__toppings')

This will prefetch all pizzas belonging to restaurants, and all toppings
belonging to those pizzas. This will result in a total of 3 database queries -
one for the restaurants, one for the pizzas, and one for the toppings.

    >>> Restaurant.objects.prefetch_related('best_pizza__toppings')

This will fetch the best pizza and all the toppings for the best pizza for each
restaurant. This will be done in 3 database queries - one for the restaurants,
one for the 'best pizzas', and one for the toppings.

The ``best_pizza`` relationship could also be fetched using ``select_related``
to reduce the query count to 2::

    >>> Restaurant.objects.select_related('best_pizza').prefetch_related('best_pizza__toppings')

Since the prefetch is executed after the main query (which includes the joins
needed by ``select_related``), it is able to detect that the ``best_pizza``
objects have already been fetched, and it will skip fetching them again.

Chaining ``prefetch_related`` calls will accumulate the lookups that are
prefetched. To clear any ``prefetch_related`` behavior, pass ``None`` as a
parameter:

    >>> non_prefetched = qs.prefetch_related(None)
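
As a small illustration of that accumulation (a sketch using the models above,
not an additional rule), these two querysets prefetch the same lookups::

    >>> Restaurant.objects.prefetch_related('pizzas').prefetch_related('best_pizza__toppings')
    >>> Restaurant.objects.prefetch_related('pizzas', 'best_pizza__toppings')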

One difference to note when using ``prefetch_related`` is that objects created
by a query can be shared between the different objects that they are related
to, i.e. a single Python model instance can appear at more than one point in
the tree of objects that are returned. This will normally happen with foreign
key relationships. Typically this behavior will not be a problem, and will in
fact save both memory and CPU time.

While ``prefetch_related`` supports prefetching ``GenericForeignKey``
relationships, the number of queries will depend on the data. Since a
``GenericForeignKey`` can reference data in multiple tables, one query per
table referenced is needed, rather than one query for all the items. There
could be additional queries on the ``ContentType`` table if the relevant rows
have not already been fetched.

``prefetch_related`` in most cases will be implemented using an SQL query that
uses the 'IN' operator. This means that for a large ``QuerySet`` a large 'IN'
clause could be generated, which, depending on the database, might have
performance problems of its own when it comes to parsing or executing the SQL
query. Always profile for your use case!

Note that if you use ``iterator()`` to run the query, ``prefetch_related()``
calls will be ignored since these two optimizations do not make sense together.

You can use the :class:`~django.db.models.Prefetch` object to further control
the prefetch operation.

In its simplest form ``Prefetch`` is equivalent to the traditional string-based
lookups:

    >>> from django.db.models import Prefetch
    >>> Restaurant.objects.prefetch_related(Prefetch('pizzas__toppings'))

You can provide a custom queryset with the optional ``queryset`` argument.
This can be used to change the default ordering of the queryset:

    >>> Restaurant.objects.prefetch_related(
    ...     Prefetch('pizzas__toppings', queryset=Topping.objects.order_by('name')))

Or to call :meth:`~django.db.models.query.QuerySet.select_related()` when
applicable to reduce the number of queries even further:

    >>> Pizza.objects.prefetch_related(
    ...     Prefetch('restaurants', queryset=Restaurant.objects.select_related('best_pizza')))

You can also assign the prefetched result to a custom attribute with the
optional ``to_attr`` argument. The result will be stored directly in a list.

This allows prefetching the same relation multiple times with a different
``QuerySet``; for instance:

    >>> vegetarian_pizzas = Pizza.objects.filter(vegetarian=True)
    >>> Restaurant.objects.prefetch_related(
    ...     Prefetch('pizzas', to_attr='menu'),
    ...     Prefetch('pizzas', queryset=vegetarian_pizzas, to_attr='vegetarian_menu'))

Lookups created with custom ``to_attr`` can still be traversed as usual by
other lookups:

    >>> vegetarian_pizzas = Pizza.objects.filter(vegetarian=True)
    >>> Restaurant.objects.prefetch_related(
    ...     Prefetch('pizzas', queryset=vegetarian_pizzas, to_attr='vegetarian_menu'),
    ...     'vegetarian_menu__toppings')

Using ``to_attr`` is recommended when filtering down the prefetch result as it
is less ambiguous than storing a filtered result in the related manager's
cache:

    >>> queryset = Pizza.objects.filter(vegetarian=True)
    >>>
    >>> # Recommended:
    >>> restaurants = Restaurant.objects.prefetch_related(
    ...     Prefetch('pizzas', queryset=queryset, to_attr='vegetarian_pizzas'))
    >>> vegetarian_pizzas = restaurants[0].vegetarian_pizzas
    >>>
    >>> # Not recommended:
    >>> restaurants = Restaurant.objects.prefetch_related(
    ...     Prefetch('pizzas', queryset=queryset))
    >>> vegetarian_pizzas = restaurants[0].pizzas.all()

Custom prefetching also works with single related relations like
forward ``ForeignKey`` or ``OneToOneField``. Generally you'll want to use
:meth:`select_related()` for these relations, but there are a number of cases
where prefetching with a custom ``QuerySet`` is useful:

* You want to use a ``QuerySet`` that performs further prefetching
  on related models.

* You want to prefetch only a subset of the related objects.

* You want to use performance optimization techniques like
  :meth:`deferred fields <defer()>`:

      >>> queryset = Pizza.objects.only('name')
      >>>
      >>> restaurants = Restaurant.objects.prefetch_related(
      ...     Prefetch('best_pizza', queryset=queryset))

.. note::

    The ordering of lookups matters.

    Take the following examples:

        >>> prefetch_related('pizzas__toppings', 'pizzas')

    This works even though it's unordered because ``'pizzas__toppings'``
    already contains all the needed information, therefore the second argument
    ``'pizzas'`` is actually redundant.

        >>> prefetch_related('pizzas__toppings', Prefetch('pizzas', queryset=Pizza.objects.all()))

    This will raise a ``ValueError`` because of the attempt to redefine the
    queryset of a previously seen lookup. Note that an implicit queryset was
    created to traverse ``'pizzas'`` as part of the ``'pizzas__toppings'``
    lookup.

        >>> prefetch_related('pizza_list__toppings', Prefetch('pizzas', to_attr='pizza_list'))

    This will trigger an ``AttributeError`` because ``'pizza_list'`` doesn't
    exist yet when ``'pizza_list__toppings'`` is being processed.

    This consideration is not limited to the use of ``Prefetch`` objects. Some
    advanced techniques may require that the lookups be performed in a
    specific order to avoid creating extra queries; therefore it's recommended
    to always carefully order ``prefetch_related`` arguments.

``extra()``
~~~~~~~~~~~

.. method:: extra(select=None, where=None, params=None, tables=None, order_by=None, select_params=None)

Sometimes, the Django query syntax by itself can't easily express a complex
``WHERE`` clause. For these edge cases, Django provides the ``extra()``
``QuerySet`` modifier — a hook for injecting specific clauses into the SQL
generated by a ``QuerySet``.

.. admonition:: Use this method as a last resort

    This is an old API that we aim to deprecate at some point in the future.
    Use it only if you cannot express your query using other queryset methods.
    If you do need to use it, please `file a ticket
    <https://code.djangoproject.com/newticket>`_ using the `QuerySet.extra
    keyword <https://code.djangoproject.com/query?status=assigned&status=new&keywords=~QuerySet.extra>`_
    with your use case (please check the list of existing tickets first) so
    that we can enhance the QuerySet API to allow removing ``extra()``. We are
    no longer improving or fixing bugs for this method.

For example, this use of ``extra()``::

    >>> qs.extra(
    ...     select={'val': "select col from sometable where othercol = %s"},
    ...     select_params=(someparam,),
    ... )

is equivalent to::

    >>> qs.annotate(val=RawSQL("select col from sometable where othercol = %s", (someparam,)))

The main benefit of using :class:`~django.db.models.expressions.RawSQL` is
that you can set ``output_field`` if needed. The main downside is that if
you refer to some table alias of the queryset in the raw SQL, then it is
possible that Django might change that alias (for example, when the
queryset is used as a subquery in yet another query).
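
For instance, still using the placeholder table, column, and parameter names
from the example above, a minimal sketch of setting ``output_field`` could
look like this::

    >>> from django.db.models import IntegerField
    >>> from django.db.models.expressions import RawSQL
    >>> qs.annotate(val=RawSQL(
    ...     "select col from sometable where othercol = %s",
    ...     (someparam,),
    ...     output_field=IntegerField(),
    ... ))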

.. warning::

    You should be very careful whenever you use ``extra()``. Every time you use
    it, you should escape any parameters that the user can control by using
    ``params`` in order to protect against SQL injection attacks.

    You also must not quote placeholders in the SQL string. This example is
    vulnerable to SQL injection because of the quotes around ``%s``:

    .. code-block:: sql

        SELECT col FROM sometable WHERE othercol = '%s'  # unsafe!

    You can read more about how Django's :ref:`SQL injection protection
    <sql-injection-protection>` works.

By definition, these extra lookups may not be portable to different database
engines (because you're explicitly writing SQL code) and violate the DRY
principle, so you should avoid them if possible.

Specify one or more of ``params``, ``select``, ``where`` or ``tables``. None
of the arguments is required, but you should use at least one of them.

* ``select``

  The ``select`` argument lets you put extra fields in the ``SELECT``
  clause. It should be a dictionary mapping attribute names to SQL
  clauses to use to calculate that attribute.

  Example::

      Entry.objects.extra(select={'is_recent': "pub_date > '2006-01-01'"})

  As a result, each ``Entry`` object will have an extra attribute,
  ``is_recent``, a boolean representing whether the entry's ``pub_date``
  is greater than Jan. 1, 2006.

  Django inserts the given SQL snippet directly into the ``SELECT``
  statement, so the resulting SQL of the above example would be something
  like:

  .. code-block:: sql

      SELECT blog_entry.*, (pub_date > '2006-01-01') AS is_recent
      FROM blog_entry;

  The next example is more advanced; it does a subquery to give each
  resulting ``Blog`` object an ``entry_count`` attribute, an integer count
  of associated ``Entry`` objects::

      Blog.objects.extra(
          select={
              'entry_count': 'SELECT COUNT(*) FROM blog_entry WHERE blog_entry.blog_id = blog_blog.id'
          },
      )

  In this particular case, we're exploiting the fact that the query will
  already contain the ``blog_blog`` table in its ``FROM`` clause.

  The resulting SQL of the above example would be:

  .. code-block:: sql

      SELECT blog_blog.*, (SELECT COUNT(*) FROM blog_entry WHERE blog_entry.blog_id = blog_blog.id) AS entry_count
      FROM blog_blog;

  Note that the parentheses required by most database engines around
  subqueries are not required in Django's ``select`` clauses. Also note
  that some database backends, such as some MySQL versions, don't support
  subqueries.

  In some rare cases, you might wish to pass parameters to the SQL
  fragments in ``extra(select=...)``. For this purpose, use the
  ``select_params`` parameter.

  This will work, for example::

      Blog.objects.extra(
          select={'a': '%s', 'b': '%s'},
          select_params=('one', 'two'),
      )

  If you need to use a literal ``%s`` inside your select string, use
  the sequence ``%%s``.

* ``where`` / ``tables``

  You can define explicit SQL ``WHERE`` clauses — perhaps to perform
  non-explicit joins — by using ``where``. You can manually add tables to
  the SQL ``FROM`` clause by using ``tables``.

  ``where`` and ``tables`` both take a list of strings. All ``where``
  parameters are "AND"ed to any other search criteria.

  Example::

      Entry.objects.extra(where=["foo='a' OR bar = 'a'", "baz = 'a'"])

  ...translates (roughly) into the following SQL:

  .. code-block:: sql

      SELECT * FROM blog_entry WHERE (foo='a' OR bar='a') AND (baz='a')

  Be careful when using the ``tables`` parameter if you're specifying
  tables that are already used in the query. When you add extra tables
  via the ``tables`` parameter, Django assumes you want that table
  included an extra time, if it is already included. That creates a
  problem, since the table name will then be given an alias. If a table
  appears multiple times in an SQL statement, the second and subsequent
  occurrences must use aliases so the database can tell them apart. If
  you're referring to the extra table you added in the extra ``where``
  parameter, this is going to cause errors.

  Normally you'll only be adding extra tables that don't already appear
  in the query. However, if the case outlined above does occur, there are
  a few solutions. First, see if you can get by without including the
  extra table and use the one already in the query. If that isn't
  possible, put your ``extra()`` call at the front of the queryset
  construction so that your table is the first use of that table.
  Finally, if all else fails, look at the query produced and rewrite your
  ``where`` addition to use the alias given to your extra table. The
  alias will be the same each time you construct the queryset in the same
  way, so you can rely upon the alias name to not change.

* ``order_by``

  If you need to order the resulting queryset using some of the new
  fields or tables you have included via ``extra()``, use the ``order_by``
  parameter to ``extra()`` and pass in a sequence of strings. These
  strings should be either model fields (as in the normal
  :meth:`order_by()` method on querysets), of the form
  ``table_name.column_name``, or an alias for a column that you specified
  in the ``select`` parameter to ``extra()``.

  For example::

      q = Entry.objects.extra(select={'is_recent': "pub_date > '2006-01-01'"})
      q = q.extra(order_by=['-is_recent'])

  This would sort all the items for which ``is_recent`` is true to the
  front of the result set (``True`` sorts before ``False`` in a
  descending ordering).

  This shows, by the way, that you can make multiple calls to ``extra()``
  and it will behave as you expect (adding new constraints each time).

* ``params``

  The ``where`` parameter described above may use standard Python
  database string placeholders — ``'%s'`` to indicate parameters the
  database engine should automatically quote. The ``params`` argument is
  a list of any extra parameters to be substituted.

  Example::

      Entry.objects.extra(where=['headline=%s'], params=['Lennon'])

  Always use ``params`` instead of embedding values directly into
  ``where`` because ``params`` will ensure values are quoted correctly
  according to your particular backend. For example, quotes will be
  escaped correctly.

  Bad::

      Entry.objects.extra(where=["headline='Lennon'"])

  Good::

      Entry.objects.extra(where=['headline=%s'], params=['Lennon'])

.. warning::

    If you are performing queries on MySQL, note that MySQL's silent type
    coercion may cause unexpected results when mixing types. If you query on a
    string type column, but with an integer value, MySQL will coerce the types
    of all values in the table to an integer before performing the comparison.
    For example, if your table contains the values ``'abc'``, ``'def'`` and
    you query for ``WHERE mycolumn=0``, both rows will match. To prevent this,
    perform the correct typecasting before using the value in a query.

``defer()``
~~~~~~~~~~~

.. method:: defer(*fields)

In some complex data-modeling situations, your models might contain a lot of
fields, some of which could contain a lot of data (for example, text fields),
or require expensive processing to convert them to Python objects. If you are
using the results of a queryset in some situation where you don't know
if you need those particular fields when you initially fetch the data, you can
tell Django not to retrieve them from the database.

This is done by passing the names of the fields to not load to ``defer()``::

    Entry.objects.defer("headline", "body")

A queryset that has deferred fields will still return model instances. Each
deferred field will be retrieved from the database if you access that field
(one at a time, not all the deferred fields at once).

You can make multiple calls to ``defer()``. Each call adds new fields to the
deferred set::

    # Defers both the body and headline fields.
    Entry.objects.defer("body").filter(rating=5).defer("headline")

The order in which fields are added to the deferred set does not matter.
Calling ``defer()`` with a field name that has already been deferred is
harmless (the field will still be deferred).

You can defer loading of fields in related models (if the related models are
loaded via :meth:`select_related()`) by using the standard double-underscore
notation to separate related fields::

    Blog.objects.select_related().defer("entry__headline", "entry__body")

If you want to clear the set of deferred fields, pass ``None`` as a parameter
to ``defer()``::

    # Load all fields immediately.
    my_queryset.defer(None)

Some fields in a model won't be deferred, even if you ask for them. You can
never defer the loading of the primary key. If you are using
:meth:`select_related()` to retrieve related models, you shouldn't defer the
loading of the field that connects from the primary model to the related
one; doing so will result in an error.

.. note::

    The ``defer()`` method and its cousin, :meth:`only()` (below), are only for
    advanced use-cases. They provide an optimization for when you have analyzed
    your queries closely and understand *exactly* what information you need and
    have measured that the difference between returning the fields you need and
    the full set of fields for the model will be significant.

    Even if you think you are in the advanced use-case situation, **only use
    defer() when you cannot, at queryset load time, determine if you will need
    the extra fields or not**. If you are frequently loading and using a
    particular subset of your data, the best choice you can make is to
    normalize your models and put the non-loaded data into a separate model
    (and database table). If the columns *must* stay in the one table for some
    reason, create a model with ``Meta.managed = False`` (see the
    :attr:`managed attribute <django.db.models.Options.managed>` documentation)
    containing just the fields you normally need to load and use that where you
    might otherwise call ``defer()``. This makes your code more explicit to the
    reader, is slightly faster and consumes a little less memory in the Python
    process.

    For example, both of these models use the same underlying database table::

        class CommonlyUsedModel(models.Model):
            f1 = models.CharField(max_length=10)

            class Meta:
                managed = False
                db_table = 'app_largetable'

        class ManagedModel(models.Model):
            f1 = models.CharField(max_length=10)
            f2 = models.CharField(max_length=10)

            class Meta:
                db_table = 'app_largetable'

        # Two equivalent QuerySets:
        CommonlyUsedModel.objects.all()
        ManagedModel.objects.all().defer('f2')

    If many fields need to be duplicated in the unmanaged model, it may be best
    to create an abstract model with the shared fields and then have the
    unmanaged and managed models inherit from the abstract model.

.. note::

    When calling :meth:`~django.db.models.Model.save()` for instances with
    deferred fields, only the loaded fields will be saved. See
    :meth:`~django.db.models.Model.save()` for more details.

``only()``
~~~~~~~~~~

.. method:: only(*fields)

The ``only()`` method is more or less the opposite of :meth:`defer()`. You call
it with the fields that should *not* be deferred when retrieving a model. If
you have a model where almost all the fields need to be deferred, using
``only()`` to specify the complementary set of fields can result in simpler
code.

Suppose you have a model with fields ``name``, ``age`` and ``biography``. The
following two querysets are the same, in terms of deferred fields::

    Person.objects.defer("age", "biography")
    Person.objects.only("name")

Whenever you call ``only()`` it *replaces* the set of fields to load
immediately. The method's name is mnemonic: **only** those fields are loaded
immediately; the remainder are deferred. Thus, successive calls to ``only()``
result in only the final fields being considered::

    # This will defer all fields except the headline.
    Entry.objects.only("body", "rating").only("headline")

Since ``defer()`` acts incrementally (adding fields to the deferred list), you
can combine calls to ``only()`` and ``defer()`` and things will behave
logically::

    # Final result is that everything except "headline" is deferred.
    Entry.objects.only("headline", "body").defer("body")

    # Final result loads headline and body immediately (only() replaces any
    # existing set of fields).
    Entry.objects.defer("body").only("headline", "body")

All of the cautions in the note for the :meth:`defer` documentation apply to
``only()`` as well. Use it cautiously and only after exhausting your other
options.

Using :meth:`only` and omitting a field requested using :meth:`select_related`
is an error as well.

.. note::

    When calling :meth:`~django.db.models.Model.save()` for instances with
    deferred fields, only the loaded fields will be saved. See
    :meth:`~django.db.models.Model.save()` for more details.

``using()``
~~~~~~~~~~~

.. method:: using(alias)

This method is for controlling which database the ``QuerySet`` will be
evaluated against if you are using more than one database. The only argument
this method takes is the alias of a database, as defined in
:setting:`DATABASES`.

For example::

    # Queries the database with the 'default' alias.
    >>> Entry.objects.all()

    # Queries the database with the 'backup' alias.
    >>> Entry.objects.using('backup')

``select_for_update()``
~~~~~~~~~~~~~~~~~~~~~~~

.. method:: select_for_update(nowait=False, skip_locked=False, of=(), no_key=False)

Returns a queryset that will lock rows until the end of the transaction,
generating a ``SELECT ... FOR UPDATE`` SQL statement on supported databases.

For example::

    from django.db import transaction

    entries = Entry.objects.select_for_update().filter(author=request.user)
    with transaction.atomic():
        for entry in entries:
            ...

When the queryset is evaluated (``for entry in entries`` in this case), all
matched entries will be locked until the end of the transaction block, meaning
that other transactions will be prevented from changing or acquiring locks on
them.

Usually, if another transaction has already acquired a lock on one of the
selected rows, the query will block until the lock is released. If this is
not the behavior you want, call ``select_for_update(nowait=True)``. This will
make the call non-blocking. If a conflicting lock is already acquired by
another transaction, :exc:`~django.db.DatabaseError` will be raised when the
queryset is evaluated. You can also ignore locked rows by using
``select_for_update(skip_locked=True)`` instead. The ``nowait`` and
``skip_locked`` options are mutually exclusive, and attempts to call
``select_for_update()`` with both options enabled will result in a
:exc:`ValueError`.
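
A minimal sketch of handling the non-blocking failure mode (the view context
and ``request.user`` filter are carried over from the example above) might
look like this::

    from django.db import DatabaseError, transaction

    try:
        with transaction.atomic():
            # Evaluating the queryset raises DatabaseError if another
            # transaction already holds a conflicting lock.
            entries = list(
                Entry.objects.select_for_update(nowait=True).filter(author=request.user)
            )
            # ... work with the locked rows here ...
    except DatabaseError:
        # Fall back, retry later, or report the conflict to the user.
        ...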

By default, ``select_for_update()`` locks all rows that are selected by the
query. For example, rows of related objects specified in :meth:`select_related`
are locked in addition to rows of the queryset's model. If this isn't desired,
specify the related objects you want to lock in ``select_for_update(of=(...))``
using the same fields syntax as :meth:`select_related`. Use the value ``'self'``
to refer to the queryset's model.
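
For instance, a sketch using the Weblog models (the choice of fields here is
only illustrative): the following locks the ``Entry`` rows but not the joined
``Blog`` rows::

    Entry.objects.select_related('blog').select_for_update(of=('self',))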

.. admonition:: Lock parent models in ``select_for_update(of=(...))``

    If you want to lock parent models when using :ref:`multi-table inheritance
    <multi-table-inheritance>`, you must specify parent link fields (by default
    ``<parent_model_name>_ptr``) in the ``of`` argument. For example::

        Restaurant.objects.select_for_update(of=('self', 'place_ptr'))

On PostgreSQL only, you can pass ``no_key=True`` in order to acquire a weaker
lock that still allows creating rows that merely reference locked rows
(through a foreign key, for example) whilst the lock is in place. The
PostgreSQL documentation has more details about `row-level lock modes
<https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-ROWS>`_.

You can't use ``select_for_update()`` on nullable relations::

    >>> Person.objects.select_related('hometown').select_for_update()
    Traceback (most recent call last):
    ...
    django.db.utils.NotSupportedError: FOR UPDATE cannot be applied to the nullable side of an outer join

To avoid that restriction, you can exclude null objects if you don't care about
them::

    >>> Person.objects.select_related('hometown').select_for_update().exclude(hometown=None)
    <QuerySet [<Person: ...>, ...]>

Currently, the ``postgresql``, ``oracle``, and ``mysql`` database
backends support ``select_for_update()``. However, MariaDB 10.3+ supports only
the ``nowait`` argument and MySQL 8.0.1+ supports the ``nowait`` and
``skip_locked`` arguments. MySQL and MariaDB don't support the ``of`` argument.
The ``no_key`` argument is supported only on PostgreSQL.

Passing ``nowait=True``, ``skip_locked=True``, ``no_key=True``, or ``of`` to
``select_for_update()`` using database backends that do not support these
options, such as MySQL, raises a :exc:`~django.db.NotSupportedError`. This
prevents code from unexpectedly blocking.

Evaluating a queryset with ``select_for_update()`` in autocommit mode on
backends which support ``SELECT ... FOR UPDATE`` is a
:exc:`~django.db.transaction.TransactionManagementError` error because the
rows are not locked in that case. If allowed, this would facilitate data
corruption and could easily be caused by calling code that expects to be run in
a transaction outside of one.

Using ``select_for_update()`` on backends which do not support
``SELECT ... FOR UPDATE`` (such as SQLite) will have no effect.
``SELECT ... FOR UPDATE`` will not be added to the query, and an error isn't
raised if ``select_for_update()`` is used in autocommit mode.

.. warning::

    Although ``select_for_update()`` normally fails in autocommit mode, since
    :class:`~django.test.TestCase` automatically wraps each test in a
    transaction, calling ``select_for_update()`` in a ``TestCase`` even outside
    an :func:`~django.db.transaction.atomic()` block will (perhaps unexpectedly)
    pass without raising a ``TransactionManagementError``. To properly test
    ``select_for_update()`` you should use
    :class:`~django.test.TransactionTestCase`.

.. admonition:: Certain expressions may not be supported

    PostgreSQL doesn't support ``select_for_update()`` with
    :class:`~django.db.models.expressions.Window` expressions.

.. versionchanged:: 3.2

    The ``no_key`` argument was added.

``raw()``
~~~~~~~~~

.. method:: raw(raw_query, params=None, translations=None)

Takes a raw SQL query, executes it, and returns a
``django.db.models.query.RawQuerySet`` instance. This ``RawQuerySet`` instance
can be iterated over just like a normal ``QuerySet`` to provide object
instances.
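
For instance, a minimal sketch (the ``myapp_person`` table name is an
assumption based on the default ``<app_label>_<model_name>`` naming)::

    >>> for p in Person.objects.raw('SELECT * FROM myapp_person'):
    ...     print(p)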

See :doc:`/topics/db/sql` for more information.

.. warning::

    ``raw()`` always triggers a new query and doesn't account for previous
    filtering. As such, it should generally be called from the ``Manager`` or
    from a fresh ``QuerySet`` instance.

Operators that return new ``QuerySet``\s
----------------------------------------

Combined querysets must use the same model.

AND (``&``)
~~~~~~~~~~~

Combines two ``QuerySet``\s using the SQL ``AND`` operator.

The following are equivalent::

    Model.objects.filter(x=1) & Model.objects.filter(y=2)
    Model.objects.filter(x=1, y=2)
    from django.db.models import Q
    Model.objects.filter(Q(x=1) & Q(y=2))

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE x=1 AND y=2

OR (``|``)
~~~~~~~~~~

Combines two ``QuerySet``\s using the SQL ``OR`` operator.

The following are equivalent::

    Model.objects.filter(x=1) | Model.objects.filter(y=2)
    from django.db.models import Q
    Model.objects.filter(Q(x=1) | Q(y=2))

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE x=1 OR y=2

Methods that do not return ``QuerySet``\s
-----------------------------------------

The following ``QuerySet`` methods evaluate the ``QuerySet`` and return
something *other than* a ``QuerySet``.

These methods do not use a cache (see :ref:`caching-and-querysets`). Rather,
they query the database each time they're called.

``get()``
~~~~~~~~~

.. method:: get(**kwargs)

Returns the object matching the given lookup parameters, which should be in
the format described in `Field lookups`_.

``get()`` raises :exc:`~django.core.exceptions.MultipleObjectsReturned` if more
than one object was found. The
:exc:`~django.core.exceptions.MultipleObjectsReturned` exception is an
attribute of the model class.

``get()`` raises a :exc:`~django.db.models.Model.DoesNotExist` exception if an
object wasn't found for the given parameters. This exception is an attribute
of the model class. Example::

    Entry.objects.get(id='foo')  # raises Entry.DoesNotExist

The :exc:`~django.db.models.Model.DoesNotExist` exception inherits from
:exc:`django.core.exceptions.ObjectDoesNotExist`, so you can target multiple
:exc:`~django.db.models.Model.DoesNotExist` exceptions. Example::

    from django.core.exceptions import ObjectDoesNotExist
    try:
        e = Entry.objects.get(id=3)
        b = Blog.objects.get(id=1)
    except ObjectDoesNotExist:
        print("Either the entry or blog doesn't exist.")

If you expect a queryset to return one row, you can use ``get()`` without any
arguments to return the object for that row::

    entry = Entry.objects.filter(...).exclude(...).get()

``create()``
~~~~~~~~~~~~

.. method:: create(**kwargs)

A convenience method for creating an object and saving it all in one step. Thus::

    p = Person.objects.create(first_name="Bruce", last_name="Springsteen")

and::

    p = Person(first_name="Bruce", last_name="Springsteen")
    p.save(force_insert=True)

are equivalent.

The :ref:`force_insert <ref-models-force-insert>` parameter is documented
elsewhere, but all it means is that a new object will always be created.
Normally you won't need to worry about this. However, if your model contains a
manual primary key value that you set and if that value already exists in the
database, a call to ``create()`` will fail with an
:exc:`~django.db.IntegrityError` since primary keys must be unique. Be
prepared to handle the exception if you are using manual primary keys.
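
A minimal sketch of that handling (the explicit, already-used ``id`` value is
made up for illustration)::

    from django.db import IntegrityError

    try:
        p = Person.objects.create(id=1, first_name="Bruce", last_name="Springsteen")
    except IntegrityError:
        # A Person with this primary key already exists; handle it here.
        ...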
``get_or_create()``
|
|
|
|
|
~~~~~~~~~~~~~~~~~~~
|
2008-08-24 06:25:40 +08:00
|
|
|
|
|
2013-05-18 19:49:06 +08:00
|
|
|
|
.. method:: get_or_create(defaults=None, **kwargs)
|
2010-05-09 13:48:45 +08:00
|
|
|
|
|
2013-07-03 19:59:35 +08:00
|
|
|
|
A convenience method for looking up an object with the given ``kwargs`` (may be
|
2013-05-25 04:36:09 +08:00
|
|
|
|
empty if your model has defaults for all fields), creating one if necessary.
|
|
|
|
|
|
2008-08-24 06:25:40 +08:00
|
|
|
|
Returns a tuple of ``(object, created)``, where ``object`` is the retrieved or
|
|
|
|
|
created object and ``created`` is a boolean specifying whether a new object was
|
|
|
|
|
created.
|
|
|
|
|
|
2019-05-17 18:23:10 +08:00
|
|
|
|
This is meant to prevent duplicate objects from being created when requests are
|
|
|
|
|
made in parallel, and as a shortcut to boilerplatish code. For example::
|
2008-08-24 06:25:40 +08:00
|
|
|
|
|
|
|
|
|
try:
|
|
|
|
|
obj = Person.objects.get(first_name='John', last_name='Lennon')
|
|
|
|
|
except Person.DoesNotExist:
|
|
|
|
|
obj = Person(first_name='John', last_name='Lennon', birthday=date(1940, 10, 9))
|
|
|
|
|
obj.save()
|
|
|
|
|
|
2019-05-17 18:23:10 +08:00
|
|
|
|
Here, with concurrent requests, multiple attempts to save a ``Person`` with
|
|
|
|
|
the same parameters may be made. To avoid this race condition, the above
|
|
|
|
|
example can be rewritten using ``get_or_create()`` like so::
|
2008-08-24 06:25:40 +08:00
|
|
|
|
|
2016-05-15 07:06:31 +08:00
|
|
|
|
obj, created = Person.objects.get_or_create(
|
|
|
|
|
first_name='John',
|
|
|
|
|
last_name='Lennon',
|
|
|
|
|
defaults={'birthday': date(1940, 10, 9)},
|
|
|
|
|
)
|
2008-08-24 06:25:40 +08:00
|
|
|
|
|
2011-08-27 10:56:18 +08:00
|
|
|
|
Any keyword arguments passed to ``get_or_create()`` — *except* an optional one
|
|
|
|
|
called ``defaults`` — will be used in a :meth:`get()` call. If an object is
|
2018-03-25 14:04:36 +08:00
|
|
|
|
found, ``get_or_create()`` returns a tuple of that object and ``False``.
|
|
|
|
|
|
2019-05-17 18:23:10 +08:00
|
|
|
|
.. warning::
|
|
|
|
|
|
|
|
|
|
This method is atomic assuming that the database enforces uniqueness of the
|
|
|
|
|
keyword arguments (see :attr:`~django.db.models.Field.unique` or
|
|
|
|
|
:attr:`~django.db.models.Options.unique_together`). If the fields used in the
|
|
|
|
|
keyword arguments do not have a uniqueness constraint, concurrent calls to
|
|
|
|
|
this method may result in multiple rows with the same parameters being
|
|
|
|
|
inserted.
You can specify more complex conditions for the retrieved object by chaining
``get_or_create()`` with ``filter()`` and using :class:`Q objects
<django.db.models.Q>`. For example, to retrieve Robert or Bob Marley if either
exists, and create the latter otherwise::

    from django.db.models import Q

    obj, created = Person.objects.filter(
        Q(first_name='Bob') | Q(first_name='Robert'),
    ).get_or_create(last_name='Marley', defaults={'first_name': 'Bob'})

If multiple objects are found, ``get_or_create()`` raises
:exc:`~django.core.exceptions.MultipleObjectsReturned`. If an object is *not*
found, ``get_or_create()`` will instantiate and save a new object, returning a
tuple of the new object and ``True``. The new object will be created roughly
according to this algorithm::

    params = {k: v for k, v in kwargs.items() if '__' not in k}
    params.update({k: v() if callable(v) else v for k, v in defaults.items()})
    obj = self.model(**params)
    obj.save()

In English, that means start with any non-``'defaults'`` keyword argument that
doesn't contain a double underscore (which would indicate a non-exact lookup).
Then add the contents of ``defaults``, overriding any keys if necessary, and
use the result as the keyword arguments to the model class. If there are any
callables in ``defaults``, evaluate them. As hinted at above, this is a
simplification of the algorithm that is used, but it contains all the pertinent
details. The internal implementation has some more error-checking than this and
handles some extra edge-conditions; if you're interested, read the code.

If you have a field named ``defaults`` and want to use it as an exact lookup in
``get_or_create()``, use ``'defaults__exact'``, like so::

    Foo.objects.get_or_create(defaults__exact='bar', defaults={'defaults': 'baz'})

The ``get_or_create()`` method has similar error behavior to :meth:`create()`
when you're using manually specified primary keys. If an object needs to be
created and the key already exists in the database, an
:exc:`~django.db.IntegrityError` will be raised.

Finally, a word on using ``get_or_create()`` in Django views. Please make sure
to use it only in ``POST`` requests unless you have a good reason not to.
``GET`` requests shouldn't have any effect on data. Instead, use ``POST``
whenever a request to a page has a side effect on your data. For more, see
:rfc:`Safe methods <7231#section-4.2.1>` in the HTTP spec.

.. warning::

    You can use ``get_or_create()`` through :class:`~django.db.models.ManyToManyField`
    attributes and reverse relations. In that case you will restrict the queries
    inside the context of that relation. That could lead you to some integrity
    problems if you don't use it consistently.

    Given the following models::

        class Chapter(models.Model):
            title = models.CharField(max_length=255, unique=True)

        class Book(models.Model):
            title = models.CharField(max_length=256)
            chapters = models.ManyToManyField(Chapter)

    You can use ``get_or_create()`` through Book's chapters field, but it only
    fetches inside the context of that book::

        >>> book = Book.objects.create(title="Ulysses")
        >>> book.chapters.get_or_create(title="Telemachus")
        (<Chapter: Telemachus>, True)
        >>> book.chapters.get_or_create(title="Telemachus")
        (<Chapter: Telemachus>, False)
        >>> Chapter.objects.create(title="Chapter 1")
        <Chapter: Chapter 1>
        >>> book.chapters.get_or_create(title="Chapter 1")
        # Raises IntegrityError

    This is happening because it's trying to get or create "Chapter 1" through
    the book "Ulysses", but it can't do either: the relation can't fetch that
    chapter because it isn't related to that book, and it can't create it
    because the ``title`` field must be unique.

``update_or_create()``
~~~~~~~~~~~~~~~~~~~~~~

.. method:: update_or_create(defaults=None, **kwargs)

A convenience method for updating an object with the given ``kwargs``, creating
a new one if necessary. ``defaults`` is a dictionary of (field, value) pairs
used to update the object. The values in ``defaults`` can be callables.

Returns a tuple of ``(object, created)``, where ``object`` is the created or
updated object and ``created`` is a boolean specifying whether a new object was
created.

The ``update_or_create`` method tries to fetch an object from the database
based on the given ``kwargs``. If a match is found, it updates the fields
passed in the ``defaults`` dictionary.

This is meant as a shortcut to boilerplate code. For example::

    defaults = {'first_name': 'Bob'}
    try:
        obj = Person.objects.get(first_name='John', last_name='Lennon')
        for key, value in defaults.items():
            setattr(obj, key, value)
        obj.save()
    except Person.DoesNotExist:
        new_values = {'first_name': 'John', 'last_name': 'Lennon'}
        new_values.update(defaults)
        obj = Person(**new_values)
        obj.save()

This pattern gets quite unwieldy as the number of fields in a model goes up.
The above example can be rewritten using ``update_or_create()`` like so::

    obj, created = Person.objects.update_or_create(
        first_name='John', last_name='Lennon',
        defaults={'first_name': 'Bob'},
    )

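Because the values in ``defaults`` may be callables, a value can be computed at
write time rather than when the queryset is built. A minimal sketch (the
``last_seen`` field is an illustrative assumption, not part of the example
models)::

    from django.utils import timezone

    obj, created = Person.objects.update_or_create(
        first_name='John', last_name='Lennon',
        # ``timezone.now`` is called when the row is written, so the
        # stored timestamp reflects this call rather than import time.
        defaults={'last_seen': timezone.now},
    )
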
For a detailed description of how names passed in ``kwargs`` are resolved, see
:meth:`get_or_create`.

As described above in :meth:`get_or_create`, this method is prone to a
race condition which can result in multiple rows being inserted simultaneously
if uniqueness is not enforced at the database level.

Like :meth:`get_or_create` and :meth:`create`, if you're using manually
specified primary keys and an object needs to be created but the key already
exists in the database, an :exc:`~django.db.IntegrityError` is raised.

``bulk_create()``
~~~~~~~~~~~~~~~~~

.. method:: bulk_create(objs, batch_size=None, ignore_conflicts=False)

This method inserts the provided list of objects into the database in an
efficient manner (generally only 1 query, no matter how many objects there
are)::

    >>> Entry.objects.bulk_create([
    ...     Entry(headline='This is a test'),
    ...     Entry(headline='This is only a test'),
    ... ])

This has a number of caveats though:

* The model's ``save()`` method will not be called, and the ``pre_save`` and
  ``post_save`` signals will not be sent.
* It does not work with child models in a multi-table inheritance scenario.
* If the model's primary key is an :class:`~django.db.models.AutoField`, the
  primary key attribute can only be retrieved on certain databases (currently
  PostgreSQL and MariaDB 10.5+). On other databases, it will not be set.
* It does not work with many-to-many relationships.
* It casts ``objs`` to a list, which fully evaluates ``objs`` if it's a
  generator. The cast allows inspecting all objects so that any objects with a
  manually set primary key can be inserted first. If you want to insert objects
  in batches without evaluating the entire generator at once, you can use this
  technique as long as the objects don't have any manually set primary keys::

      from itertools import islice

      batch_size = 100
      objs = (Entry(headline='Test %s' % i) for i in range(1000))
      while True:
          batch = list(islice(objs, batch_size))
          if not batch:
              break
          Entry.objects.bulk_create(batch, batch_size)

The ``batch_size`` parameter controls how many objects are created in a single
query. The default is to create all objects in one batch, except for SQLite
where the default is such that at most 999 variables per query are used.

On databases that support it (all but Oracle), setting the ``ignore_conflicts``
parameter to ``True`` tells the database to ignore failure to insert any rows
that fail constraints such as duplicate unique values. Enabling this parameter
disables setting the primary key on each model instance (if the database
normally supports it).

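For instance, a minimal sketch of ``ignore_conflicts`` (this assumes
``headline`` carries a unique constraint, which the example ``Entry`` model
doesn't define by default)::

    # Re-running the same import is safe: rows that would violate the
    # unique constraint are skipped instead of raising IntegrityError.
    Entry.objects.bulk_create(
        [Entry(headline='This is a test'), Entry(headline='This is only a test')],
        ignore_conflicts=True,
    )
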
Returns ``objs`` cast to a list, in the same order as provided.

.. versionchanged:: 3.1

    Support for fetching the primary key attributes on MariaDB 10.5+ was added.

``bulk_update()``
~~~~~~~~~~~~~~~~~

.. method:: bulk_update(objs, fields, batch_size=None)

This method efficiently updates the given fields on the provided model
instances, generally with one query::

    >>> objs = [
    ...     Entry.objects.create(headline='Entry 1'),
    ...     Entry.objects.create(headline='Entry 2'),
    ... ]
    >>> objs[0].headline = 'This is entry 1'
    >>> objs[1].headline = 'This is entry 2'
    >>> Entry.objects.bulk_update(objs, ['headline'])

:meth:`.QuerySet.update` is used to save the changes, so this is more efficient
than iterating through the list of models and calling ``save()`` on each of
them, but it has a few caveats:

* You cannot update the model's primary key.
* Each model's ``save()`` method isn't called, and the
  :attr:`~django.db.models.signals.pre_save` and
  :attr:`~django.db.models.signals.post_save` signals aren't sent.
* If updating a large number of columns in a large number of rows, the SQL
  generated can be very large. Avoid this by specifying a suitable
  ``batch_size``.
* Updating fields defined on multi-table inheritance ancestors will incur an
  extra query per ancestor.
* If ``objs`` contains duplicates, only the first one is updated.

The ``batch_size`` parameter controls how many objects are saved in a single
query. The default is to update all objects in one batch, except for SQLite
and Oracle, which have restrictions on the number of variables used in a query.

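As a small sketch of that knob, the call above could be split into updates of
at most 100 rows per query::

    # Issues one UPDATE per batch of 100 objects instead of one huge query.
    Entry.objects.bulk_update(objs, ['headline'], batch_size=100)
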
``count()``
~~~~~~~~~~~

.. method:: count()

Returns an integer representing the number of objects in the database matching
the ``QuerySet``.

Example::

    # Returns the total number of entries in the database.
    Entry.objects.count()

    # Returns the number of entries whose headline contains 'Lennon'
    Entry.objects.filter(headline__contains='Lennon').count()

A ``count()`` call performs a ``SELECT COUNT(*)`` behind the scenes, so you
should always use ``count()`` rather than loading all of the records into
Python objects and calling ``len()`` on the result (unless you need to load the
objects into memory anyway, in which case ``len()`` will be faster).

Note that if you want the number of items in a ``QuerySet`` and are also
retrieving model instances from it (for example, by iterating over it), it's
probably more efficient to use ``len(queryset)``, which won't cause an extra
database query like ``count()`` would.

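A minimal sketch of that trade-off: once the queryset has been evaluated for
iteration, ``len()`` reuses the cached results instead of issuing a second
query::

    entries = Entry.objects.filter(headline__contains='Lennon')

    for e in entries:      # one query; results are cached on the queryset
        print(e.headline)
    total = len(entries)   # no extra query; counts the cached results
    # entries.count() here would issue an additional SELECT COUNT(*) query.
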
``in_bulk()``
~~~~~~~~~~~~~

.. method:: in_bulk(id_list=None, field_name='pk')

Takes a list of field values (``id_list``) and the ``field_name`` for those
values, and returns a dictionary mapping each value to an instance of the
object with the given field value. If ``id_list`` isn't provided, all objects
in the queryset are returned. ``field_name`` must be a unique field, and it
defaults to the primary key.

Example::

    >>> Blog.objects.in_bulk([1])
    {1: <Blog: Beatles Blog>}
    >>> Blog.objects.in_bulk([1, 2])
    {1: <Blog: Beatles Blog>, 2: <Blog: Cheddar Talk>}
    >>> Blog.objects.in_bulk([])
    {}
    >>> Blog.objects.in_bulk()
    {1: <Blog: Beatles Blog>, 2: <Blog: Cheddar Talk>, 3: <Blog: Django Weblog>}
    >>> Blog.objects.in_bulk(['beatles_blog'], field_name='slug')
    {'beatles_blog': <Blog: Beatles Blog>}

If you pass ``in_bulk()`` an empty list, you'll get an empty dictionary.

``iterator()``
~~~~~~~~~~~~~~

.. method:: iterator(chunk_size=2000)

Evaluates the ``QuerySet`` (by performing the query) and returns an iterator
(see :pep:`234`) over the results. A ``QuerySet`` typically caches its results
internally so that repeated evaluations do not result in additional queries. In
contrast, ``iterator()`` will read results directly, without doing any caching
at the ``QuerySet`` level (internally, the default iterator calls ``iterator()``
and caches the return value). For a ``QuerySet`` which returns a large number of
objects that you only need to access once, this can result in better
performance and a significant reduction in memory.

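For instance, a minimal sketch of streaming a large table once
(``export_row()`` is a placeholder for your own handling)::

    for entry in Entry.objects.all().iterator(chunk_size=500):
        # Each row is discarded after use instead of accumulating in the
        # queryset cache, keeping memory use roughly constant.
        export_row(entry)
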
Note that using ``iterator()`` on a ``QuerySet`` which has already been
evaluated will force it to evaluate again, repeating the query.

Also, use of ``iterator()`` causes previous ``prefetch_related()`` calls to be
ignored since these two optimizations do not make sense together.

Depending on the database backend, query results will either be loaded all at
once or streamed from the database using server-side cursors.

With server-side cursors
^^^^^^^^^^^^^^^^^^^^^^^^

Oracle and :ref:`PostgreSQL <postgresql-server-side-cursors>` use server-side
cursors to stream results from the database without loading the entire result
set into memory.

The Oracle database driver always uses server-side cursors.

With server-side cursors, the ``chunk_size`` parameter specifies the number of
results to cache at the database driver level. Fetching bigger chunks
diminishes the number of round trips between the database driver and the
database, at the expense of memory.

On PostgreSQL, server-side cursors will only be used when the
:setting:`DISABLE_SERVER_SIDE_CURSORS <DATABASE-DISABLE_SERVER_SIDE_CURSORS>`
setting is ``False``. Read :ref:`transaction-pooling-server-side-cursors` if
you're using a connection pooler configured in transaction pooling mode. When
server-side cursors are disabled, the behavior is the same as on databases that
don't support server-side cursors.

Without server-side cursors
^^^^^^^^^^^^^^^^^^^^^^^^^^^

MySQL doesn't support streaming results, so the Python database driver loads
the entire result set into memory. The result set is then transformed into
Python row objects by the database adapter using the ``fetchmany()`` method
defined in :pep:`249`.

SQLite can fetch results in batches using ``fetchmany()``, but since SQLite
doesn't provide isolation between queries within a connection, be careful when
writing to the table being iterated over. See :ref:`sqlite-isolation` for
more information.

The ``chunk_size`` parameter controls the size of batches Django retrieves from
the database driver. Larger batches decrease the overhead of communicating with
the database driver at the expense of a slight increase in memory consumption.

The default value of ``chunk_size``, 2000, comes from `a calculation on the
psycopg mailing list <https://www.postgresql.org/message-id/4D2F2C71.8080805%40dndg.it>`_:

    Assuming rows of 10-20 columns with a mix of textual and numeric data, 2000
    is going to fetch less than 100KB of data, which seems a good compromise
    between the number of rows transferred and the data discarded if the loop
    is exited early.

``latest()``
~~~~~~~~~~~~

.. method:: latest(*fields)

Returns the latest object in the table based on the given field(s).

This example returns the latest ``Entry`` in the table, according to the
``pub_date`` field::

    Entry.objects.latest('pub_date')

You can also choose the latest based on several fields. For example, to select
the ``Entry`` with the earliest ``expire_date`` when two entries have the same
``pub_date``::

    Entry.objects.latest('pub_date', '-expire_date')

The negative sign in ``'-expire_date'`` means to sort ``expire_date`` in
*descending* order. Since ``latest()`` gets the last result, the ``Entry`` with
the earliest ``expire_date`` is selected.

If your model's :ref:`Meta <meta-options>` specifies
:attr:`~django.db.models.Options.get_latest_by`, you can omit any arguments to
``earliest()`` or ``latest()``. The fields specified in
:attr:`~django.db.models.Options.get_latest_by` will be used by default.

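For example, a minimal sketch of such a ``Meta`` declaration on the ``Entry``
model (other fields omitted)::

    class Entry(models.Model):
        ...
        class Meta:
            get_latest_by = ['pub_date', '-expire_date']

    # With get_latest_by set, no arguments are needed:
    Entry.objects.latest()
    Entry.objects.earliest()
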
Like :meth:`get()`, ``earliest()`` and ``latest()`` raise
:exc:`~django.db.models.Model.DoesNotExist` if there is no object with the
given parameters.

Note that ``earliest()`` and ``latest()`` exist purely for convenience and
readability.

.. admonition:: ``earliest()`` and ``latest()`` may return instances with null dates.

    Since ordering is delegated to the database, results on fields that allow
    null values may be ordered differently if you use different databases. For
    example, PostgreSQL and MySQL sort null values as if they are higher than
    non-null values, while SQLite does the opposite.

    You may want to filter out null values::

        Entry.objects.filter(pub_date__isnull=False).latest('pub_date')

``earliest()``
~~~~~~~~~~~~~~

.. method:: earliest(*fields)

Works otherwise like :meth:`~django.db.models.query.QuerySet.latest` except
the direction is changed.

``first()``
~~~~~~~~~~~

.. method:: first()

Returns the first object matched by the queryset, or ``None`` if there
is no matching object. If the ``QuerySet`` has no ordering defined, then the
queryset is automatically ordered by the primary key. This can affect
aggregation results as described in :ref:`aggregation-ordering-interaction`.

Example::

    p = Article.objects.order_by('title', 'pub_date').first()

Note that ``first()`` is a convenience method; the following code sample is
equivalent to the above example::

    try:
        p = Article.objects.order_by('title', 'pub_date')[0]
    except IndexError:
        p = None

``last()``
~~~~~~~~~~

.. method:: last()

Works like :meth:`first()`, but returns the last object in the queryset.

``aggregate()``
~~~~~~~~~~~~~~~

.. method:: aggregate(*args, **kwargs)

Returns a dictionary of aggregate values (averages, sums, etc.) calculated over
the ``QuerySet``. Each argument to ``aggregate()`` specifies a value that will
be included in the dictionary that is returned.

The aggregation functions that are provided by Django are described in
`Aggregation Functions`_ below. Since aggregates are also :doc:`query
expressions </ref/models/expressions>`, you may combine aggregates with other
aggregates or values to create complex aggregates.

Aggregates specified using keyword arguments will use the keyword as the name
for the annotation. Anonymous arguments will have a name generated for them
based upon the name of the aggregate function and the model field that is being
aggregated. Complex aggregates cannot use anonymous arguments and must specify
a keyword argument as an alias.

For example, when you are working with blog entries, you may want to know the
total number of entries that have been made::

    >>> from django.db.models import Count
    >>> Blog.objects.aggregate(Count('entry'))
    {'entry__count': 16}

By using a keyword argument to specify the aggregate function, you can
control the name of the aggregation value that is returned::

    >>> Blog.objects.aggregate(number_of_entries=Count('entry'))
    {'number_of_entries': 16}

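Several aggregates can also be computed in a single call; each keyword becomes
a key in the returned dictionary. A minimal sketch using the example Weblog
models (``rating`` is a field on ``Entry``)::

    from django.db.models import Avg, Count

    # One query; the keyword names become the dictionary keys.
    totals = Blog.objects.aggregate(
        number_of_entries=Count('entry'),
        average_rating=Avg('entry__rating'),
    )
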
For an in-depth discussion of aggregation, see :doc:`the topic guide on
Aggregation </topics/db/aggregation>`.

``exists()``
~~~~~~~~~~~~

.. method:: exists()

Returns ``True`` if the :class:`.QuerySet` contains any results, and ``False``
if not. This tries to perform the query in the simplest and fastest way
possible, but it *does* execute nearly the same query as a normal
:class:`.QuerySet` query.

:meth:`~.QuerySet.exists` is useful for searches relating both to object
membership in a :class:`.QuerySet` and to the existence of any objects in a
:class:`.QuerySet`, particularly in the context of a large :class:`.QuerySet`.

The most efficient method of finding whether a model with a unique field
(e.g. ``primary_key``) is a member of a :class:`.QuerySet` is::

    entry = Entry.objects.get(pk=123)
    if some_queryset.filter(pk=entry.pk).exists():
        print("Entry contained in queryset")

Which will be faster than the following, which requires evaluating and
iterating through the entire queryset::

    if entry in some_queryset:
        print("Entry contained in QuerySet")

And to find whether a queryset contains any items::

    if some_queryset.exists():
        print("There is at least one object in some_queryset")

Which will be faster than::

    if some_queryset:
        print("There is at least one object in some_queryset")

... but not by a large degree (hence needing a large queryset for efficiency
gains).

Additionally, if ``some_queryset`` has not yet been evaluated, but you know
that it will be at some point, then using ``some_queryset.exists()`` will do
more overall work (one query for the existence check plus an extra one to later
retrieve the results) than using ``bool(some_queryset)``, which retrieves the
results and then checks if any were returned.

``update()``
~~~~~~~~~~~~

.. method:: update(**kwargs)

Performs an SQL update query for the specified fields, and returns
the number of rows matched (which may not be equal to the number of rows
updated if some rows already have the new value).

For example, to turn comments off for all blog entries published in 2010,
you could do this::

    >>> Entry.objects.filter(pub_date__year=2010).update(comments_on=False)

(This assumes your ``Entry`` model has fields ``pub_date`` and ``comments_on``.)

You can update multiple fields — there's no limit on how many. For example,
here we update the ``comments_on`` and ``headline`` fields::

    >>> Entry.objects.filter(pub_date__year=2010).update(comments_on=False, headline='This is old')

The ``update()`` method is applied instantly, and the only restriction on the
:class:`.QuerySet` that is updated is that it can only update columns in the
model's main table, not on related models. You can't do this, for example::

    >>> Entry.objects.update(blog__name='foo') # Won't work!

Filtering based on related fields is still possible, though::

    >>> Entry.objects.filter(blog__id=1).update(comments_on=True)

You cannot call ``update()`` on a :class:`.QuerySet` that has had a slice taken
or can otherwise no longer be filtered.

The ``update()`` method returns the number of matched rows::

    >>> Entry.objects.filter(id=64).update(comments_on=True)
    1

    >>> Entry.objects.filter(slug='nonexistent-slug').update(comments_on=True)
    0

    >>> Entry.objects.filter(pub_date__year=2010).update(comments_on=False)
    132

If you're just updating a record and don't need to do anything with the model
object, the most efficient approach is to call ``update()``, rather than
loading the model object into memory. For example, instead of doing this::

    e = Entry.objects.get(id=10)
    e.comments_on = False
    e.save()

...do this::

    Entry.objects.filter(id=10).update(comments_on=False)

Using ``update()`` also prevents a race condition wherein something might
change in your database in the short period of time between loading the object
and calling ``save()``.

Finally, realize that ``update()`` does an update at the SQL level and, thus,
does not call any ``save()`` methods on your models, nor does it emit the
:attr:`~django.db.models.signals.pre_save` or
:attr:`~django.db.models.signals.post_save` signals (which are a consequence of
calling :meth:`Model.save() <django.db.models.Model.save>`). If you want to
update a bunch of records for a model that has a custom
:meth:`~django.db.models.Model.save()` method, loop over them and call
:meth:`~django.db.models.Model.save()`, like this::

    for e in Entry.objects.filter(pub_date__year=2010):
        e.comments_on = False
        e.save()

``delete()``
~~~~~~~~~~~~

.. method:: delete()

Performs an SQL delete query on all rows in the :class:`.QuerySet` and
returns the number of objects deleted and a dictionary with the number of
deletions per object type.

The ``delete()`` method is applied instantly. You cannot call ``delete()`` on a
:class:`.QuerySet` that has had a slice taken or can otherwise no longer be
filtered.

For example, to delete all the entries in a particular blog::

    >>> b = Blog.objects.get(pk=1)

    # Delete all the entries belonging to this Blog.
    >>> Entry.objects.filter(blog=b).delete()
    (4, {'weblog.Entry': 2, 'weblog.Entry_authors': 2})

By default, Django's :class:`~django.db.models.ForeignKey` emulates the SQL
constraint ``ON DELETE CASCADE`` — in other words, any objects with foreign
keys pointing at the objects to be deleted will be deleted along with them.
For example::

    >>> blogs = Blog.objects.all()

    # This will delete all Blogs and all of their Entry objects.
    >>> blogs.delete()
    (5, {'weblog.Blog': 1, 'weblog.Entry': 2, 'weblog.Entry_authors': 2})

This cascade behavior is customizable via the
:attr:`~django.db.models.ForeignKey.on_delete` argument to the
:class:`~django.db.models.ForeignKey`.

The ``delete()`` method does a bulk delete and does not call any ``delete()``
methods on your models. It does, however, emit the
:data:`~django.db.models.signals.pre_delete` and
:data:`~django.db.models.signals.post_delete` signals for all deleted objects
(including cascaded deletions).

Django needs to fetch objects into memory to send signals and handle cascades.
However, if there are no cascades and no signals, then Django may take a
fast-path and delete objects without fetching into memory. For large
deletes this can result in significantly reduced memory usage. The number of
executed queries can be reduced, too.

ForeignKeys which are set to :attr:`~django.db.models.ForeignKey.on_delete`
``DO_NOTHING`` do not prevent taking the fast-path in deletion.

Note that the queries generated in object deletion are an implementation
detail subject to change.

``as_manager()``
~~~~~~~~~~~~~~~~

.. classmethod:: as_manager()

Class method that returns an instance of :class:`~django.db.models.Manager`
with a copy of the ``QuerySet``’s methods. See
:ref:`create-manager-with-queryset-methods` for more details.

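A minimal sketch of the pattern (the ``published()`` method and the
``is_published`` field are illustrative assumptions, not part of the example
models)::

    from django.db import models

    class EntryQuerySet(models.QuerySet):
        def published(self):
            return self.filter(is_published=True)

    class Entry(models.Model):
        ...
        objects = EntryQuerySet.as_manager()

    # The custom method is available on the manager and on chained querysets:
    Entry.objects.published()
    Entry.objects.filter(rating__gte=3).published()
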
``explain()``
~~~~~~~~~~~~~

.. method:: explain(format=None, **options)

Returns a string of the ``QuerySet``’s execution plan, which details how the
database would execute the query, including any indexes or joins that would be
used. Knowing these details may help you improve the performance of slow
queries.

For example, when using PostgreSQL::

    >>> print(Blog.objects.filter(title='My Blog').explain())
    Seq Scan on blog  (cost=0.00..35.50 rows=10 width=12)
      Filter: (title = 'My Blog'::bpchar)

The output differs significantly between databases.

``explain()`` is supported by all built-in database backends except Oracle
because an implementation there isn't straightforward.

The ``format`` parameter changes the output format from the database's
default, which is usually text-based. PostgreSQL supports ``'TEXT'``,
``'JSON'``, ``'YAML'``, and ``'XML'`` formats. MariaDB and MySQL support
``'TEXT'`` (also called ``'TRADITIONAL'``) and ``'JSON'`` formats. MySQL
8.0.16+ also supports an improved ``'TREE'`` format, which is similar to
PostgreSQL's ``'TEXT'`` output and is used by default, if supported.

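For instance, a hedged sketch requesting a machine-readable plan from
PostgreSQL (the output, not shown here, is a JSON document rather than the
text plan above)::

    >>> print(Blog.objects.filter(title='My Blog').explain(format='JSON'))
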
Some databases accept flags that can return more information about the query.
Pass these flags as keyword arguments. For example, when using PostgreSQL::

    >>> print(Blog.objects.filter(title='My Blog').explain(verbose=True))
    Seq Scan on public.blog  (cost=0.00..35.50 rows=10 width=12) (actual time=0.004..0.004 rows=10 loops=1)
      Output: id, title
      Filter: (blog.title = 'My Blog'::bpchar)
    Planning time: 0.064 ms
    Execution time: 0.058 ms

On some databases, flags may cause the query to be executed, which could have
adverse effects on your database. For example, the ``ANALYZE`` flag supported
by MariaDB, MySQL 8.0.18+, and PostgreSQL could result in changes to data if
there are triggers or if a function is called, even for a ``SELECT`` query.

.. versionchanged:: 3.1

    Support for the ``'TREE'`` format on MySQL 8.0.16+ and the ``analyze``
    option on MariaDB and MySQL 8.0.18+ were added.

.. _field-lookups:

``Field`` lookups
-----------------

Field lookups are how you specify the meat of an SQL ``WHERE`` clause. They're
specified as keyword arguments to the ``QuerySet`` methods :meth:`filter()`,
:meth:`exclude()` and :meth:`get()`.

For an introduction, see the :ref:`models and database queries documentation
<field-lookups-intro>`.

Django's built-in lookups are listed below. It is also possible to write
:doc:`custom lookups </howto/custom-lookups>` for model fields.

As a convenience, when no lookup type is provided (like in
``Entry.objects.get(id=14)``), the lookup type is assumed to be :lookup:`exact`.

.. fieldlookup:: exact

``exact``
~~~~~~~~~

Exact match. If the value provided for comparison is ``None``, it will be
interpreted as an SQL ``NULL`` (see :lookup:`isnull` for more details).

Examples::

    Entry.objects.get(id__exact=14)
    Entry.objects.get(id__exact=None)

SQL equivalents:

.. code-block:: sql

    SELECT ... WHERE id = 14;
    SELECT ... WHERE id IS NULL;

.. admonition:: MySQL comparisons

    In MySQL, a database table's "collation" setting determines whether
    ``exact`` comparisons are case-sensitive. This is a database setting, *not*
    a Django setting. It's possible to configure your MySQL tables to use
    case-sensitive comparisons, but some trade-offs are involved. For more
    information about this, see the :ref:`collation section <mysql-collation>`
    in the :doc:`databases </ref/databases>` documentation.

.. fieldlookup:: iexact

``iexact``
~~~~~~~~~~

Case-insensitive exact match. If the value provided for comparison is ``None``,
it will be interpreted as an SQL ``NULL`` (see :lookup:`isnull` for more
details).

Example::

    Blog.objects.get(name__iexact='beatles blog')
    Blog.objects.get(name__iexact=None)

SQL equivalents:

.. code-block:: sql

    SELECT ... WHERE name ILIKE 'beatles blog';
    SELECT ... WHERE name IS NULL;

Note the first query will match ``'Beatles Blog'``, ``'beatles blog'``,
``'BeAtLes BLoG'``, etc.

.. admonition:: SQLite users

    When using the SQLite backend and non-ASCII strings, bear in mind the
    :ref:`database note <sqlite-string-matching>` about string comparisons.
    SQLite does not do case-insensitive matching for non-ASCII strings.

.. fieldlookup:: contains

``contains``
~~~~~~~~~~~~

Case-sensitive containment test.

Example::

    Entry.objects.get(headline__contains='Lennon')

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE headline LIKE '%Lennon%';

Note this will match the headline ``'Lennon honored today'`` but not ``'lennon
honored today'``.

.. admonition:: SQLite users

    SQLite doesn't support case-sensitive ``LIKE`` statements; ``contains``
    acts like ``icontains`` for SQLite. See the :ref:`database note
    <sqlite-string-matching>` for more information.

.. fieldlookup:: icontains

``icontains``
~~~~~~~~~~~~~

Case-insensitive containment test.

Example::

    Entry.objects.get(headline__icontains='Lennon')

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE headline ILIKE '%Lennon%';

.. admonition:: SQLite users

    When using the SQLite backend and non-ASCII strings, bear in mind the
    :ref:`database note <sqlite-string-matching>` about string comparisons.

.. fieldlookup:: in

``in``
~~~~~~

In a given iterable; often a list, tuple, or queryset. It's not a common use
case, but strings (being iterables) are accepted.

Examples::

    Entry.objects.filter(id__in=[1, 3, 4])
    Entry.objects.filter(headline__in='abc')

SQL equivalents:

.. code-block:: sql

    SELECT ... WHERE id IN (1, 3, 4);
    SELECT ... WHERE headline IN ('a', 'b', 'c');

You can also use a queryset to dynamically evaluate the list of values
instead of providing a list of literal values::

    inner_qs = Blog.objects.filter(name__contains='Cheddar')
    entries = Entry.objects.filter(blog__in=inner_qs)

This queryset will be evaluated as a subselect statement:

.. code-block:: sql

    SELECT ... WHERE blog.id IN (SELECT id FROM ... WHERE NAME LIKE '%Cheddar%')

If you pass in a ``QuerySet`` resulting from ``values()`` or ``values_list()``
as the value to an ``__in`` lookup, you need to ensure you are only extracting
one field in the result. For example, this will work (filtering on the blog
names)::

    inner_qs = Blog.objects.filter(name__contains='Ch').values('name')
    entries = Entry.objects.filter(blog__name__in=inner_qs)

This example will raise an exception, since the inner query is trying to
extract two field values, where only one is expected::

    # Bad code! Will raise a TypeError.
    inner_qs = Blog.objects.filter(name__contains='Ch').values('name', 'id')
    entries = Entry.objects.filter(blog__name__in=inner_qs)

.. _nested-queries-performance:

.. admonition:: Performance considerations

    Be cautious about using nested queries and understand your database
    server's performance characteristics (if in doubt, benchmark!). Some
    database backends, most notably MySQL, don't optimize nested queries very
    well. It is more efficient, in those cases, to extract a list of values
    and then pass that into the second query. That is, execute two queries
    instead of one::

        values = Blog.objects.filter(
            name__contains='Cheddar').values_list('pk', flat=True)
        entries = Entry.objects.filter(blog__in=list(values))

    Note the ``list()`` call around the Blog ``QuerySet`` to force execution of
    the first query. Without it, a nested query would be executed, because
    :ref:`querysets-are-lazy`.

.. fieldlookup:: gt

``gt``
~~~~~~

Greater than.

Example::

    Entry.objects.filter(id__gt=4)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE id > 4;

.. fieldlookup:: gte

``gte``
~~~~~~~

Greater than or equal to.

.. fieldlookup:: lt

``lt``
~~~~~~

Less than.

.. fieldlookup:: lte

``lte``
~~~~~~~

Less than or equal to.

.. fieldlookup:: startswith

``startswith``
~~~~~~~~~~~~~~

Case-sensitive starts-with.

Example::

    Entry.objects.filter(headline__startswith='Lennon')

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE headline LIKE 'Lennon%';

SQLite doesn't support case-sensitive ``LIKE`` statements; ``startswith`` acts
like ``istartswith`` for SQLite.

.. fieldlookup:: istartswith

``istartswith``
~~~~~~~~~~~~~~~

Case-insensitive starts-with.

Example::

    Entry.objects.filter(headline__istartswith='Lennon')

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE headline ILIKE 'Lennon%';

.. admonition:: SQLite users

    When using the SQLite backend and non-ASCII strings, bear in mind the
    :ref:`database note <sqlite-string-matching>` about string comparisons.

.. fieldlookup:: endswith

``endswith``
~~~~~~~~~~~~

Case-sensitive ends-with.

Example::

    Entry.objects.filter(headline__endswith='Lennon')

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE headline LIKE '%Lennon';

.. admonition:: SQLite users

    SQLite doesn't support case-sensitive ``LIKE`` statements; ``endswith``
    acts like ``iendswith`` for SQLite. Refer to the :ref:`database note
    <sqlite-string-matching>` documentation for more.

.. fieldlookup:: iendswith

``iendswith``
~~~~~~~~~~~~~

Case-insensitive ends-with.

Example::

    Entry.objects.filter(headline__iendswith='Lennon')

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE headline ILIKE '%Lennon';

.. admonition:: SQLite users

    When using the SQLite backend and non-ASCII strings, bear in mind the
    :ref:`database note <sqlite-string-matching>` about string comparisons.

.. fieldlookup:: range

``range``
~~~~~~~~~

Range test (inclusive).

Example::

    import datetime
    start_date = datetime.date(2005, 1, 1)
    end_date = datetime.date(2005, 3, 31)
    Entry.objects.filter(pub_date__range=(start_date, end_date))

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE pub_date BETWEEN '2005-01-01' and '2005-03-31';

You can use ``range`` anywhere you can use ``BETWEEN`` in SQL — for dates,
numbers and even characters.

.. warning::

    Filtering a ``DateTimeField`` with dates won't include items on the last
    day, because the bounds are interpreted as "0am on the given date". If
    ``pub_date`` was a ``DateTimeField``, the above expression would be turned
    into this SQL:

    .. code-block:: sql

        SELECT ... WHERE pub_date BETWEEN '2005-01-01 00:00:00' and '2005-03-31 00:00:00';

    Generally speaking, you can't mix dates and datetimes.

.. fieldlookup:: date

``date``
~~~~~~~~

For datetime fields, casts the value as date. Allows chaining additional field
lookups. Takes a date value.

Example::

    Entry.objects.filter(pub_date__date=datetime.date(2005, 1, 1))
    Entry.objects.filter(pub_date__date__gt=datetime.date(2005, 1, 1))

(No equivalent SQL code fragment is included for this lookup because
implementation of the relevant query varies among different database engines.)

When :setting:`USE_TZ` is ``True``, fields are converted to the current time
zone before filtering. This requires :ref:`time zone definitions in the
database <database-time-zone-definitions>`.

.. fieldlookup:: year

``year``
~~~~~~~~

For date and datetime fields, an exact year match. Allows chaining additional
field lookups. Takes an integer year.

Example::

    Entry.objects.filter(pub_date__year=2005)
    Entry.objects.filter(pub_date__year__gte=2005)

SQL equivalents:

.. code-block:: sql

    SELECT ... WHERE pub_date BETWEEN '2005-01-01' AND '2005-12-31';
    SELECT ... WHERE pub_date >= '2005-01-01';

(The exact SQL syntax varies for each database engine.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: iso_year

``iso_year``
~~~~~~~~~~~~

For date and datetime fields, an exact ISO 8601 week-numbering year match.
Allows chaining additional field lookups. Takes an integer year.

Example::

    Entry.objects.filter(pub_date__iso_year=2005)
    Entry.objects.filter(pub_date__iso_year__gte=2005)

(The exact SQL syntax varies for each database engine.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.


.. fieldlookup:: month

``month``
~~~~~~~~~

For date and datetime fields, an exact month match. Allows chaining additional
field lookups. Takes an integer 1 (January) through 12 (December).

Example::

    Entry.objects.filter(pub_date__month=12)
    Entry.objects.filter(pub_date__month__gte=6)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE EXTRACT('month' FROM pub_date) = '12';
    SELECT ... WHERE EXTRACT('month' FROM pub_date) >= '6';

(The exact SQL syntax varies for each database engine.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: day

``day``
~~~~~~~

For date and datetime fields, an exact day match. Allows chaining additional
field lookups. Takes an integer day.

Example::

    Entry.objects.filter(pub_date__day=3)
    Entry.objects.filter(pub_date__day__gte=3)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE EXTRACT('day' FROM pub_date) = '3';
    SELECT ... WHERE EXTRACT('day' FROM pub_date) >= '3';

(The exact SQL syntax varies for each database engine.)

Note this will match any record with a ``pub_date`` on the third day of the
month, such as January 3, July 3, etc.

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: week

``week``
~~~~~~~~

For date and datetime fields, returns the week number (1-52 or 53) according
to `ISO-8601 <https://en.wikipedia.org/wiki/ISO-8601>`_, i.e., weeks start
on a Monday and the first week contains the year's first Thursday.

Example::

    Entry.objects.filter(pub_date__week=52)
    Entry.objects.filter(pub_date__week__gte=32, pub_date__week__lte=38)

(No equivalent SQL code fragment is included for this lookup because
implementation of the relevant query varies among different database engines.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: week_day

``week_day``
~~~~~~~~~~~~

For date and datetime fields, a 'day of the week' match. Allows chaining
additional field lookups.

Takes an integer value representing the day of the week from 1 (Sunday) to 7
(Saturday).

Example::

    Entry.objects.filter(pub_date__week_day=2)
    Entry.objects.filter(pub_date__week_day__gte=2)

(No equivalent SQL code fragment is included for this lookup because
implementation of the relevant query varies among different database engines.)

Note this will match any record with a ``pub_date`` that falls on a Monday (day
2 of the week), regardless of the month or year in which it occurs. Week days
are indexed with day 1 being Sunday and day 7 being Saturday.

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: iso_week_day

``iso_week_day``
~~~~~~~~~~~~~~~~

.. versionadded:: 3.1

For date and datetime fields, an exact ISO 8601 day of the week match. Allows
chaining additional field lookups.

Takes an integer value representing the day of the week from 1 (Monday) to 7
(Sunday).

Example::

    Entry.objects.filter(pub_date__iso_week_day=1)
    Entry.objects.filter(pub_date__iso_week_day__gte=1)

(No equivalent SQL code fragment is included for this lookup because
implementation of the relevant query varies among different database engines.)

Note this will match any record with a ``pub_date`` that falls on a Monday (day
1 of the week), regardless of the month or year in which it occurs. Week days
are indexed with day 1 being Monday and day 7 being Sunday.

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: quarter

``quarter``
~~~~~~~~~~~

For date and datetime fields, a 'quarter of the year' match. Allows chaining
additional field lookups. Takes an integer value between 1 and 4 representing
the quarter of the year.

Example to retrieve entries in the second quarter (April 1 to June 30)::

    Entry.objects.filter(pub_date__quarter=2)

(No equivalent SQL code fragment is included for this lookup because
implementation of the relevant query varies among different database engines.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: time

``time``
~~~~~~~~

For datetime fields, casts the value as time. Allows chaining additional field
lookups. Takes a :class:`datetime.time` value.

Example::

    Entry.objects.filter(pub_date__time=datetime.time(14, 30))
    Entry.objects.filter(pub_date__time__range=(datetime.time(8), datetime.time(17)))

(No equivalent SQL code fragment is included for this lookup because
implementation of the relevant query varies among different database engines.)

When :setting:`USE_TZ` is ``True``, fields are converted to the current time
zone before filtering. This requires :ref:`time zone definitions in the
database <database-time-zone-definitions>`.

.. fieldlookup:: hour

``hour``
~~~~~~~~

For datetime and time fields, an exact hour match. Allows chaining additional
field lookups. Takes an integer between 0 and 23.

Example::

    Event.objects.filter(timestamp__hour=23)
    Event.objects.filter(time__hour=5)
    Event.objects.filter(timestamp__hour__gte=12)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE EXTRACT('hour' FROM timestamp) = '23';
    SELECT ... WHERE EXTRACT('hour' FROM time) = '5';
    SELECT ... WHERE EXTRACT('hour' FROM timestamp) >= '12';

(The exact SQL syntax varies for each database engine.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: minute

``minute``
~~~~~~~~~~

For datetime and time fields, an exact minute match. Allows chaining additional
field lookups. Takes an integer between 0 and 59.

Example::

    Event.objects.filter(timestamp__minute=29)
    Event.objects.filter(time__minute=46)
    Event.objects.filter(timestamp__minute__gte=29)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE EXTRACT('minute' FROM timestamp) = '29';
    SELECT ... WHERE EXTRACT('minute' FROM time) = '46';
    SELECT ... WHERE EXTRACT('minute' FROM timestamp) >= '29';

(The exact SQL syntax varies for each database engine.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: second

``second``
~~~~~~~~~~

For datetime and time fields, an exact second match. Allows chaining additional
field lookups. Takes an integer between 0 and 59.

Example::

    Event.objects.filter(timestamp__second=31)
    Event.objects.filter(time__second=2)
    Event.objects.filter(timestamp__second__gte=31)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE EXTRACT('second' FROM timestamp) = '31';
    SELECT ... WHERE EXTRACT('second' FROM time) = '2';
    SELECT ... WHERE EXTRACT('second' FROM timestamp) >= '31';

(The exact SQL syntax varies for each database engine.)

When :setting:`USE_TZ` is ``True``, datetime fields are converted to the
current time zone before filtering. This requires :ref:`time zone definitions
in the database <database-time-zone-definitions>`.

.. fieldlookup:: isnull

``isnull``
~~~~~~~~~~

Takes either ``True`` or ``False``, which correspond to SQL queries of
``IS NULL`` and ``IS NOT NULL``, respectively.

Example::

    Entry.objects.filter(pub_date__isnull=True)

SQL equivalent:

.. code-block:: sql

    SELECT ... WHERE pub_date IS NULL;

.. deprecated:: 3.1

    Using non-boolean values as the right-hand side is deprecated; use
    ``True`` or ``False`` instead. In Django 4.0, passing a non-boolean value
    will raise an exception.

.. fieldlookup:: regex

``regex``
~~~~~~~~~

Case-sensitive regular expression match.

The regular expression syntax is that of the database backend in use.
In the case of SQLite, which has no built-in regular expression support,
this feature is provided by a (Python) user-defined ``REGEXP`` function, and
the regular expression syntax is therefore that of Python's ``re`` module.

Example::

    Entry.objects.get(title__regex=r'^(An?|The) +')

SQL equivalents:

.. code-block:: sql

    SELECT ... WHERE title REGEXP BINARY '^(An?|The) +'; -- MySQL

    SELECT ... WHERE REGEXP_LIKE(title, '^(An?|The) +', 'c'); -- Oracle

    SELECT ... WHERE title ~ '^(An?|The) +'; -- PostgreSQL

    SELECT ... WHERE title REGEXP '^(An?|The) +'; -- SQLite

Using raw strings (e.g., ``r'foo'`` instead of ``'foo'``) for passing in the
regular expression syntax is recommended.

.. fieldlookup:: iregex

``iregex``
~~~~~~~~~~

Case-insensitive regular expression match.

Example::

    Entry.objects.get(title__iregex=r'^(an?|the) +')

SQL equivalents:

.. code-block:: sql

    SELECT ... WHERE title REGEXP '^(an?|the) +'; -- MySQL

    SELECT ... WHERE REGEXP_LIKE(title, '^(an?|the) +', 'i'); -- Oracle

    SELECT ... WHERE title ~* '^(an?|the) +'; -- PostgreSQL

    SELECT ... WHERE title REGEXP '(?i)^(an?|the) +'; -- SQLite

.. _aggregation-functions:

Aggregation functions
---------------------

.. currentmodule:: django.db.models

Django provides the following aggregation functions in the
``django.db.models`` module. For details on how to use these
aggregate functions, see :doc:`the topic guide on aggregation
</topics/db/aggregation>`. See the :class:`~django.db.models.Aggregate`
documentation to learn how to create your own aggregates.

.. warning::

    SQLite can't handle aggregation on date/time fields out of the box.
    This is because there are no native date/time fields in SQLite and Django
    currently emulates these features using a text field. Attempts to use
    aggregation on date/time fields in SQLite will raise
    ``NotImplementedError``.

.. admonition:: Note

    Aggregation functions return ``None`` when used with an empty
    ``QuerySet``. For example, the ``Sum`` aggregation function returns ``None``
    instead of ``0`` if the ``QuerySet`` contains no entries. An exception is
    ``Count``, which does return ``0`` if the ``QuerySet`` is empty.
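
    For example, with the Weblog ``Entry`` model used throughout this
    reference::

        >>> from django.db.models import Count, Sum
        >>> Entry.objects.none().aggregate(Sum('rating'))
        {'rating__sum': None}
        >>> Entry.objects.none().aggregate(Count('rating'))
        {'rating__count': 0}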

All aggregates have the following parameters in common:

``expressions``
~~~~~~~~~~~~~~~

Strings that reference fields on the model, or :doc:`query expressions
</ref/models/expressions>`.

``output_field``
~~~~~~~~~~~~~~~~

An optional argument that represents the :doc:`model field </ref/models/fields>`
of the return value.

.. note::

    When combining multiple field types, Django can only determine the
    ``output_field`` if all fields are of the same type. Otherwise, you
    must provide the ``output_field`` yourself.
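
    For example, a sketch assuming a hypothetical ``Order`` model with a
    ``price`` ``DecimalField`` and an integer ``quantity`` field; the combined
    expression mixes field types, so an explicit ``output_field`` is required::

        from django.db.models import DecimalField, F, Sum

        Order.objects.aggregate(
            total=Sum(F('price') * F('quantity'), output_field=DecimalField()),
        )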

.. _aggregate-filter:

``filter``
~~~~~~~~~~

An optional :class:`Q object <django.db.models.Q>` that's used to filter the
rows that are aggregated.

See :ref:`conditional-aggregation` and :ref:`filtering-on-annotations` for
example usage.
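
For example, with the Weblog models, to count only the highly rated entries on
each blog::

    from django.db.models import Count, Q

    Blog.objects.annotate(
        good_entries=Count('entry', filter=Q(entry__rating__gte=4)),
    )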

``**extra``
~~~~~~~~~~~

Keyword arguments that can provide extra context for the SQL generated
by the aggregate.

``Avg``
~~~~~~~

.. class:: Avg(expression, output_field=None, distinct=False, filter=None, **extra)

    Returns the mean value of the given expression, which must be numeric
    unless you specify a different ``output_field``.

    * Default alias: ``<field>__avg``
    * Return type: ``float`` if input is ``int``, otherwise same as input
      field, or ``output_field`` if supplied

    Has one optional argument:

    .. attribute:: distinct

        If ``distinct=True``, ``Avg`` returns the mean value of unique values.
        This is the SQL equivalent of ``AVG(DISTINCT <field>)``. The default
        value is ``False``.
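
    For example, with the Weblog ``Entry`` model, both queries below return a
    dictionary keyed by the default alias ``rating__avg``::

        from django.db.models import Avg

        Entry.objects.aggregate(Avg('rating'))
        Entry.objects.aggregate(Avg('rating', distinct=True))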

``Count``
~~~~~~~~~

.. class:: Count(expression, distinct=False, filter=None, **extra)

    Returns the number of objects that are related through the provided
    expression.

    * Default alias: ``<field>__count``
    * Return type: ``int``

    Has one optional argument:

    .. attribute:: distinct

        If ``distinct=True``, the count will only include unique instances.
        This is the SQL equivalent of ``COUNT(DISTINCT <field>)``. The default
        value is ``False``.
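
    For example, with the Weblog models, this annotates each blog with the
    number of distinct authors who have written entries for it::

        from django.db.models import Count

        Blog.objects.annotate(author_count=Count('entry__authors', distinct=True))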

``Max``
~~~~~~~

.. class:: Max(expression, output_field=None, filter=None, **extra)

    Returns the maximum value of the given expression.

    * Default alias: ``<field>__max``
    * Return type: same as input field, or ``output_field`` if supplied

``Min``
~~~~~~~

.. class:: Min(expression, output_field=None, filter=None, **extra)

    Returns the minimum value of the given expression.

    * Default alias: ``<field>__min``
    * Return type: same as input field, or ``output_field`` if supplied

``StdDev``
~~~~~~~~~~

.. class:: StdDev(expression, output_field=None, sample=False, filter=None, **extra)

    Returns the standard deviation of the data in the provided expression.

    * Default alias: ``<field>__stddev``
    * Return type: ``float`` if input is ``int``, otherwise same as input
      field, or ``output_field`` if supplied

    Has one optional argument:

    .. attribute:: sample

        By default, ``StdDev`` returns the population standard deviation.
        However, if ``sample=True``, the return value will be the sample
        standard deviation.
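
    For example, with the Weblog ``Entry`` model::

        from django.db.models import StdDev

        Entry.objects.aggregate(StdDev('rating'))               # population
        Entry.objects.aggregate(StdDev('rating', sample=True))  # sample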

``Sum``
~~~~~~~

.. class:: Sum(expression, output_field=None, distinct=False, filter=None, **extra)

    Computes the sum of all values of the given expression.

    * Default alias: ``<field>__sum``
    * Return type: same as input field, or ``output_field`` if supplied

    Has one optional argument:

    .. attribute:: distinct

        If ``distinct=True``, ``Sum`` returns the sum of unique values. This is
        the SQL equivalent of ``SUM(DISTINCT <field>)``. The default value is
        ``False``.
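
    For example, with the Weblog ``Entry`` model, this totals the comments
    across all entries::

        from django.db.models import Sum

        Entry.objects.aggregate(Sum('number_of_comments'))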

``Variance``
~~~~~~~~~~~~

.. class:: Variance(expression, output_field=None, sample=False, filter=None, **extra)

    Returns the variance of the data in the provided expression.

    * Default alias: ``<field>__variance``
    * Return type: ``float`` if input is ``int``, otherwise same as input
      field, or ``output_field`` if supplied

    Has one optional argument:

    .. attribute:: sample

        By default, ``Variance`` returns the population variance. However,
        if ``sample=True``, the return value will be the sample variance.

Query-related tools
===================

This section provides reference material for query-related tools not documented
elsewhere.

``Q()`` objects
---------------

.. class:: Q

A ``Q()`` object represents an SQL condition that can be used in
database-related operations. It's similar to how an
:class:`F() <django.db.models.F>` object represents the value of a model field
or annotation. They make it possible to define and reuse conditions, and
combine them using operators such as ``|`` (``OR``) and ``&`` (``AND``). See
:ref:`complex-lookups-with-q`.
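
For example, conditions built with ``Q()`` objects can be stored, combined, and
reused with the Weblog models::

    from django.db.models import Q

    recent = Q(pub_date__year__gte=2005)
    question = Q(headline__startswith='Who') | Q(headline__startswith='What')
    Entry.objects.filter(question & ~recent)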

``Prefetch()`` objects
----------------------

.. class:: Prefetch(lookup, queryset=None, to_attr=None)

The ``Prefetch()`` object can be used to control the operation of
:meth:`~django.db.models.query.QuerySet.prefetch_related()`.

The ``lookup`` argument describes the relations to follow and works the same
as the string-based lookups passed to
:meth:`~django.db.models.query.QuerySet.prefetch_related()`. For example:

    >>> from django.db.models import Prefetch
    >>> Question.objects.prefetch_related(Prefetch('choice_set')).get().choice_set.all()
    <QuerySet [<Choice: Not much>, <Choice: The sky>, <Choice: Just hacking again>]>
    # This will only execute two queries regardless of the number of Question
    # and Choice objects.
    >>> Question.objects.prefetch_related(Prefetch('choice_set')).all()
    <QuerySet [<Question: What's up?>]>

The ``queryset`` argument supplies a base ``QuerySet`` for the given lookup.
This is useful to further filter down the prefetch operation, or to call
:meth:`~django.db.models.query.QuerySet.select_related()` from the prefetched
relation, hence reducing the number of queries even further:

    >>> voted_choices = Choice.objects.filter(votes__gt=0)
    >>> voted_choices
    <QuerySet [<Choice: The sky>]>
    >>> prefetch = Prefetch('choice_set', queryset=voted_choices)
    >>> Question.objects.prefetch_related(prefetch).get().choice_set.all()
    <QuerySet [<Choice: The sky>]>

The ``to_attr`` argument sets the result of the prefetch operation to a custom
attribute:

    >>> prefetch = Prefetch('choice_set', queryset=voted_choices, to_attr='voted_choices')
    >>> Question.objects.prefetch_related(prefetch).get().voted_choices
    [<Choice: The sky>]
    >>> Question.objects.prefetch_related(prefetch).get().choice_set.all()
    <QuerySet [<Choice: Not much>, <Choice: The sky>, <Choice: Just hacking again>]>

.. note::

    When using ``to_attr``, the prefetched result is stored in a list. This can
    provide a significant speed improvement over traditional
    ``prefetch_related`` calls, which store the cached result within a
    ``QuerySet`` instance.

``prefetch_related_objects()``
------------------------------

.. function:: prefetch_related_objects(model_instances, *related_lookups)

Prefetches the given lookups on an iterable of model instances. This is useful
in code that receives a list of model instances as opposed to a ``QuerySet``;
for example, when fetching models from a cache or instantiating them manually.

Pass an iterable of model instances (must all be of the same class) and the
lookups or :class:`Prefetch` objects you want to prefetch for. For example::

    >>> from django.db.models import prefetch_related_objects
    >>> restaurants = fetch_top_restaurants_from_cache()  # A list of Restaurants
    >>> prefetch_related_objects(restaurants, 'pizzas__toppings')

``FilteredRelation()`` objects
------------------------------

.. class:: FilteredRelation(relation_name, *, condition=Q())

    .. attribute:: FilteredRelation.relation_name

        The name of the field on which you'd like to filter the relation.

    .. attribute:: FilteredRelation.condition

        A :class:`~django.db.models.Q` object to control the filtering.

``FilteredRelation`` is used with :meth:`~.QuerySet.annotate()` to create an
``ON`` clause when a ``JOIN`` is performed. It doesn't act on the default
relationship but on the annotation name (``pizzas_vegetarian`` in the example
below).

For example, to find restaurants that have vegetarian pizzas with
``'mozzarella'`` in the name::

    >>> from django.db.models import FilteredRelation, Q
    >>> Restaurant.objects.annotate(
    ...     pizzas_vegetarian=FilteredRelation(
    ...         'pizzas', condition=Q(pizzas__vegetarian=True),
    ...     ),
    ... ).filter(pizzas_vegetarian__name__icontains='mozzarella')

If there are a large number of pizzas, this queryset performs better than::

    >>> Restaurant.objects.filter(
    ...     pizzas__vegetarian=True,
    ...     pizzas__name__icontains='mozzarella',
    ... )

because the filtering in the ``WHERE`` clause of the first queryset will only
operate on vegetarian pizzas.

``FilteredRelation`` doesn't support:

* Conditions that span relational fields. For example::

    >>> Restaurant.objects.annotate(
    ...     pizzas_with_toppings_startswith_n=FilteredRelation(
    ...         'pizzas__toppings',
    ...         condition=Q(pizzas__toppings__name__startswith='n'),
    ...     ),
    ... )
    Traceback (most recent call last):
    ...
    ValueError: FilteredRelation's condition doesn't support nested relations (got 'pizzas__toppings__name__startswith').

* :meth:`.QuerySet.only` and :meth:`~.QuerySet.prefetch_related`.
* A :class:`~django.contrib.contenttypes.fields.GenericForeignKey`
  inherited from a parent model.