The ``limit_choices_to`` attribute of ForeignKey and ManyToManyField can now
be a callable that returns either a ``Q`` object or a dict.
Thanks michael at actrix.gen.nz for the original suggestion.
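A minimal sketch of the new behaviour; the model, field, and helper below are
illustrative, not part of the patch:

    from django.db import models
    from django.db.models import Q

    def limit_to_staff():
        # Called each time the choices are needed, so the restriction can
        # depend on runtime state rather than being fixed at import time.
        return Q(is_staff=True) | Q(is_superuser=True)

    class Ticket(models.Model):
        assignee = models.ForeignKey(
            'auth.User',
            on_delete=models.CASCADE,
            limit_choices_to=limit_to_staff,  # callable, evaluated lazily
        )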
Overriding the error messages now works for unique fields, unique_together,
and unique_for_date.
This patch changed the overriding logic to allow customizing NON_FIELD_ERRORS
since previously only fields' errors were customizable.
Refs #20199.
Thanks leahculver for the suggestion.
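For illustration, a hedged sketch of the resulting override in a ModelForm's
Meta (the Book model and its unique_together constraint are assumed to exist):

    from django import forms
    from django.core.exceptions import NON_FIELD_ERRORS

    class BookForm(forms.ModelForm):
        class Meta:
            model = Book
            fields = ['title', 'author', 'pub_date']
            error_messages = {
                NON_FIELD_ERRORS: {
                    # unique_together violations are reported as non-field errors
                    'unique_together': "%(model_name)s's %(field_labels)s are not unique.",
                },
            }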
This commit touches various parts of the code base and test framework. Every
usage of opening a cursor merely to initialize a connection has been replaced
with 'ensure_connection()'.
Updated SQLUpdateCompiler.execute_sql to match the behavior described in
the docstring; the 'first non-empty query' will now include all queries,
not just the main and first related update.
Added CURSOR and NO_RESULTS result_type constants to make the usages more
self-documenting and allow execute_sql to explicitly close the cursor when
it is no longer needed.
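A minimal before/after sketch of the replacement pattern (illustrative; not a
literal hunk from the commit):

    from django.db import connection

    # Before: a cursor was opened purely for its side effect of connecting.
    # connection.cursor()

    # After: the intent is explicit and no cursor is left dangling.
    connection.ensure_connection()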
Broke InspectDBTestCase.test_field_types into two:
- a test_number_field_types, which now passes on Oracle too
- a test_field_types, for all non-numeric fields, which is still expected to fail
Also made some PEP 8 fixes in the tests file. Refs #19884
Thanks Tim Graham for review.
The combination of BaseManager.from_queryset() and RenameMethodsBase results in
Manager.__module__ having the wrong value. This can be an issue when trying to
pickle the Manager class.
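A short sketch of the affected pattern (the model and queryset names are
assumptions); the bug shows up when pickling the dynamically created manager
class:

    from django.db import models

    class PublishedQuerySet(models.QuerySet):
        def published(self):
            return self.filter(status='published')

    # from_queryset() builds a new Manager subclass at runtime; combined with
    # RenameMethodsBase its __module__ ended up wrong, which broke pickling.
    ArticleManager = models.Manager.from_queryset(PublishedQuerySet)

    class Article(models.Model):
        status = models.CharField(max_length=20)
        objects = ArticleManager()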
Wherever possible this filesystem path is derived automatically from the app
module's ``__path__`` and ``__file__`` attributes (this avoids any
backwards-compatibility problems).
AppConfig allows specifying an app's filesystem location explicitly, which
overrides all autodetection based on ``__path__`` and ``__file__``. This
permits Django to support any type of module as an app (namespace packages,
fake modules, modules loaded by other hypothetical non-filesystem module
loaders), as long as the app is configured with an explicit filesystem path.
Thanks Aymeric for review and discussion.
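A hedged sketch of an explicit configuration (the app name and filesystem path
are placeholders):

    # myapp/apps.py
    from django.apps import AppConfig

    class MyAppConfig(AppConfig):
        name = 'myapp'
        # Explicit filesystem location; overrides autodetection based on the
        # module's __path__ and __file__ (needed e.g. for namespace packages).
        path = '/srv/site/myapp'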
This is the result of Christopher Medrela's 2013 Summer of Code project.
Thanks also to Preston Holmes, Tim Graham, Anssi Kääriäinen, Florian
Apolloner, and Alex Gaynor for review notes along the way.
Also: Fixes #8579, fixes #3055, fixes #19844.
Allowed users to specify which lookups or transforms ("nested lookups")
are available for fields. The implementation is now class based.
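A hedged sketch of the class-based registration API; parameter names follow
the documentation added in this work and may differ slightly from the final
signature:

    from django.db import models
    from django.db.models import Lookup

    class NotEqual(Lookup):
        lookup_name = 'ne'

        def as_sql(self, compiler, connection):
            lhs, lhs_params = self.process_lhs(compiler, connection)
            rhs, rhs_params = self.process_rhs(compiler, connection)
            return '%s <> %s' % (lhs, rhs), lhs_params + rhs_params

    # Register the lookup for all fields; registering on a specific field
    # class or on a Transform works the same way.
    models.Field.register_lookup(NotEqual)

    # Usage: Author.objects.filter(name__ne='Jack')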
Squashed commit of the following:
commit fa7a7195f1
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 18 10:53:24 2014 +0200
Added lookup registration API docs
commit eb1c8ce164
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Tue Jan 14 18:59:36 2014 +0200
Release notes and other minor docs changes
commit 11501c29c9
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Jan 12 20:53:03 2014 +0200
Forgot to add custom_lookups tests in prev commit
commit 83173b960e
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Jan 12 19:59:12 2014 +0200
Renamed Extract -> Transform
commit 3b18d9f3a1
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Jan 12 19:51:53 2014 +0200
Removed suggestion of temporary lookup registration from docs
commit 21d0c7631c
Merge: 2509006 f2dc442
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Jan 12 09:38:23 2014 -0800
Merge pull request #2 from mjtamlyn/lookups_3
Reworked custom lookups docs.
commit f2dc4429a1
Author: Marc Tamlyn <marc.tamlyn@gmail.com>
Date: Sun Jan 12 13:15:05 2014 +0000
Reworked custom lookups docs.
Mostly just formatting and rewording, but also replaced the example
using ``YearExtract`` to use an example which is unlikely to ever be
possible directly in the ORM.
commit 2509006506
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Jan 12 13:19:13 2014 +0200
Removed unused import
commit 4fba5dfaa0
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 22:34:41 2014 +0200
Added docs to index
commit 6d53963f37
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 22:10:24 2014 +0200
Dead code removal
commit f9cc039007
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 19:00:43 2014 +0200
A new try for docs
commit 33aa18a6e3
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 14:57:12 2014 +0200
Renamed get_cols to get_group_by_cols
commit c7d5f8661b
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 14:45:53 2014 +0200
Altered query string customization for backend vendors
The new way first tries to call the method 'as_' + connection.vendor.
If that doesn't exist, as_sql() is called.
Also altered how lookup registration is done. There is now
RegisterLookupMixin class that is used by Field, Extract and
sql.Aggregate. This allows one to register lookups for extracts and
aggregates in the same way lookup registration is done for fields.
commit 90e7004ec1
Merge: 66649ff f7c2c0a
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 13:21:01 2014 +0200
Merge branch 'master' into lookups_3
commit 66649ff891
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Jan 11 13:16:01 2014 +0200
Some rewording in docs
commit 31b8faa627
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Dec 29 15:52:29 2013 +0200
Cleanup based on review comments
commit 1016159f34
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Dec 28 18:37:04 2013 +0200
Proof-of-concept fix for #16731
Implemented only for SQLite and PostgreSQL, and only for startswith
and istartswith lookups.
commit 193cd097ca
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Dec 28 17:57:58 2013 +0200
Fixed #11722 -- iexact=F() produced invalid SQL
commit 08ed3c3b49
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Dec 21 23:59:52 2013 +0200
Made Lookup and Extract available from django.db.models
commit b99c8d83c9
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Dec 21 23:06:29 2013 +0200
Fixed review notes by Loic
commit 049eebc070
Merge: ed8fab7 b80a835
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Dec 21 22:53:10 2013 +0200
Merge branch 'master' into lookups_3
Conflicts:
django/db/models/fields/__init__.py
django/db/models/sql/compiler.py
django/db/models/sql/query.py
tests/null_queries/tests.py
commit ed8fab7fe8
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Dec 21 22:47:23 2013 +0200
Made Extracts aware of full lookup path
commit 27a57b7aed
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Dec 1 21:10:11 2013 +0200
Removed debugger import
commit 074e0f5aca
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Dec 1 21:02:16 2013 +0200
GIS lookup support added
commit 760e28e72b
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Dec 1 20:04:31 2013 +0200
Removed usage of Constraint, used Lookup instead
commit eac4776684
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Dec 1 02:22:30 2013 +0200
Minor cleanup of Lookup API
commit 2adf50428d
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sun Dec 1 02:14:19 2013 +0200
Added documentation, polished implementation
commit 32c04357a8
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Nov 30 23:10:15 2013 +0200
Avoid OrderedDict creation on lookup aggregate check
commit 7c8b3a32cc
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Sat Nov 30 23:04:34 2013 +0200
Implemented nested lookups
But there is no support for using lookups outside filtering yet.
commit 4d219d4cde
Author: Anssi Kääriäinen <akaariai@gmail.com>
Date: Wed Nov 27 22:07:30 2013 +0200
Initial implementation of custom lookups
Also ensured the transaction state is clean on Oracle while I was there.
This change cannot be backported to 1.6 because it's
backwards-incompatible for custom database backends.
When settings.DATABASES['default']['AUTOCOMMIT'] = False, the connection
wasn't in autocommit mode but Django pretended it was.
Thanks Anssi for analysing this issue.
Refs #17062.
Now that the refactorings are complete, it isn't particularly useful any
more, nor very well named. Let's keep the API as simple as possible.
Fixed #21689.
Since it triggers imports, it shouldn't be done lightly.
This commit adds a public API for doing it explicitly, django.setup(),
and does it automatically when using manage.py and wsgi.py.
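A minimal standalone-script sketch of the explicit call (the settings module
and model are placeholders):

    import os
    import django

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
    django.setup()  # populates the app registry; manage.py/wsgi.py do this for you

    from myapp.models import Article  # only safe to import after setup()
    print(Article.objects.count())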
SQLite accepts the relevant standard SQL (although by default it doesn't
enforce the constraint), and the 'traditional' creation backend helper
generates it, so this allows us to:
- Maintain the status quo
- Improve readability of the SQL code generated for that backend.
Also, we will need this when we fix #14204. Refs #14204.
Returning None on errors required unpythonic error checking and was
inconsistent with get_app_config.
get_model was a private API until the previous commit, but given that it
was certainly used in third party software, the change is explained in
the release notes.
Applied the same change to get_registered_model, which is a new private
API introduced during the recent refactoring.
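A hedged sketch of the new error handling, using the app cache import path
described later in these notes (this location later became django.apps.apps):

    from django.core.apps import app_cache

    try:
        Article = app_cache.get_model('myapp', 'Article')
    except LookupError:
        # get_model() now raises instead of silently returning None.
        Article = None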
ContentTypes are only created for installed applications, and I could
make a case for not returning a model that isn't installed any more.
The check for stale ContentTypes in update_contenttypes doesn't use
model_class.
ModelSignal actually needs get_registered_model since the lookup happens
at import time. I took this opportunity to perform a small refactoring.
This removes the gap between the master app registry and ad-hoc app
registries created by the migration framework, specifically in terms
of behavior of the get_model[s] methods.
This commit contains a stealth feature that I'd rather not describe.
Made it use the 'AUTOINCREMENT' suffix for PK creation. This way it doesn't
regress when compared with the 'traditional' DB backend creation
infrastructure.
Refs #10164.
The last component of the dotted path to the application module is
consistently referenced as the application "label". For instance it's
AppConfig.label. appname could be confused with AppConfig.name, which is
the full dotted path.
It was called _populate() before I renamed it to populate(). Since it
has been superseded by populate_models() there's no reason to keep it.
Removed the can_postpone argument of load_app() as it was only used by
populate(). It's a private API and there's no replacement. Simplified
load_app() accordingly. The new version behaves exactly like the old
one even though it's much shorter.
Since applications that aren't installed no longer have an application
configuration, app_config.installed is now always True in practice.
Provided an abstraction to temporarily add or remove applications as
several tests messed with app_config.installed to achieve this effect.
For now this API is _-prefixed because it looks dangerous.
Got rid of AppConfig._stub. As a side effect, app_cache.app_configs now
only contains entries for applications that are in INSTALLED_APPS, which
is a good thing and will allow dramatic simplifications (which I will
perform in the next commit). That required adjusting all methods that
iterate on app_configs without checking the "installed" flag, hence the
large changes in get_model[s].
Introduced AppCache.all_models to store models:
- while the app cache is being populated and a suitable app config
object to register models isn't available yet;
- for applications that aren't in INSTALLED_APPS since they don't have
an app config any longer.
Replaced get_model(seed_cache=False) by registered_model() which can be
kept simple and safe to call at any time, and removed the seed_cache
argument to get_model[s]. There's no replacement for that private API.
Allowed non-master app caches to go through populate() as it is now
safe to do so. They were introduced in 1.7 so backwards compatibility
isn't a concern as long as the migrations framework keeps working.
Used the information from the app cache instead of creating a duplicate
based on INSTALLED_APPS.
Model._meta.installed is no longer writable. It was a rather sketchy way
to alter private internals anyway.
Improved Andrew's hack to create temporary app caches to handle
migrations. Now the main app cache has a "master" flag set to True
(which is a non-default keyword argument, thus unlikely to be used by
mistake). Other app cache instances have "master" set to False.
The only sanctioned way to access the app cache is by importing
django.core.apps.app_cache.
If you were instantiating an app cache and relying on the Borg pattern,
you'll have to refactor your code.
Several parts of Django call get_apps() with a comment along the lines
of "this has the side effect of calling _populate()". I fail to see how
this is better than just calling populate()!
Since the original ones in django.db.models.loading were kept only for
backwards compatibility, there's no need to recreate them. However, many
internals of Django still relied on them.
They were also imported in django.db.models. They never appear in the
documentation, except for a quick mention of get_models and get_app in the
1.2 release notes to document an edge case in GIS. I don't think that
makes them a public API.
This commit doesn't change the overall amount of global state but
clarifies that it's tied to the app_cache object instead of hiding it
behind half a dozen functions.
The `remove()` and `clear()` methods of the related managers created by
`ForeignKey`, `GenericForeignKey`, and `ManyToManyField` suffered from a
number of issues. Some operations ran multiple data modifying queries without
wrapping them in a transaction, and some operations didn't respect default
filtering when it was present (i.e. when the default manager on the related
model implemented a custom `get_queryset()`).
Fixing the issues introduced some backward incompatible changes:
- The implementation of `remove()` for `ForeignKey` related managers changed
from a series of `Model.save()` calls to a single `QuerySet.update()` call.
The change means that `pre_save` and `post_save` signals aren't called anymore.
- The `remove()` and `clear()` methods for `GenericForeignKey` related
managers now perform bulk delete so `Model.delete()` isn't called anymore.
- The `remove()` and `clear()` methods for `ManyToManyField` related
managers perform nested queries when filtering is involved, which may
or may not be an issue depending on the database and the data itself.
Refs #3871, #21174.
Thanks Anssi Kääriäinen and Tim Graham for the reviews.
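A hedged sketch of the ForeignKey case (Blog/Entry are placeholders; the
foreign key must be nullable for remove() to be available):

    entry = blog.entry_set.first()

    # remove() now runs a single QuerySet.update() that sets entry.blog_id
    # to NULL, so Entry.save() is no longer called and pre_save/post_save
    # signals are not sent.
    blog.entry_set.remove(entry)

    # clear() does the same for every related object at once, and the data
    # modifying queries are wrapped in a transaction.
    blog.entry_set.clear()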
This patch introduces the Prefetch object which allows customizing prefetch
operations.
This enables things like filtering prefetched relations, calling select_related
from a prefetched relation, or prefetching the same relation multiple times
with different querysets.
When a Prefetch instance specifies a to_attr argument, the result is stored
in a list rather than a QuerySet. This has the fortunate consequence of being
significantly faster. The performance improvement is due to the fact that we
save the costly creation of a QuerySet instance.
Thanks @akaariai for the original patch and @bmispelon and @timgraham
for the reviews.
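A short sketch of the API (the Pizza and Restaurant models are assumptions):

    from django.db.models import Prefetch

    vegetarian = Pizza.objects.filter(vegetarian=True)

    restaurants = Restaurant.objects.prefetch_related(
        Prefetch('pizzas', queryset=vegetarian, to_attr='vegetarian_pizzas'),
    )
    for restaurant in restaurants:
        # With to_attr the result is stored as a plain list, not a QuerySet.
        print(restaurant.vegetarian_pizzas)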
This commit introduced a new class JoinPromoter that can be used to
abstract away join promotion problems for complex filter conditions.
Query._add_q() and Query.combine() now use the new class.
Also, added a lot of comments about why join promotion is done the way
it is.
Thanks to Tim Graham for the original report and for testing the changes,
and to Loic Bistuer for review.
Thanks dan at dlo.me for the initial patch.
- Added __pow__ and __rpow__ to ExpressionNode
- Added Oracle and MySQL specific power expressions
- Added a user-defined power function for SQLite
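A hedged one-liner showing the operator in use (the Number model is a
placeholder):

    from django.db.models import F

    # ** on expressions maps to POWER() on Oracle/MySQL and to a
    # user-defined function on SQLite.
    Number.objects.update(value=F('value') ** 2)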
The typo could have consequences in exceptional cases, but I didn't
figure out a way to actually produce such a case, so there are no tests.
Report & patch by Michael Manfre.
select_related('foo').select_related('bar') is now equivalent to
select_related('foo', 'bar').
Also reworded docs to recommend select_related(*fields) over select_related()
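Illustration with a hypothetical Book model:

    # These querysets are now equivalent; chained calls add fields instead
    # of replacing the earlier ones.
    Book.objects.select_related('author').select_related('publisher')
    Book.objects.select_related('author', 'publisher')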
Squashed commit of the following:
commit 63ddb271a44df389b2c302e421fc17b7f0529755
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sun Sep 29 22:51:00 2013 +0200
Clarified interactions between atomic and exceptions.
commit 2899ec299228217c876ba3aa4024e523a41c8504
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sun Sep 22 22:45:32 2013 +0200
Fixed TransactionManagementError in tests.
Previous commit introduced an additional check to prevent running
queries in transactions that will be rolled back, which triggered a few
failures in the tests. In practice using transaction.atomic instead of
the low-level savepoint APIs was enough to fix the problems.
commit 4a639b059ea80aeb78f7f160a7d4b9f609b9c238
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Tue Sep 24 22:24:17 2013 +0200
Allowed nesting constraint_checks_disabled inside atomic.
Since MySQL handles transactions loosely, this isn't a problem.
commit 2a4ab1cb6e83391ff7e25d08479e230ca564bfef
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sat Sep 21 18:43:12 2013 +0200
Prevented running queries in transactions that will be rolled back.
This avoids a counter-intuitive behavior in an edge case on databases
with non-atomic transaction semantics.
It prevents using savepoint_rollback() inside an atomic block without
calling set_rollback(False) first, which is backwards-incompatible in
tests.
Refs #21134.
commit 8e3db393853c7ac64a445b66e57f3620a3fde7b0
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sun Sep 22 22:14:17 2013 +0200
Replaced manual savepoints by atomic blocks.
This ensures the rollback flag is handled consistently in internal APIs.
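A minimal sketch of the replacement pattern from that last sub-commit (obj is
a placeholder):

    from django.db import transaction

    # Instead of manual savepoint()/savepoint_rollback()/savepoint_commit()
    # bookkeeping, wrap the statements in an atomic block; the rollback flag
    # is then handled consistently by the transaction machinery.
    with transaction.atomic():
        obj.save()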
Previously, if a database request spanned a related object manager, the
first manager encountered would cause a request to the router, and this
would bind all subsequent queries to the same database returned by the
router. Unfortunately, the first router query would be performed using
a read request to the router, resulting in bad routing information being
used if the subsequent query was actually a write.
This change defers the call to the router until the final query is actually
made.
It includes a small *BACKWARDS INCOMPATIBILITY* on an edge case - see the
release notes for details.
Thanks to Paul Collins (@paulcollinsiii) for the excellent debugging
work and patch.
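A hedged router sketch illustrating why the timing matters (the class name and
database aliases are placeholders):

    class PrimaryReplicaRouter:
        def db_for_read(self, model, **hints):
            return 'replica'

        def db_for_write(self, model, **hints):
            return 'default'

    # With the deferred lookup, a write reached through a related manager now
    # consults db_for_write() for the final query instead of reusing the
    # database chosen by an earlier db_for_read() call.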
The use of OrderedDict (even an empty one) was surprisingly slow. By
initializing the OrderedDict only when needed, it is possible to save a
non-trivial amount of computing time (Model.save() is around 30% faster,
for example).
This commit targeted sql.Query only; there are likely other places
which could use similar optimizations.
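A hedged sketch of the general pattern (the attribute and method names are
illustrative, not the actual sql.Query internals):

    from collections import OrderedDict

    class Query:
        def __init__(self):
            self._extras = None  # created lazily; even an empty OrderedDict is costly

        def add_extra(self, name, value):
            if self._extras is None:
                self._extras = OrderedDict()
            self._extras[name] = value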
exc_value might be None even though there's an exception, at least on
Python 2.6. Thanks Thomas Chaumeny for the report.
Fixed #21034.
Forward-port of a8624b2 from 1.6.x.
The option can be used to force the pre-1.6 style SELECT-on-save behaviour.
This is needed in case the database returns zero updated rows even if
there is a matching row in the DB. One such case is a PostgreSQL update
trigger that returns NULL.
Reviewed by Tim Graham.
Refs #16649
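The option is set in the model's Meta (the Ticket model is a placeholder):

    from django.db import models

    class Ticket(models.Model):
        title = models.CharField(max_length=100)

        class Meta:
            # Force the pre-1.6 SELECT-then-UPDATE/INSERT behaviour on
            # save(), e.g. when a PostgreSQL update trigger returns NULL and
            # the backend reports zero updated rows despite a matching row.
            select_on_save = True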
The __eq__ method now considers two instances without a primary key value
equal only when they have the same id(). The __hash__ method raises
TypeError in the no-primary-key case.
Fixed #18864, fixed #18250
Thanks to Tim Graham for docs review.
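A short sketch of the new semantics (Article is a placeholder model):

    a = Article()  # unsaved, no primary key
    b = Article()

    a == b    # False: without a pk, instances are equal only if a is b
    a == a    # True
    hash(a)   # raises TypeError now that there is no primary key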
In cases where the same connection (from model A to model B along the
same field) was needed multiple times in a select_related query, the
join setup code mistakenly reused an existing join.
Cleaned up the internal implementation of m2m fields by removing
related.py's _get_fk_val(). _get_fk_val() did the wrong thing when asked
for the foreign key value of a foreign key pointing to the parent model's
primary key while the child model had a different primary key field.
If LEFT JOINs are required for correct results, then trimming the join
can lead to incorrect results. Consider case:
TBL A: ID | TBL B: ID  A_ID
        1 |         1     1
        2 |
Now A.order_by('b__a') used a join to B and B's a_id column. This column
was seen to contain the same value as A's id, and so the join was
trimmed. But this wasn't correct because the join is a LEFT JOIN, and for
the row A.id = 2 the B.a_id column is NULL.