SQLite accepts the relevant standard SQL (although by default it doesn't
enforce the constraint), and the 'traditional' creation backend helper
generates it, so this allows us to:
- Maintain the status quo
- Improve readability of the SQL code generated for that backend.
Also, we will need this when we fix #14204. Refs #14204.
Returning None on errors required unpythonic error checking and was
inconsistent with get_app_config.
get_model was a private API until the previous commit, but given that it
was certainly used in third party software, the change is explained in
the release notes.
Applied the same change to get_registered_model, which is a new private
API introduced during the recent refactoring.
ContentTypes are only created for installed applications, and I could
make a case for not returning a model that isn't installed any more.
The check for stale ContentTypes in update_contenttypes doesn't use
model_class.
ModelSignal actually needs get_registered_model since the lookup happens
at import time. I took this opportunity to perform a small refactoring.
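A minimal sketch of the new calling pattern; the LookupError type is assumed
from get_app_config's behaviour, the import path follows the app cache
described later in this series, and the app/model names are illustrative:

    from django.core.apps import app_cache

    try:
        model = app_cache.get_model('polls', 'Question')
    except LookupError:
        model = None  # explicit fallback replaces the old implicit None return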
This removes the gap between the master app registry and ad-hoc app
registries created by the migration framework, specifically in terms
of behavior of the get_model[s] methods.
This commit contains a stealth feature that I'd rather not describe.
Made it use the 'AUTOINCREMENT' suffix for PK creation. This way it doesn't
regress when compared with the 'traditional' DB backend creation
infrastructure.
Refs #10164.
The last component of the dotted path to the application module is
consistently referenced as the application "label". For instance it's
AppConfig.label. appname could be confused with AppConfig.name, which is
the full dotted path.
It was called _populate() before I renamed it to populate(). Since it
has been superseded by populate_models() there's no reason to keep it.
Removed the can_postpone argument of load_app() as it was only used by
populate(). It's a private API and there's no replacement. Simplified
load_app() accordingly. The new version behaves exactly like the old
one even though it's much shorter.
Since applications that aren't installed no longer have an application
configuration, the "installed" flag is now always True in practice.
Provided an abstraction to temporarily add or remove applications as
several tests messed with app_config.installed to achieve this effect.
For now this API is _-prefixed because it looks dangerous.
Got rid of AppConfig._stub. As a side effect, app_cache.app_configs now
only contains entries for applications that are in INSTALLED_APPS, which
is a good thing and will allow dramatic simplifications (which I will
perform in the next commit). That required adjusting all methods that
iterate on app_configs without checking the "installed" flag, hence the
large changes in get_model[s].
Introduced AppCache.all_models to store models:
- while the app cache is being populated and a suitable app config
object to register models isn't available yet;
- for applications that aren't in INSTALLED_APPS since they don't have
an app config any longer.
Replaced get_model(seed_cache=False) with registered_model(), which can be
kept simple and safe to call at any time, and removed the seed_cache
argument to get_model[s]. There's no replacement for that private API.
Allowed non-master app caches to go through populate() as it is now
safe to do so. They were introduced in 1.7 so backwards compatibility
isn't a concern as long as the migrations framework keeps working.
Used the information from the app cache instead of creating a duplicate
based on INSTALLED_APPS.
Model._meta.installed is no longer writable. It was a rather sketchy way
to alter private internals anyway.
Improved Andrew's hack to create temporary app caches to handle
migrations. Now the main app cache has a "master" flag set to True
(which is a non-default keyword argument, thus unlikely to be used by
mistake). Other app cache instances have "master" set to False.
The only sanctioned way to access the app cache is by importing
django.core.apps.app_cache.
If you were instantiating an app cache and relying on the Borg pattern,
you'll have to refactor your code.
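For example, a sketch of the refactoring this implies (the old import path is
shown only for contrast and may vary depending on the Django version you are
coming from):

    # New: import the shared instance.
    from django.core.apps import app_cache

    # Old: instantiation relied on the Borg pattern to share state.
    # from django.db.models.loading import AppCache
    # app_cache = AppCache()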
Several parts of Django call get_apps() with a comment along the lines
of "this has the side effect of calling _populate()". I fail to see how
this is better than just calling populate()!
Since the original ones in django.db.models.loading were kept only for
backwards compatibility, there's no need to recreate them. However, many
internals of Django still relied on them.
They were also imported in django.db.models. They never appear in the
documentation, except for a quick mention of get_models and get_app in the
1.2 release notes to document an edge case in GIS. I don't think that
makes them a public API.
This commit doesn't change the overall amount of global state but
clarifies that it's tied to the app_cache object instead of hiding it
behind half a dozen functions.
The `remove()` and `clear()` methods of the related managers created by
`ForeignKey`, `GenericForeignKey`, and `ManyToManyField` suffered from a
number of issues. Some operations ran multiple data modifying queries without
wrapping them in a transaction, and some operations didn't respect default
filtering when it was present (i.e. when the default manager on the related
model implemented a custom `get_queryset()`).
Fixing the issues introduced some backward incompatible changes:
- The implementation of `remove()` for `ForeignKey` related managers changed
from a series of `Model.save()` calls to a single `QuerySet.update()` call;
see the sketch after this list. The change means that `pre_save` and
`post_save` signals aren't sent anymore.
- The `remove()` and `clear()` methods for `GenericForeignKey` related
managers now perform bulk delete so `Model.delete()` isn't called anymore.
- The `remove()` and `clear()` methods for `ManyToManyField` related
managers perform nested queries when filtering is involved, which may
or may not be an issue depending on the database and the data itself.
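A sketch of the first incompatibility, with illustrative models:

    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Book(models.Model):
        author = models.ForeignKey(Author, null=True)

    # Given existing author and book instances: remove() used to save each
    # object individually, emitting pre_save and post_save for every Book.
    # It now issues a single UPDATE, roughly
    # Book.objects.filter(pk=book.pk).update(author=None),
    # so those signals are no longer sent.
    author.book_set.remove(book)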
Refs. #3871, #21174.
Thanks Anssi Kääriäinen and Tim Graham for the reviews.
This patch introduces the Prefetch object which allows customizing prefetch
operations.
This enables things like filtering prefetched relations, calling select_related
from a prefetched relation, or prefetching the same relation multiple times
with different querysets.
When a Prefetch instance specifies a to_attr argument, the result is stored
in a list rather than a QuerySet. This has the fortunate consequence of being
significantly faster, because we skip the costly creation of a QuerySet
instance.
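For instance, a sketch of the new API with illustrative model and field names:

    from django.db.models import Prefetch

    authors = Author.objects.prefetch_related(
        # Filter the prefetched relation.
        Prefetch('books', queryset=Book.objects.filter(published=True)),
        # Prefetch the same relation again with a different queryset; to_attr
        # stores the result in a plain list instead of a QuerySet.
        Prefetch('books', queryset=Book.objects.order_by('-pubdate'),
                 to_attr='recent_books'),
    )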
Thanks @akaariai for the original patch and @bmispelon and @timgraham
for the reviews.
This commit introduces a new class, JoinPromoter, that can be used to
abstract away join promotion problems for complex filter conditions.
Query._add_q() and Query.combine() now use the new class.
Also, added a lot of comments about why join promotion is done the way
it is.
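A sketch of the kind of filter whose joins need promotion (model and relation
names are illustrative): ORing a condition on a relation with an isnull test
means the join can't stay an INNER JOIN, or authors without books would be
silently dropped.

    from django.db.models import Q

    Author.objects.filter(Q(book__title='A') | Q(book__isnull=True))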
Thanks to Tim Graham for the original report and for testing the changes,
and to Loic Bistuer for the review.
Thanks dan at dlo.me for the initial patch.
- Added __pow__ and __rpow__ to ExpressionNode (see the sketch below)
- Added Oracle- and MySQL-specific power expressions
- Added a user-defined power function for SQLite
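A sketch of what this enables at the ORM level (model and field names are
illustrative):

    from django.db.models import F

    # __pow__: an F expression as the base of a power.
    Square.objects.update(area=F('side') ** 2)
    # __rpow__: the reflected case, with the expression as the exponent.
    Growth.objects.update(value=2 ** F('periods'))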
The typo could have consequences in exceptional cases, but I didn't
figure out a way to actually produce such a case, so no tests.
Report & patch by Michael Manfre.
select_related('foo').select_related('bar') is now equivalent to
select_related('foo', 'bar').
Also reworded docs to recommend select_related(*fields) over select_related()
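In other words, with illustrative model and field names:

    # These now produce the same joins; previously the second call replaced
    # the field list set up by the first one.
    Entry.objects.select_related('blog').select_related('author')
    Entry.objects.select_related('blog', 'author')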
Squashed commit of the following:
commit 63ddb271a44df389b2c302e421fc17b7f0529755
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sun Sep 29 22:51:00 2013 +0200
Clarified interactions between atomic and exceptions.
commit 2899ec299228217c876ba3aa4024e523a41c8504
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sun Sep 22 22:45:32 2013 +0200
Fixed TransactionManagementError in tests.
The previous commit introduced an additional check to prevent running
queries in transactions that will be rolled back, which triggered a few
failures in the tests. In practice using transaction.atomic instead of
the low-level savepoint APIs was enough to fix the problems.
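The kind of substitution involved, sketched with an illustrative operation
that may fail:

    from django.db import IntegrityError, transaction

    # Instead of pairing savepoint()/savepoint_rollback() by hand, wrap the
    # risky queries in atomic(); an exception leaving the block rolls it back.
    try:
        with transaction.atomic():
            do_something_that_may_conflict()
    except IntegrityError:
        pass  # the atomic block has already rolled back to its savepoint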
commit 4a639b059ea80aeb78f7f160a7d4b9f609b9c238
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Tue Sep 24 22:24:17 2013 +0200
Allowed nesting constraint_checks_disabled inside atomic.
Since MySQL handles transactions loosely, this isn't a problem.
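A sketch of the nesting that is now allowed (the loading function is
illustrative):

    from django.db import connection, transaction

    with transaction.atomic():
        # Temporarily turns off foreign key checks on MySQL; safe to nest
        # inside atomic() because MySQL handles transactions loosely.
        with connection.constraint_checks_disabled():
            load_fixture_data()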
commit 2a4ab1cb6e83391ff7e25d08479e230ca564bfef
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sat Sep 21 18:43:12 2013 +0200
Prevented running queries in transactions that will be rolled back.
This avoids a counter-intuitive behavior in an edge case on databases
with non-atomic transaction semantics.
It prevents using savepoint_rollback() inside an atomic block without
calling set_rollback(False) first, which is backwards-incompatible in
tests.
Refs #21134.
commit 8e3db393853c7ac64a445b66e57f3620a3fde7b0
Author: Aymeric Augustin <aymeric.augustin@m4x.org>
Date: Sun Sep 22 22:14:17 2013 +0200
Replaced manual savepoints by atomic blocks.
This ensures the rollback flag is handled consistently in internal APIs.
Previously, if a database request spanned a related object manager, the
first manager encountered would cause a request to the router, and this
would bind all subsequent queries to the same database returned by the
router. Unfortunately, the first router query would be performed using
a read request to the router, resulting in bad routing information being
used if the subsequent query was actually a write.
This change defers the call to the router until the final query is actually
made.
It includes a small *BACKWARDS INCOMPATIBILITY* on an edge case - see the
release notes for details.
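A sketch of a router where the distinction matters (the database aliases are
illustrative):

    class PrimaryReplicaRouter:
        """Send reads to a replica and writes to the primary."""

        def db_for_read(self, model, **hints):
            return 'replica'

        def db_for_write(self, model, **hints):
            return 'primary'

    # With this change, a write reached through a related object manager asks
    # db_for_write() when the query actually runs, instead of reusing the
    # 'replica' answer obtained when the manager was first touched.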
Thanks to Paul Collins (@paulcollinsiii) for the excellent debugging
work and patch.
The use of OrderedDict (even an empty one) was surprisingly slow. By
initializing the OrderedDict only when needed, it is possible to save a
non-trivial amount of computing time (Model.save() is around 30% faster,
for example).
This commit targets sql.Query only; there are likely other places
that could use similar optimizations.
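The general shape of the optimization, sketched outside sql.Query with an
illustrative attribute name:

    from collections import OrderedDict

    class Query:
        def __init__(self):
            # Building an OrderedDict here is surprisingly costly, so defer it.
            self._extra = None

        @property
        def extra(self):
            if self._extra is None:
                self._extra = OrderedDict()
            return self._extra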
exc_value might be None even though there's an exception, at least on
Python 2.6. Thanks Thomas Chaumeny for the report.
Fixed #21034.
Forward-port of a8624b2 from 1.6.x.
The option can be used to force the pre-1.6 SELECT-on-save behaviour.
This is needed in case the database returns zero updated rows even if
there is a matching row in the DB. One such case is a PostgreSQL update
trigger that returns NULL.
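Assuming this is the Meta.select_on_save option, a minimal sketch (model name
illustrative):

    from django.db import models

    class Account(models.Model):
        class Meta:
            # Force the pre-1.6 behaviour: SELECT to check for an existing
            # row before choosing between INSERT and UPDATE on save().
            select_on_save = True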
Reviewed by Tim Graham.
Refs #16649