This adds a new method, Apps.lazy_model_operation(), and a helper function,
lazy_related_operation(), which together supersede add_lazy_relation() and
make lazy model operations the responsibility of the App registry. This
system no longer uses the class_prepared signal.
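A minimal sketch of the new helper's shape (the "library.Author" string, the
Book model, the fk_field variable, and the callback name are illustrative
assumptions, not Django's exact code):

    from django.db.models.fields.related import lazy_related_operation

    def resolve_related_class(model, related, field):
        # Called with the resolved model classes once 'library.Author'
        # is registered with the app registry.
        field.remote_field.model = related

    # Registers the callback with the app registry instead of relying on
    # the class_prepared signal; extra kwargs are passed through.
    lazy_related_operation(resolve_related_class, Book, 'library.Author',
                           field=fk_field)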
Field.rel is now deprecated. Rel objects now also have a remote_field
attribute, which means that self == self.remote_field.remote_field.
In addition, the Rel objects were made a bit more like Field objects.
ManyToManyFields are still marked as null=True.
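A small illustration of the new symmetry, assuming a hypothetical Book model
with a ForeignKey named "author":

    field = Book._meta.get_field('author')
    rel = field.remote_field           # the Rel object (formerly field.rel)
    assert rel.remote_field is field   # self == self.remote_field.remote_field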
Previously, related fields didn't implement get_lookup(); instead, they
were treated specially. This commit removed some of that special
handling. In particular, related fields now return Lookup instances, too.
Another notable change in this commit is the removal of support for
annotations in names_to_path().
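For illustration (the Book model and its "author" foreign key are
assumptions), the related field now participates in the regular lookup
machinery; get_lookup() resolves the registered lookup class, from which the
compiler builds Lookup instances instead of special-casing the relation:

    field = Book._meta.get_field('author')
    exact = field.get_lookup('exact')  # registered lookup class, no special handling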
If Field.choices is provided as an iterator, consume it in __init__ instead
of using itertools.tee (which ends up holding everything in memory
anyway). Fixes a bug where deconstruct() was consuming the iterator but
bypassing the call to `tee`.
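A hedged sketch of the pattern (an illustrative stand-in class, not Django's
actual Field):

    import collections.abc

    class MyField:
        def __init__(self, choices=None):
            if isinstance(choices, collections.abc.Iterator):
                # Consume the iterator once; tee() would buffer it all anyway.
                choices = list(choices)
            self.choices = choices

        def deconstruct(self):
            # self.choices can now be read any number of times, e.g. by
            # migration serialization.
            return {'choices': self.choices}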
Table alterations in SQLite require creating a new table and copying
data over from the old one. This change ensures that no Django model
ever exists with the temporary table name as its db_table attribute.
The query used a construct of qs.annotate().values().aggregate() where
the first annotate() used an F-object reference and the values() and
aggregate() calls referenced that F-object.
Also made sure the inner query's select clause is as simple as possible,
and that .values().distinct().aggregate() works correctly.
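An illustrative query of that shape (the Item model and the price/final names
are assumptions):

    from django.db.models import F, Sum

    (Item.objects
         .annotate(final=F('price'))
         .values('final')
         .aggregate(total=Sum('final')))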
Set apps.ready to False when rendering multiple models. This prevents
the cache on Model._meta from being expired on all models each time a
single model is rendered. Also prevented Apps.clear_cache() from
refilling the Apps.get_models() cache, so that the wrong value cannot be
cached when cloning a StateApps.
There's no reason to assume that sys.path[0] is an appropriate location
for generating code. Specifically, that doesn't work with extend_sys_path,
which puts the additional directories at the end of sys.path.
In order to create a new migrations module, instead of using an
arbitrary entry from sys.path, import as much of the dotted path to the
module as possible, then create the missing submodules from there.
Without this change, the tests introduced in the following commit fail,
which seems sufficient to prevent regressions for such a refactoring.
These cached properties were causing problems with pickling, and in
addition they were confusingly defined: field.rel.model._meta was
not the same as field.rel.opts.
Instead, users should use field.rel.related_model._meta in place of
field.rel.opts, and field.rel.to._meta in place of field.rel.to_opts.
Switched from an adjacency list and uncached, iterative depth-first
search to a Node-based design with direct parent/child links and a
cached, recursive depth-first search. With this change, calculating
a migration plan for a large graph takes several seconds instead of
several hours.
Marked test `migrations.test_graph.GraphTests.test_dfs` as an expected
failure due to reaching the maximum recursion depth.
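A hedged sketch of the general idea (illustrative, not the exact
MigrationGraph code): each node keeps direct parent/child links and memoizes
its depth-first ancestor list, so repeated plan calculations reuse the cached
result:

    import functools

    class Node:
        def __init__(self, key):
            self.key = key
            self.parents = set()
            self.children = set()

        def add_parent(self, parent):
            self.parents.add(parent)
            parent.children.add(self)

        @functools.lru_cache(maxsize=None)
        def ancestors(self):
            # Recursive DFS over parent links; the result is cached per
            # node, assuming the graph is fully built before traversal.
            seen = []
            for parent in self.parents:
                for key in parent.ancestors():
                    if key not in seen:
                        seen.append(key)
            seen.append(self.key)
            return tuple(seen)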
The new signature enables better support for routing RunPython and
RunSQL operations, especially w.r.t. reusable and third-party apps.
This commit also takes advantage of the deprecation cycle for the old
signature to remove the backward incompatibility introduced in #22583;
RunPython and RunSQL won't call allow_migrate() when the router
has the old signature.
Thanks Aymeric Augustin and Tim Graham for helping shape up the patch.
Refs #22583.
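A minimal sketch of a router written against the new signature (the
"analytics" app label and database alias are assumptions); RunPython and
RunSQL operations can now be routed by app_label and hints rather than
requiring a concrete model:

    class AnalyticsRouter:
        def allow_migrate(self, db, app_label, model_name=None, **hints):
            if app_label == 'analytics':
                return db == 'analytics_db'
            return None  # no opinion for other apps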
As suggested by Anssi. This has the slightly strange side effect that
an expression passed to Expression.convert_value() receives itself back
as the expression argument, but it allows more complex patterns of
expressions.
Swapped out models don't have a _default_manager unless they have
explicitly defined managers. ModelState.from_model() now accounts for
this case and uses an empty list for managers if no explicit managers
are defined and a model is swapped out.
Instead of naively reloading only directly related models (FK, O2O, and
M2M relationships), the project state needs to reload those models'
relations as well whenever a model changes. Furthermore, inheriting
models (and their parent models) need to be reloaded in order to keep
inherited fields in sync.
To prevent endless recursive calls, an iterative approach is taken.
Previously Django only checked for the table name in CreateModel
operations in initial migrations and faked the migration automatically.
This led to various errors and unexpected behavior. The newly introduced
--fake-initial flag for the migrate command must now be passed to get the
same behavior. With this change, Django will bail out with a
"duplicate relation / table" error instead.
Thanks Carl Meyer and Tim Graham for the documentation update, report
and review.
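For reference, a minimal sketch of opting back in, assuming the option maps
to the fake_initial keyword when invoked programmatically:

    from django.core.management import call_command

    call_command('migrate', fake_initial=True)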
During direct assignment, evaluating the iterable before the transaction
is started avoids leaving the transaction dirty if an exception is raised.
This is slightly more wasteful but probably not enough to warrant a change
of behavior.
Thanks Anssi for the feedback. Refs #6707.
Instead of splitting filter clauses to where and having parts before
adding them to query.where or query.having, add all filter clauses to
query.where, and when compiling the query, split the where into having
and where parts.
The method is mainly intended for use with UUIDField. For UUIDField we
want to call the field's default even when the primary key value is
explicitly set to None, to match the behavior of AutoField.
Thanks to Marc Tamlyn and Tim Graham for review.
An explicit `__exact` lookup in the related managers' filters
was interpreted as a reference to a foreign `exact` field.
Thanks to Trac alias zhiyajun11 for the report, Josh for the investigation,
Loïc for the test name and Tim for the review.
Several issues resolved here, following from a report that a base_field
of GenericIPAddressField was failing.
We were using get_prep_value instead of get_db_prep_value in ArrayField
which was bypassing any extra modifications to the value being made in
the base field's get_db_prep_value. Changing this broke datetime
support, so the postgres backend has gained the relevant operation
methods to send dates/times/datetimes directly to the db backend instead
of casting them to strings. Similarly, a new database feature has been
added allowing the uuid to be passed directly to the backend, as we do
with timedeltas.
On the other side, psycopg2 expects an Inet() instance for IP address
fields, so we add a value_to_db_ipaddress method to wrap the strings on
postgres. We also have to manually add a database adapter to psycopg2,
as we do not wish to use the built-in adapter, which would turn
everything into Inet() instances.
Thanks to smclenithan for the report.
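A hedged sketch of the delegation described above (not the exact ArrayField
code): each item is handed to the base field's get_db_prep_value() so
backend-specific adaptation is no longer bypassed:

    def get_db_prep_value(self, value, connection, prepared=False):
        if isinstance(value, (list, tuple)):
            return [self.base_field.get_db_prep_value(item, connection)
                    for item in value]
        return value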
It was mainly for MySQL on Python 3, but now that the currently
recommended MySQL driver for Python 3 (mysqlclient) supports binary
fields, it is unneeded. Refs #20377.
These refactorings make overriding some text-based lookup names on
other fields (specifically `contains`) much cleaner. They also remove a
bunch of duplication in the contrib.postgres lookups.
Refactored compiler SELECT, GROUP BY and ORDER BY generation.
While there, also refactored the select_related() implementation
(get_cached_row() and get_klass_info() are now gone!).
Made get_db_converters() method work on expressions instead of
internal_type. This allows the backend converters to target
specific expressions if need be.
Added query.context, which can be used to set per-query state.
Also changed the signature of database converters; they now accept
context as an argument.
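A hedged sketch of a backend converter under the new signature (the UUID
conversion is illustrative):

    import uuid

    def convert_uuidfield_value(value, expression, connection, context):
        # Converters now receive the originating expression and the
        # per-query context in addition to the raw database value.
        if value is not None:
            value = uuid.UUID(value)
        return value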