Several issues are resolved here, following a report that an ArrayField
with a base_field of GenericIPAddressField was failing.
We were using get_prep_value instead of get_db_prep_value in ArrayField,
which bypassed any extra modifications to the value made in the base
field's get_db_prep_value. Changing this broke datetime support, so the
postgres backend has gained the relevant operations methods to send
dates/times/datetimes directly to the db backend instead of casting
them to strings. Similarly, a new database feature has been added
allowing UUIDs to be passed directly to the backend, as we already do
with timedeltas.
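A minimal sketch of the ArrayField side of the fix, assuming a field
with a base_field attribute (illustrative, not Django's exact code):

    def get_db_prep_value(self, value, connection, prepared=False):
        # Delegate per-item preparation to the base field so its
        # backend-specific adaptations are no longer bypassed.
        if isinstance(value, (list, tuple)):
            return [self.base_field.get_db_prep_value(i, connection,
                                                      prepared=False)
                    for i in value]
        return value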
On the other side, psycopg2 expects an Inet() instance for IP address
fields, so we add a value_to_db_ipaddress method to wrap the strings on
postgres. We also have to manually add a database adapter to psycopg2,
as we do not wish to use the built-in adapter, which would turn
everything into Inet() instances.
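A sketch of the postgres side, assuming psycopg2's Inet adapter; the
method name comes from the description above:

    from psycopg2.extras import Inet

    def value_to_db_ipaddress(self, value):
        # Wrap IP address strings in Inet() only where an inet value
        # is expected, instead of letting psycopg2 adapt every string.
        if value:
            return Inet(value)
        return value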
Thanks to smclenithan for the report.
These refactorings make overriding some text-based lookup names on
other fields (specifically `contains`) much cleaner. They also remove a
bunch of duplication in the contrib.postgres lookups.
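As an illustration, a hedged sketch of what overriding a lookup name
now looks like (the class, operator and field are hypothetical, in the
spirit of the contrib.postgres containment lookups):

    from django.db.models import Lookup

    class MyContains(Lookup):
        # Reuses an existing text lookup name on a specific field.
        lookup_name = 'contains'

        def as_sql(self, compiler, connection):
            lhs, lhs_params = self.process_lhs(compiler, connection)
            rhs, rhs_params = self.process_rhs(compiler, connection)
            # '@>' is PostgreSQL's containment operator.
            return '%s @> %s' % (lhs, rhs), lhs_params + rhs_params

    # Registered on a hypothetical field class:
    # MyArrayField.register_lookup(MyContains)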
Refactored compiler SELECT, GROUP BY and ORDER BY generation.
While there, also refactored the select_related() implementation
(get_cached_row() and get_klass_info() are now gone!).
Made get_db_converters() method work on expressions instead of
internal_type. This allows the backend converters to target
specific expressions if need be.
Added query.context, which can be used to set per-query state. Also
changed the signature of database converters: they now accept context
as an argument.
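A converter with the new signature might look like this (a hedged
sketch in the spirit of the TextField converter mentioned below):

    from django.utils.encoding import force_text

    def convert_textfield_value(self, value, expression, connection,
                                context):
        # expression and context are now passed alongside value and
        # connection, so converters can target specific expressions.
        if value is not None:
            value = force_text(value)
        return value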
Added functions and tests
Added docs and more tests
Added TextField converter to mysql backend
Aliased Value as V in example docs and tests
Removed unicode_compatible in example
Fixed console emulation in examples
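A hedged usage sketch of the new functions with Value aliased as V
(Author is a hypothetical model inside a configured project):

    from django.db.models import Value as V
    from django.db.models.functions import Coalesce, Concat

    # Author is assumed to have alias, name, first_name and
    # last_name fields.
    authors = Author.objects.annotate(
        screen_name=Coalesce('alias', 'name', V('Anonymous')),
        full_name=Concat('first_name', V(' '), 'last_name'),
    )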
Use INTERVAL DAY(9) TO SECOND(6) for DurationField on Oracle rather
than storing it as a NUMBER(19) of microseconds.
There are issues with cx_Oracle which require some extra data
manipulation in the database backend when constructing queries, but it
handles the conversion back to timedelta objects cleanly.
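An illustrative fragment of the Oracle backend's type mapping after
the change:

    data_types = {
        # Previously 'NUMBER(19)', storing microseconds.
        'DurationField': 'INTERVAL DAY(9) TO SECOND(6)',
    }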
Thanks to Shai for the review.
A field for storing periods of time - modeled in Python by timedelta. It
is stored in the native interval data type on PostgreSQL and as a bigint
of microseconds on other backends.
Also includes significant changes to the internals of time-related
maths in expressions, including the removal of DateModifierNode.
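A hedged usage sketch of the new field (Run is a hypothetical model
inside a configured app):

    import datetime
    from django.db import models

    class Run(models.Model):
        duration = models.DurationField()

    # Values are plain timedeltas on the way in and out:
    # Run.objects.create(duration=datetime.timedelta(minutes=90))
    # Run.objects.filter(duration__gt=datetime.timedelta(hours=1))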
Thanks to Tim and Josh in particular for reviews.
Also removed Query.join_map. This structure was used to speed up join
reuse calculation. Initial benchmarking shows that this isn't actually
needed. If there are use cases where the removal has real-world
performance implications, it should be relatively straightforward to
reintroduce it as a map of {alias: [Join-like objects]}.
Aggregation over a subquery produced syntactically incorrect queries
in some cases, as Django didn't ensure that the source expressions of
the aggregate were present in the subquery.
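A hypothetical pattern of the kind affected (Book is illustrative);
slicing forces the aggregate to run over a subquery, and the
aggregate's source columns must be selected in that subquery:

    from django.db.models import Sum

    # The sliced queryset is wrapped in a subquery; 'pages' must
    # appear in its select list for the outer SUM() to be valid SQL.
    Book.objects.order_by('title')[:10].aggregate(total=Sum('pages'))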
The .dates() queries were implemented by using custom Query, QuerySet,
and Compiler classes. They are now implemented using the expressions
and database converters APIs instead.
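The public behaviour is unchanged; a call like this (Entry is a
hypothetical model with a pub_date field) still returns date objects,
now computed via expressions and converters:

    Entry.objects.dates('pub_date', 'year', order='DESC')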
Added a relabeled_clone() method to sql.Query to fix the problem. It
manifested itself in rare cases where the filter condition of an at
least doubly-nested subquery could target a non-existent alias.
Thanks to Trac alias ris for reporting the problem.
Made _do_update behave more strictly according to its docs, including
a corner case when certain concurrent updates are executed while
select_on_save is set.
Fixed an issue with the warning message displayed for unbound naive
datetime objects when USE_TZ is True. Added a unit test that
demonstrates the issue (discoverable when using a custom lookup in
MySQL).
This also defines QuerySet.__bool__ for consistency, though it should
not have any consequence since bool(qs) used to fall back on
QuerySet.__len__ in Py3.
Added update_or_create to RelatedManager, ManyRelatedManager and
GenericRelatedObjectManager.
Added missing get_or_create to GenericRelatedObjectManager.
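Illustrative usage on a reverse foreign key manager (Author/Book and
the 'books' related_name are hypothetical):

    book, created = author.books.update_or_create(
        title='Django By Example',
        defaults={'pages': 300},
    )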
Validates that related_name is either a valid Python identifier that
is not a keyword, or ends with a '+'. Without this check, invalid
values passed silently, leading to unpredictable problems.
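A sketch of the validation rule (illustrative, not Django's exact
check):

    import keyword

    def related_name_is_valid(related_name):
        # Valid if it ends with '+' (reverse accessor disabled) or
        # is a non-keyword Python identifier.
        if related_name.endswith('+'):
            return True
        return (related_name.isidentifier()
                and not keyword.iskeyword(related_name))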
Thanks Konrad Świat for the initial work.
Changed SingleRelatedObjectDescriptor and
ReverseSingleRelatedObjectDescriptor so they actually return QuerySet
instances.
Also ensured that SingleRelatedObjectDescriptor.get_queryset() accounts
for use_for_related_fields=True.
This cleanup lays the groundwork for #23533.
Thanks Anssi Kääriäinen for the review.
Complete rework of translating data values from the database.
Deprecation of SubfieldBase, removal of resolve_columns and
convert_values in favour of a more general converter-based approach
and the public API Field.from_db_value(). Now works seamlessly with
aggregation, .values() and raw queries.
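A hedged sketch of the new hook; HandField and parse_hand are
illustrative names, not part of Django:

    from django.db import models

    class HandField(models.Field):
        def from_db_value(self, value, expression, connection,
                          context):
            # Called on every load, including aggregation, .values()
            # and raw queries.
            if value is None:
                return value
            return parse_hand(value)  # hypothetical parsing helper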
Thanks to akaariai in particular for extensive advice and inspiration,
also to shaib, manfre and timograham for their reviews.
SQLite doesn't work with more than 1000 parameters in a single query.
The deletion code could generate queries that try to get related
objects for more than 1000 objects thus breaking the limit. Django now
splits the related object fetching into batches with at most 1000
parameters.
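An illustrative batching helper (not Django's exact code): chunk a pk
list so no single query exceeds SQLite's 1000-parameter limit.

    def batched(pk_list, batch_size=1000):
        for i in range(0, len(pk_list), batch_size):
            yield pk_list[i:i + batch_size]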
The tests and patch include some work done by Trac alias NiGhTTraX in
ticket #21205.
A regression caused queries to produce incorrect results for cases
where an extra(select) is excluded by values() but included by
extra(order_by). The regression was caused by 2f35c6f10f.
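The affected pattern, with a hypothetical Entry model: the extra
select is dropped by values() but referenced by extra(order_by).

    qs = (Entry.objects
          .extra(select={'headline_length': 'LENGTH(headline)'})
          .values('headline')
          .extra(order_by=['headline_length']))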
The reason for the regression was that the GenericForeignKey field
isn't something meta.get_field_by_name() should return, as a couple of
places in Django expect get_field_by_name() not to return it. It could
make sense to return GFKs from get_field_by_name(), but that should
likely be done as part of the meta refactoring or virtual fields
refactoring patches.
Thanks to glicerinu@gmail.com for the report and to Tim for working on
the issue.
Model.from_db() is intended to be used in cases where customization
of model loading is needed. Reasons can be performance, or adding
custom behavior to the model (for example "dirty field tracking" to
issue automatic update_fields when saving models).
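A hedged sketch of such a customization; MyModel and _loaded_values
are illustrative names:

    from django.db import models

    class MyModel(models.Model):
        @classmethod
        def from_db(cls, db, field_names, values):
            instance = super(MyModel, cls).from_db(
                db, field_names, values)
            # Remember what was loaded so save() can compute the
            # changed fields ("dirty field tracking").
            instance._loaded_values = dict(zip(field_names, values))
            return instance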
A big thank you to Tim Graham for the review!
Regression from f51c1f59 when using select_related then prefetch_related
on the reverse side of an O2O:
Author.objects.select_related('bio').prefetch_related('bio__books')
Thanks Aymeric Augustin for the report and tests. Refs #17001.
Loading fixtures was failing since the refactoring in 244e2b71f5 for
inheritance setups where the chain contains abstract models and the
root ancestor contains an M2M relation.
Thanks Stanislas Guerra for the report.
Refs #20946.
Added DateTimeCheckMixin to avoid the use of the default, auto_now,
and auto_now_add options together. Added the fields.E151 error, which
is raised if more than one of these options is set on the same field.
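A hypothetical model that the new check flags:

    import datetime
    from django.db import models

    class LogEntry(models.Model):
        # default and auto_now are mutually exclusive here, so the
        # check raises fields.E151.
        timestamp = models.DateTimeField(
            auto_now=True,
            default=datetime.datetime.now,
        )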