Use INTERVAL DAY(9) TO SECOND(6) for DurationField on Oracle rather than
storing as a NUMBER(19) of microseconds.
There are issues with cx_Oracle which require some extra data
manipulation in the database backend when constructing queries, but it
handles the conversion back to timedelta objects cleanly.
Thanks to Shai for the review.
A field for storing periods of time - modeled in Python by timedelta. It
is stored in the native interval data type on PostgreSQL and as a bigint
of microseconds on other backends.
Also includes significant changes to the internals of time-related maths
in expressions, including the removal of DateModifierNode.
Thanks to Tim and Josh in particular for reviews.
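A minimal usage sketch, assuming a hypothetical Run model; only
DurationField itself and the timedelta lookups come from this change:

    import datetime

    from django.db import models

    class Run(models.Model):
        # Stored as a native interval on PostgreSQL and as a bigint of
        # microseconds on other backends.
        duration = models.DurationField()

    Run.objects.create(duration=datetime.timedelta(minutes=90))
    Run.objects.filter(duration__gt=datetime.timedelta(hours=1))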
Also removed Query.join_map. This structure was used to speed up join
reuse calculation. Initial benchmarking shows that this isn't actually
needed. If there are use cases where the removal has real-world
performance implications, it should be relatively straightforward to
reintroduce it as a map of {alias: [Join-like objects]}.
Aggregation over a subquery produced syntactically incorrect queries in
some cases as Django didn't ensure that source expressions of the
aggregation were present in the subquery.
The .dates() queries were implemented by using custom Query, QuerySet,
and Compiler classes. Instead, implement them using the expressions and
database converters APIs.
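The public behaviour is unchanged; a short sketch with a hypothetical
Entry model that has a pub_date field:

    # Returns a list of distinct dates at the requested granularity,
    # now built internally with expressions and database converters.
    Entry.objects.dates('pub_date', 'year')
    Entry.objects.dates('pub_date', 'month', order='DESC')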
Added a relabeled_clone() method to sql.Query to fix the problem. It
manifested itself in rare cases where the filter condition of an at least
doubly nested subquery could target a non-existent alias.
Thanks to Trac alias ris for reporting the problem.
Made _do_update behave more strictly according to its docs,
including a corner case when specific concurrent updates are
executed and select_on_save is set.
Fixed the warning message displayed for unbound naive datetime
objects when USE_TZ is True. Added a unit test that demonstrates the issue
(discoverable when using a custom lookup in MySQL).
This also defines QuerySet.__bool__ for consistency, though this should not
have any consequence, as bool(qs) used to fall back on QuerySet.__len__ in Py3.
Added update_or_create to RelatedManager, ManyRelatedManager and
GenericRelatedObjectManager.
Added missing get_or_create to GenericRelatedObjectManager.
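A sketch of the new call on a related manager, assuming a hypothetical
Author/Book pair where Book has a ForeignKey to Author:

    book, created = author.book_set.update_or_create(
        title='Collected Stories',
        defaults={'published': True},
    )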
Validates that related_name is a valid Python identifier or ends with a '+'
and is not a keyword. Without this check, invalid values passed silently,
leading to unpredictable problems.
Thanks Konrad Świat for the initial work.
Fixed the get_queryset() methods of SingleRelatedObjectDescriptor and
ReverseSingleRelatedObjectDescriptor so they actually return QuerySet
instances.
Also ensured that SingleRelatedObjectDescriptor.get_queryset() accounts
for use_for_related_fields=True.
This cleanup lays the groundwork for #23533.
Thanks Anssi Kääriäinen for the review.
Complete rework of translating data values from the database.
Deprecation of SubfieldBase, removal of resolve_columns and
convert_values in favour of a more general converter based approach and
public API Field.from_db_value(). Now works seamlessly with aggregation,
.values() and raw queries.
Thanks to akaariai in particular for extensive advice and inspiration,
also to shaib, manfre and timograham for their reviews.
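A minimal sketch of the new public API, assuming a hypothetical Handle
wrapper class; from_db_value() is applied to values coming back from the
database, including through aggregation, .values() and raw queries:

    from django.db import models

    class Handle(object):
        def __init__(self, value):
            self.value = value

    class HandleField(models.CharField):
        def from_db_value(self, value, expression, connection, context):
            # Called on load instead of the deprecated SubfieldBase machinery.
            if value is None:
                return value
            return Handle(value)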
SQLite doesn't work with more than 1000 parameters in a single query.
The deletion code could generate queries that try to get related
objects for more than 1000 objects thus breaking the limit. Django now
splits the related object fetching into batches with at most 1000
parameters.
The tests and patch include some work done by Trac alias NiGhTTraX in
ticket #21205.
A regression caused queries to produce incorrect results for cases where
extra(select) is excluded by values() but included by extra(order_by).
The regression was caused by 2f35c6f10f.
The reason for the regression was that the GenericForeignKey field isn't
something meta.get_field_by_name() should return, as a couple of places in
Django expect get_field_by_name() to work this way. It could make sense to
return GFKs from get_field_by_name(), but that should likely be done as part
of the meta refactoring or virtual fields refactoring patches.
Thanks to glicerinu@gmail.com for the report and to Tim for working on
the issue.
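For reference, a sketch of the affected combination with a hypothetical
Author model; the extra select alias is dropped by values() but still
referenced in extra(order_by):

    qs = (
        Author.objects
        .extra(select={'name_length': 'LENGTH(name)'})
        .values('name')                    # 'name_length' is not selected here
        .extra(order_by=['name_length'])   # ...but is still used for ordering
    )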
Model.from_db() is intended to be used in cases where customization
of model loading is needed. Reasons can be performance, or adding custom
behavior to the model (for example "dirty field tracking" to issue
automatic update_fields when saving models).
A big thank you to Tim Graham for the review!
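A sketch of the hook, assuming a hypothetical Account model; the
_loaded_values attribute is just an illustration of "dirty field tracking":

    from django.db import models

    class Account(models.Model):
        balance = models.DecimalField(max_digits=10, decimal_places=2)

        @classmethod
        def from_db(cls, db, field_names, values):
            instance = super(Account, cls).from_db(db, field_names, values)
            # Remember the values as loaded so save() could later compute
            # update_fields from what actually changed.
            instance._loaded_values = dict(zip(field_names, values))
            return instance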
Regression from f51c1f59 when using select_related then prefetch_related
on the reverse side of an O2O:
Author.objects.select_related('bio').prefetch_related('bio__books')
Thanks Aymeric Augustin for the report and tests. Refs #17001.
Loading fixtures was failing since the refactoring in 244e2b71f5 for
inheritance setups where the chain contains abstract models and the
root ancestor contains a M2M relation.
Thanks Stanislas Guerra for the report.
Refs #20946.
Added DateTimeCheckMixin to avoid the use of default, auto_now, and
auto_now_add options together. Added the fields.E151 error that is raised
when more than one of these options is used on the same field.
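A sketch of a field definition that now triggers the check, using a
hypothetical Event model:

    from django.db import models
    from django.utils import timezone

    class Event(models.Model):
        # default and auto_now_add are mutually exclusive; the system check
        # framework reports the fields.E151 error described above.
        created = models.DateTimeField(auto_now_add=True, default=timezone.now)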
Previously, known related objects overwrote related objects loaded
through select_related. This could cancel the effect of select_related
when it was used over more than one level.
Thanks boxm for the bug report and timo for bisecting the regression.
In some cases, this could lead to migrations written with Python 2
being incompatible with Python 3.
Thanks Tim Graham for the report and Loïc Bistuer for the advice.
Ordering by reverse foreign key was broken by custom lookups patch
(commit 20bab2cf9d).
Thanks to everybody who helped solve this issue. Special thanks to
Trac alias takis for reporting this.
When custom lookups were added, converting the search lookup to use
the new Lookup infrastructure wasn't done.
Some changes were needed to the added test; the main change done by the
committer was ensuring the test works on MySQL versions prior to 5.6.
The save step is now centralized in create(), which is especially useful
when customizing behavior in subclasses.
Thanks craig.labenz@gmail.com for the report.
The ticket was originally about two failing tests, which are
fixed by putting their queries in transactions.
Thanks Tim Graham for the report, Aymeric Augustin for the fix,
and Simon Charette, Tim Graham & Loïc Bistuer for review.
Since an assignment to an M2M or reverse FK descriptor is composed of a
`clear()` followed by an `add()`, `clear()` could potentially affect the value
of the assigned queryset before the `add()` step; pre-evaluating it solves the
problem.
This patch fixes the issue for ForeignRelatedObjectsDescriptor,
ManyRelatedObjectsDescriptor, and ReverseGenericRelatedObjectsDescriptor.
It completes 6cb6e1 which addressed ReverseManyRelatedObjectsDescriptor.
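A sketch of the pattern that used to break, with hypothetical Author/Book
models where Book has a nullable ForeignKey to Author:

    # Keep only this author's unpublished books.
    keep = author.book_set.filter(published=False)
    author.book_set = keep
    # Before this change, the implicit clear() nulled the FK on all of the
    # author's books first, so `keep` evaluated to an empty queryset and
    # add() re-added nothing; the queryset is now evaluated before clear().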
db_parameters should respect an already existing db_type method and
return that as its type string. In particular, this was causing some
fields from gis to not be generated.
Thanks to @bigsassy and @blueyed for their work on the patch.
Also fixed #22260.
Previously, saving a model instance to a non-related field (in
particular a FloatField) would silently convert the model to an Integer
(the pk) and save it. This is undesirable behaviour and likely to cause
confusion, so the validation has been hardened.
Thanks to @PirosB3 for the patch and @jarshwah for the review.
GenericRelation now supports an optional related_query_name argument.
Setting related_query_name adds a relation from the related object back to
the content type for filtering, ordering and other query operations.
Thanks to Loic Bistuer for spotting a couple of important issues in
his review.
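A sketch of the new argument, using the usual hypothetical Tag/Bookmark pair:

    from django.contrib.contenttypes.fields import (
        GenericForeignKey, GenericRelation,
    )
    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class Tag(models.Model):
        name = models.CharField(max_length=50)
        content_type = models.ForeignKey(ContentType)
        object_id = models.PositiveIntegerField()
        content_object = GenericForeignKey('content_type', 'object_id')

    class Bookmark(models.Model):
        url = models.URLField()
        tags = GenericRelation(Tag, related_query_name='bookmarks')

    # The relation can now be traversed from Tag back to Bookmark:
    Tag.objects.filter(bookmarks__url__contains='django')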
The original patch for custom prefetches didn't allow usage of a custom
queryset for single-valued relations (along ForeignKey or OneToOneField).
Allowing these enables calling performance oriented queryset methods like
select_related or defer/only.
Thanks @akaariai and @timgraham for the reviews. Refs #17001.
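A sketch of what this enables, assuming hypothetical Book/Author models
where Book has a ForeignKey author and Author has a one-to-one bio:

    from django.db.models import Prefetch

    books = Book.objects.prefetch_related(
        Prefetch('author', queryset=Author.objects.select_related('bio')),
    )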
Previously, doing so resulted in invalid data or a crash.
Thanks jtiai for the report and Karol Jochelson,
Jakub Nowak, Loic Bistuer, and Baptiste Mispelon for reviews.
ForeignKey or ManyToManyField attribute ``limit_choices_to`` can now
be a callable that returns either a ``Q`` object or a dict.
Thanks michael at actrix.gen.nz for the original suggestion.
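A sketch of the callable form, with a hypothetical Ticket model; the
callable may return either a Q object or a dict:

    from django.db import models
    from django.db.models import Q

    def limit_to_staff():
        return Q(is_staff=True)

    class Ticket(models.Model):
        assignee = models.ForeignKey('auth.User', limit_choices_to=limit_to_staff)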
Overriding the error messages now works for unique fields, unique_together,
and unique_for_date.
This patch changed the overriding logic to allow customizing NON_FIELD_ERRORS,
since previously only field errors were customizable.
Refs #20199.
Thanks leahculver for the suggestion.
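A sketch of the override, assuming a hypothetical Article model with a
unique_together constraint:

    from django.core.exceptions import NON_FIELD_ERRORS
    from django.db import models

    class Article(models.Model):
        title = models.CharField(max_length=100)
        pub_date = models.DateField()

        class Meta:
            unique_together = [('title', 'pub_date')]
            error_messages = {
                NON_FIELD_ERRORS: {
                    'unique_together': "%(model_name)s's %(field_labels)s are not unique.",
                },
            }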
This commit touches various parts of the code base and test framework. Any
found usage of opening a cursor for the sake of initializing a connection
has been replaced with 'ensure_connection()'.
Updated SQLUpdateCompiler.execute_sql to match the behavior described in
the docstring; the 'first non-empty query' will now include all queries,
not just the main and first related update.
Added CURSOR and NO_RESULTS result_type constants to make the usages more
self-documenting and allow execute_sql to explicitly close the cursor when
it is no longer needed.
The combination of BaseManager.from_queryset() and RenameMethodsBase results in
Manager.__module__ having the wrong value. This can be an issue when trying to
pickle the Manager class.
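A sketch of the combination in question, with a hypothetical
PublishedQuerySet; from_queryset() builds a Manager subclass whose
__module__ (and hence picklability) is what this change fixes:

    from django.db import models

    class PublishedQuerySet(models.QuerySet):
        def published(self):
            return self.filter(published=True)

    PublishedManager = models.Manager.from_queryset(PublishedQuerySet)

    class Book(models.Model):
        published = models.BooleanField(default=False)
        objects = PublishedManager()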
This is the result of Christopher Medrela's 2013 Summer of Code project.
Thanks also to Preston Holmes, Tim Graham, Anssi Kääriäinen, Florian
Apolloner, and Alex Gaynor for review notes along the way.
Also: fixes #8579, fixes #3055, fixes #19844.