The use of OrderedDict (even an empty one) was surprisingly slow. By
initializing OrderedDict only when needed it is possible to save a
non-trivial amount of computing time (Model.save() is around 30% faster,
for example).
This commit targeted sql.Query only; there are likely other places
which could use similar optimizations.
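A minimal sketch of the pattern (hypothetical class, not the actual
sql.Query attributes):

    from collections import OrderedDict

    class LazyMapHolder:
        # Hypothetical example: instead of building an OrderedDict eagerly
        # in __init__, which costs time even when it is never used,
        def __init__(self):
            self.extra = None  # cheap placeholder

        def add_extra(self, key, value):
            # create the OrderedDict lazily, only on first use.
            if self.extra is None:
                self.extra = OrderedDict()
            self.extra[key] = value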
The bug was already fixed by 01b9c3d519,
so only tests were added.
At the same time promote_joins()'s unconditional flag is gone; it isn't
needed for anything any more.
When .values() and .aggregate() were combined, the aggregate_select_mask
didn't include the added aggregates. This resulted in a bogus query.
Thanks to Trac alias debanshuk for the report.
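For illustration, the affected shape of query (hypothetical Book model;
the aggregate added by .aggregate() has to end up in the mask as well):

    from django.db.models import Count

    # The point is the .values() + .aggregate() combination.
    Book.objects.values('publisher').aggregate(n_books=Count('pk'))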
There were a couple of places which used Query.join() directly. By
using setup_joins() in these places the code is more DRY, and in
addition there is no need to directly call field.get_joining_columns()
unless the field is the given join_field from get_path_info(). This
makes it easier to make sure a ForeignObject subclass generates joins
correctly in all cases.
The join used by select_related was incorrectly INNER when the query
had an ORed filter for a nullable join that was trimmed away. This was
fixed by forcing the join type to LOUTER even when a join was trimmed
away in ORed queries.
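An illustration of the affected shape of query (hypothetical models;
author is assumed to be a nullable foreign key):

    from django.db.models import Q

    # The isnull branch lets the author join be trimmed away, but
    # select_related('author') must still end up with a LEFT OUTER join:
    Book.objects.filter(
        Q(author__isnull=True) | Q(title='x')
    ).select_related('author')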
The patch for #19385 caused a regression in certain generic relation
.exclude() filters when a subquery was needed. The fix contains a
refactoring of how Query.split_exclude() and Query.trim_start()
interact.
Thanks to Trac alias nferrari for the report.
The sql/query.py add_q method did a lot of where/having tree hacking to
get complex queries to work correctly. The logic was refactored so that
it should be simpler to understand. The new logic should also produce
leaner WHERE conditions.
The changes cascade somewhat, as some other parts of Django (like
add_filter() and WhereNode) expect boolean trees in a certain format or
they fail to work. So, to fix add_q() one must also fix utils/tree.py,
some things in add_filter(), WhereNode and so on.
This commit also fixed add_filter to see negate clauses up the path.
A query like .exclude(Q(reversefk__in=a_list)) didn't work similarly to
.filter(~Q(reversefk__in=a_list)). The reason for this is that only
the immediate parent negate clauses were seen by add_filter, and thus a
tree like AND: (NOT AND: (AND: condition)) wasn't handled
correctly, as there is one intermediate AND node in the tree. The
example tree is generated by .exclude(~Q(reversefk__in=a_list)).
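Written out as querysets (SomeModel, reversefk and a_list are the
placeholders used above):

    from django.db.models import Q

    # These two should now behave the same:
    SomeModel.objects.exclude(Q(reversefk__in=a_list))
    SomeModel.objects.filter(~Q(reversefk__in=a_list))
    # while .exclude(~Q(reversefk__in=a_list)) is the form that builds the
    # AND: (NOT AND: (AND: condition)) tree described above.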
Still, aggregation lost connectors in OR cases, and F() objects and
aggregates in the same filter clause caused GROUP BY problems on some
databases.
Fixed #17600, fixed #13198, fixed #17025, fixed #17000, fixed #11293.
Before, there was a need to have both .relabel_aliases() and .clone()
for many structs. Now there is only relabeled_clone() for those structs
where the alias is the only mutable attribute.
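A minimal sketch of the idea (hypothetical struct, not one of the actual
classes): when the alias is the only thing that changes, a single
relabeled_clone() replaces the relabel_aliases() plus clone() pair.

    class Col:
        # Hypothetical struct: an alias plus otherwise immutable data.
        def __init__(self, alias, target):
            self.alias = alias
            self.target = target

        def relabeled_clone(self, change_map):
            # Return a copy with the alias remapped in one step instead of
            # clone() followed by relabel_aliases().
            return Col(change_map.get(self.alias, self.alias), self.target)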
The join promote=True flag was over-aggressive in select_related
handling. After that usage was removed, the only other user was
query.combine(). That use case is very easy to handle locally, so there
is no longer any need for the join(promote=True) flag.
Refs #19849.
The refactoring mainly concentrates on making sure the inner and outer
query agree about the split position. The split position is where the
multijoin happens, and thus the split position also determines the
columns used in the "WHERE col1 IN (SELECT col2 from ...)" condition.
This commit fixes a regression caused by #10790 and commit
69597e5bcc. The regression was caused
by the wrong columns being used at the split position.
Previously it was possible to call clear_ordering without the
force_empty argument. The result was that the query was still ordered
by the model's Meta ordering if that was defined. By making the arg
mandatory it will be easier to spot possible errors caused by assuming
clear_ordering will remove all ordering.
Thanks to Dylan Klomparens for the suggestion. Refs #19720.
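At call sites this means being explicit about whether Meta ordering
should still apply (a sketch of the intent, not the exact internal call
sites; query is an sql.Query instance):

    # force_empty=True: drop all ordering, including Meta.ordering.
    query.clear_ordering(True)
    # force_empty=False: drop explicit ordering, keep Meta.ordering.
    query.clear_ordering(False)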
The original problem was that queryset cloning was really expensive
when filtering with F() clauses. The __deepcopy__ went too deep,
copying the _meta attributes of the models used. To fix this, the use
of __deepcopy__ in queryset cloning was removed.
This commit results in some speed improvements across the djangobench
benchmark suite. Most query_* tests are 20-30% faster, save() is 50%
faster, and complex filtering situations can see 2x to
order-of-magnitude improvements.
Thanks to Suor, Alex and lrekucki for valuable feedback.
The guarantee that no queries will be made when accessing results is
provided by the new EmptyWhere class, which is used for query.where and
query.having.
Thanks to Simon Charette for reviewing and valuable suggestions.
The added promotion logic is based on promoting any joins used in only
some of the children of an OR clause unless the join existed before the
OR clause addition.
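Illustration of the rule (hypothetical model; rel is a nullable
relation):

    from django.db.models import Q

    # rel is used in only one child of the OR and the join is new in this
    # call, so it gets promoted to LEFT OUTER:
    SomeModel.objects.filter(Q(rel__flag=True) | Q(name='x'))

    # Here the rel join already exists before the OR clause is added,
    # so it is not promoted:
    SomeModel.objects.filter(rel__flag=True).filter(
        Q(rel__other=1) | Q(name='x'))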
The ORM didn't reuse joins for direct foreign key traversals when using
chained filters. For example:
    qs.filter(fk__somefield=1).filter(fk__somefield=2)
produced two joins.
As a bonus, reverse one-to-one filters can now reuse joins correctly.
The regression was caused by the join() method refactor in commit
68847135bc.
Thanks to Simon Charette for spotting some issues with the first draft
of the patch.
This is a rather large refactoring. The "lookup traversal" code was
split out from setup_joins(). There is now a names_to_path() method
which does the lookup traversal; the actual work of setup_joins() is
calling names_to_path() and then adding the joins found into the query.
As a side effect it was possible to remove the "process_extra"
functionality used by generic relations. This never worked for left
joins. Now the extra restriction is appended directly to the join
condition instead of the where clause.
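For example (hypothetical Item model with a generic relation tags): when
the OR promotes the tags join to LEFT OUTER, the content-type
restriction has to live in the JOIN's ON clause rather than in WHERE,
otherwise rows with no tags would be filtered out.

    from django.db.models import Q

    # The extra content-type restriction is now part of the JOIN
    # condition, so this works even though the tags join is promoted to
    # LEFT OUTER:
    Item.objects.filter(Q(tags__tag='foo') | Q(name='bar'))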
To generate the extra condition we need to have the join field
available in the compiler. This has the side-effect that we need more
ugly code in Query.__getstate__ and __setstate__ as Field objects
aren't pickleable.
The join trimming code got a big change: now we trim all direct joins
and never trim reverse joins. This also fixes the problem in #10790,
which was join trimming in null filter cases.
The trim argument was used by split_exclude() only to trim the last
join from the given lookup. It is cleaner to just trim the last part
from the lookup in split_exclude() directly so that there is no need
to burden add_filter() with the logic needed for only split_exclude().
F() expressions now reuse joins like any other lookup in a .filter()
call: multijoins generated in the same .filter() call are reused,
otherwise new joins are generated. Also, lookups can now reuse joins
generated by F().
This change is backwards incompatible, but it is required to prevent
dict randomization from generating different queries depending on
.filter() kwarg ordering. The new way is also more consistent in how
joins are reused.
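Illustration (hypothetical Blog/Entry models; entry is a reverse foreign
key, that is, a multijoin):

    from django.db.models import F

    # The F() reuses the entry join generated for the entry__rating
    # lookup, because both appear in the same .filter() call:
    Blog.objects.filter(entry__rating=5,
                        entry__comments__gt=F('entry__trackbacks'))
    # Put the F() condition in a separate .filter() call and a new join
    # is generated instead.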
The dupe avoidance logic was removed as it doesn't seem to do anything,
is complicated, and has nearly zero documentation.
The removal of dupe_avoidance allowed for refactoring of both the
implementation and signature of Query.join(). This refactoring cascades
again to some other parts. The most significant of them are the changes
in qs.combine() and compiler.select_related_descent().
The Query.select and Query.select_fields were collapsed into one list
because the attributes always had to be in sync. Now that they are in
one attribute it is impossible to edit them out of sync.
A similar collapse was done for Query.related_select_cols and
Query.related_select_fields.
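Conceptually the change looks like this (a sketch with placeholder
strings, not the real column/field objects):

    col1, col2 = 'T1.id', 'T1.name'
    field1, field2 = 'id_field', 'name_field'

    # Before: two parallel lists that had to be kept in sync by hand.
    select = [col1, col2]
    select_fields = [field1, field2]

    # After: a single list of (column, field) pairs, so they cannot
    # drift out of sync.
    select = [(col1, field1), (col2, field2)]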