Passed the engine instance to loaders. This is a prerequisite for
looking up configuration on the engine instance instead of global
settings.
This is backwards incompatible for custom template loaders that override
__init__. However, the documentation doesn't mention __init__, and the way
to pass arguments to custom template loaders isn't specified, so I'm
considering it a private API.
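For illustration, a minimal sketch of what an affected loader looks like
after this change, assuming it subclasses
django.template.loaders.base.Loader; the custom_option argument is
hypothetical:

    from django.template.loaders.base import Loader

    class MyLoader(Loader):
        def __init__(self, engine, custom_option=None):
            # The engine instance is now passed in by the template
            # machinery; an overriding __init__ must accept it and hand
            # it to the base class.
            super().__init__(engine)
            self.custom_option = custom_option  # hypothetical extra argument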
This reverts commit f36151ed16.
Adding kwargs to deconstructed objects does not achieve useful
forward-compatibility in general, since additional arguments are silently
dropped rather than having their intended effect. In fact, it can make the
failure more difficult to diagnose. Thanks Shai Berger for discussion.
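A minimal sketch, using a made-up Expr class rather than anything from
Django, of why the extra kwargs don't buy real forward-compatibility:
reconstructing a newer serialized form with an older class succeeds, but
the unknown option is silently ignored:

    class Expr:
        # Older version of the class; **kwargs was meant to swallow
        # arguments added by future versions.
        def __init__(self, value, **kwargs):
            self.value = value  # a future 'scale' option is silently dropped

    # Deconstructed form produced by a newer version that supports scale=10.
    path, args, kwargs = 'myapp.Expr', (3,), {'scale': 10}
    obj = Expr(*args, **kwargs)  # no error, but scale never takes effect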
I don't agree with flake8 here about the right indentation, but as long as
we're using it, we should stick to it. I don't want to disable its hanging
indent checks just because of this case.
This change preserves backwards-compatibility for a very common misuse
of render_to_response, which even occurred in the official documentation.
It fixes that misuse wherever it happened in the code base and docs.
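For context, a sketch of the pattern in question, assuming the misuse
referred to is passing a RequestContext where the data dictionary is
expected (the view and template names are made up):

    from django.shortcuts import render_to_response
    from django.template import RequestContext

    def my_view(request):
        # Misuse: a RequestContext passed as the data dictionary. The
        # documented form is render_to_response('index.html',
        # {'foo': 'bar'}, context_instance=RequestContext(request)).
        return render_to_response('index.html',
                                  RequestContext(request, {'foo': 'bar'}))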
Context.__init__ is documented as accepting a dict and nothing else.
Since Context is dict-like, Context(Context({})) could work to some
extent. However, things get complicated with RequestContext and that
gets in the way of refactoring the template engine. This is the real
rationale for this change.
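To make the distinction concrete (values made up):

    from django.template import Context

    Context({'title': 'Home'})  # documented: Context takes a plain dict

    # The pattern this change rules out: passing another Context, which
    # only worked "to some extent" because Context is dict-like.
    # Context(Context({'title': 'Home'}))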
An all_items_equal() lambda replaced a reduce() that was broken for
potential 3+-way merges. A reduce(operator.eq, ...) accumulates booleans
and therefore can't check that all items in a sequence are equal:
>>> import operator
>>> from functools import reduce
>>> bool(reduce(operator.eq, [('migrations', '0001_initial')] * 3))
False
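A sketch of an equality check in the spirit of all_items_equal (the exact
code in the patch may differ): compare every item against the first
instead of folding with operator.eq:

    all_items_equal = lambda seq: all(item == seq[0] for item in seq[1:])

    nodes = [('migrations', '0001_initial')] * 3
    assert all_items_equal(nodes)                    # genuine 3-way match
    assert not all_items_equal(nodes + [('app', '0002_other')])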
The code now counts the number of common ancestors to calculate slice offsets
for the branches. Each branch shares the same number of common ancestors.
The common_ancestor for loop statement had incomplete branch coverage.
Added a relabeled_clone() method to sql.Query to fix the problem. It
manifested itself in rare cases where the filter condition of an at least
doubly nested subquery could target a non-existent alias.
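As an illustration only, with made-up models assumed to live in a
configured app, the kind of query shape described, where the filter value
is a queryset whose own filter value is another queryset:

    from django.db import models

    class Author(models.Model):
        age = models.IntegerField()

    class Book(models.Model):
        author = models.ForeignKey(Author, on_delete=models.CASCADE)

    class Publisher(models.Model):
        book = models.ForeignKey(Book, on_delete=models.CASCADE)

    # An at least doubly nested subquery: the outer filter's value is a
    # queryset that itself filters with another queryset.
    inner = Book.objects.filter(author__in=Author.objects.filter(age__gt=30))
    publishers = Publisher.objects.filter(book__in=inner)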
Thanks to Trac alias ris for reporting the problem.
This removes the concept of equality between operations to guarantee
compatibility with Python 3.
Python 3 requires objects that compare equal to have identical hashes.
It's impossible to implement a hash that preserves that equality here, as
operations such as field creation depend on being able to accept
arbitrary dicts, which cannot be hashed reliably.
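A minimal sketch, with a made-up AddField class rather than Django's, of
the constraint described: on Python 3, defining __eq__ discards the
inherited __hash__, and an attribute holding an arbitrary dict offers no
reliable hash to put in its place:

    class AddField:
        def __init__(self, options):
            self.options = options  # arbitrary dict of field arguments

        def __eq__(self, other):
            return self.options == other.options

    op = AddField({'max_length': 100})
    # hash(op) would raise TypeError here: defining __eq__ sets __hash__
    # to None, and hashing self.options is no fix because dicts are
    # unhashable.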
Thanks Klaas van Schelven for the original patch in
13d613f800.