This can be used to make Django's test suite significantly faster by
reducing the number of models for which content types and permissions
must be created and tables must be flushed in each non-transactional
test.
It's documented for Django contributors and committers but it's branded
as a private API to preserve our freedom to change it in the future.
Most of the credit goes to Anssi. He got the idea and did the research.
Fixed #20483.
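A minimal sketch of how a test case might opt in, assuming the attribute
referred to here is TransactionTestCase.available_apps (a private API;
the 'tagging_tests' app label is hypothetical):

    from django.test import TransactionTestCase

    class TaggingTests(TransactionTestCase):
        # Private API: content types and permissions are created, and
        # tables are flushed, only for the apps listed here.
        available_apps = ['django.contrib.contenttypes', 'tagging_tests']

        def test_smoke(self):
            self.assertTrue(True)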
Allows a `GenericForeignKey` to reference proxy models. The default
for `for_concrete_model` is `True` to keep backwards compatibility.
Also added the analogous `for_concrete_model` kwarg to
`generic_inlineformset_factory` to provide an API at the form level.
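A rough sketch of a model using the new kwarg (current import locations;
the Bookmark model is hypothetical):

    from django.contrib.contenttypes.fields import GenericForeignKey
    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class Bookmark(models.Model):
        content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        object_id = models.PositiveIntegerField()
        # for_concrete_model=False makes the GFK keep the proxy model's
        # content type instead of resolving it to the concrete model.
        content_object = GenericForeignKey(
            'content_type', 'object_id', for_concrete_model=False)

The form-level kwarg works the same way, e.g.
generic_inlineformset_factory(Bookmark, for_concrete_model=False).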
When a stale ContentType is fetched, the _add_to_cache() function
didn't detect that `model_class()` returns `None` (which it does by
design). However, the `app_label` + `model` fields can be used instead
as the local cache key. Third-party apps can detect stale models by
checking whether `model_class()` returns `None`.
Ticket: https://code.djangoproject.com/ticket/20442
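For example, third-party code can spot stale entries with a check along
these lines:

    from django.contrib.contenttypes.models import ContentType

    for ct in ContentType.objects.all():
        if ct.model_class() is None:
            # Stale entry: the model it points to no longer exists;
            # app_label + model still identify the row.
            print('stale: %s.%s' % (ct.app_label, ct.model))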
This also updates all dependent functionality, including modelform_factory
and modelformset_factory, and the generic views `ModelFormMixin`,
`CreateView` and `UpdateView`, which gain a new `fields` attribute.
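A short sketch of the updated call sites (Author is a hypothetical
model):

    from django.forms.models import modelform_factory
    from django.views.generic.edit import CreateView

    from myapp.models import Author  # hypothetical

    AuthorForm = modelform_factory(Author, fields=['name', 'email'])

    class AuthorCreate(CreateView):
        model = Author
        fields = ['name', 'email']  # new attribute from ModelFormMixin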
This is provided as a new "validate_max" formset_factory option defaulting to
False, since the non-validating behavior of max_num is longstanding, and there
is certainly code relying on it. (In fact, even the Django admin relies on it
for the case where there are more existing inlines than the given max_num). It
may be that at some point we want to deprecate validate_max=False and
eventually remove the option, but this commit takes no steps in that direction.
This also fixes the DoS-prevention absolute_max enforcement so that it causes a
form validation error rather than an IndexError, and ensures that absolute_max
is always 1000 more than max_num, to prevent surprising changes in behavior
with max_num close to absolute_max.
Lastly, this commit fixes the previous inconsistency between a regular formset
and a model formset in the precedence of max_num and initial data. Previously
in a regular formset, if the provided initial data was longer than max_num, it
was truncated; in a model formset, all initial forms would be displayed
regardless of max_num. Now regular formsets are the same as model formsets; all
initial forms are displayed, even if more than max_num. (But if validate_max is
True, submitting these forms will result in a "too many forms" validation
error!) This combination of behaviors was chosen to keep the max_num validation
simple and consistent, and avoid silent data loss due to truncation of initial
data.
Thanks to Preston for discussion of the design choices.
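A minimal sketch of validate_max in action (TagForm is a hypothetical
form; the exact error wording may differ):

    from django import forms
    from django.forms.formsets import formset_factory

    class TagForm(forms.Form):
        name = forms.CharField()

    TagFormSet = formset_factory(TagForm, max_num=2, validate_max=True)

    data = {
        'form-TOTAL_FORMS': '3',
        'form-INITIAL_FORMS': '0',
        'form-MAX_NUM_FORMS': '2',
        'form-0-name': 'a',
        'form-1-name': 'b',
        'form-2-name': 'c',
    }
    formset = TagFormSet(data)
    # Three submitted forms against max_num=2: is_valid() is False and
    # non_form_errors() contains a "too many forms" message.
    assert not formset.is_valid()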
Also, according to the comments on the ticket and its duplicates, added
tests exercising saving an instance of a model with a GFK pointing to
(see the sketch after this list):
* An unsaved object -- this doesn't actually generate the same failure
but another ORM-level exception. The test verifies that this is the case.
* An instance of a model whose __nonzero__() method returns False.
This doesn't fail because that code path isn't executed.
* An instance of a model with a CharField PK and an empty value for it.
This doesn't fail.
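A rough sketch of the first case (hypothetical models, current import
locations; the exact exception depends on the field definitions and the
backend):

    from django.contrib.contenttypes.fields import GenericForeignKey
    from django.contrib.contenttypes.models import ContentType
    from django.db import models
    from django.test import TestCase

    class Mineral(models.Model):  # hypothetical
        name = models.CharField(max_length=50)

    class TaggedItem(models.Model):  # hypothetical model with a GFK
        content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        object_id = models.PositiveIntegerField()
        content_object = GenericForeignKey('content_type', 'object_id')

    class GFKSaveTests(TestCase):
        def test_save_with_unsaved_target(self):
            quartz = Mineral(name='Quartz')  # deliberately not saved
            # Fails at the ORM level (object_id is still None) rather
            # than with the originally reported error.
            with self.assertRaises(Exception):
                TaggedItem.objects.create(content_object=quartz)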
This is a rather large refactoring. The "lookup traversal" code was
split out of setup_joins(). There is now a names_to_path() method
which does the lookup traversal; the actual work of setup_joins() is
calling names_to_path() and then adding the joins found to the query.
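A toy illustration of the split (schematic only, not Django's actual
code or signatures):

    def names_to_path(names, relations):
        # Pure lookup traversal: walk 'a__b' style names and return the
        # chain of relations, without touching the query.
        path, opts = [], relations
        for name in names:
            rel = opts[name]
            path.append(rel)
            opts = rel.get('targets', {})
        return path

    def setup_joins(names, relations, query):
        # setup_joins() resolves the path first, then records a join
        # for every relation found along it.
        path = names_to_path(names, relations)
        for rel in path:
            query['joins'].append(rel['join'])
        return path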
As a side effect it was possible to remove the "process_extra"
functionality used by generic relations. This never worked for left
joins. Now the extra restriction is appended directly to the join
condition instead of the where clause.
To generate the extra condition we need to have the join field
available in the compiler. This has the side-effect that we need more
ugly code in Query.__getstate__ and __setstate__ as Field objects
aren't pickleable.
The join trimming code got a big change - now we trim all direct joins
and never trim reverse joins. This also fixes the problem in #10790
which was join trimming in null filter cases.
* Renamed the __unicode__ methods
* Applied the python_2_unicode_compatible decorator (see the sketch
  after this list)
* Removed the StrAndUnicode mix-in that is superseded by
python_2_unicode_compatible
* Kept the __unicode__ methods in classes that specifically
test it under Python 2
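A minimal sketch of the pattern (Article is a hypothetical model):

    from __future__ import unicode_literals

    from django.db import models
    from django.utils.encoding import python_2_unicode_compatible

    @python_2_unicode_compatible
    class Article(models.Model):
        headline = models.CharField(max_length=100)

        # Only __str__ is defined; the decorator adds __unicode__ on
        # Python 2 so the class works correctly on both versions.
        def __str__(self):
            return self.headline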
* Renamed smart_unicode to smart_text (but kept the old name under
Python 2 for backwards compatibility).
* Renamed smart_str to smart_bytes.
* Re-introduced smart_str as an alias for smart_text under Python 3
and smart_bytes under Python 2 (which is backwards compatible).
Thus smart_str always returns a str object (see the sketch after
this list).
* Used the new smart_str in a few places where both Python 2 and 3
want a str.
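A small sketch of the resulting helpers:

    from django.utils.encoding import smart_bytes, smart_str, smart_text

    value = 'example'
    smart_text(value)   # always text: unicode on Python 2, str on Python 3
    smart_bytes(value)  # always bytes: str on Python 2, bytes on Python 3
    smart_str(value)    # always the native str type of the interpreter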
Added kwargs for_concrete_model and for_concrete_models to ContentType
methods get_for_model() and get_for_models(). By setting the flag to
False, it is possible to get the content type for proxy models.
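For example (Article and ArticleProxy are hypothetical models):

    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class Article(models.Model):  # hypothetical concrete model
        title = models.CharField(max_length=100)

    class ArticleProxy(Article):  # hypothetical proxy
        class Meta:
            proxy = True

    # Default (for_concrete_model=True): the content type of the
    # concrete Article model, as before.
    ContentType.objects.get_for_model(ArticleProxy)

    # New behaviour: the content type of the proxy model itself.
    ContentType.objects.get_for_model(ArticleProxy, for_concrete_model=False)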