This required moving the entirety of DELETE SQL generation to the
compiler where it should have been in the first place and implementing
a specialized compiler for MySQL/MariaDB.
The MySQL compiler relies on the "DELETE table FROM table JOIN" syntax
for queries spanning multiple tables.
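
For illustration, a minimal sketch of a delete whose filter spans a
relation; the models and values are hypothetical, and the commented
statement only indicates the rough shape of the SQL the MySQL/MariaDB
compiler emits:

    # Hypothetical models, for illustration only; assumes a configured
    # Django project with an installed app for these models.
    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Book(models.Model):
        author = models.ForeignKey(Author, on_delete=models.CASCADE)

    # A delete whose filter spans a relation. On MySQL/MariaDB the
    # specialized compiler can express this as a single multi-table
    # statement, roughly:
    #   DELETE book FROM book INNER JOIN author
    #       ON (book.author_id = author.id)
    #   WHERE author.name = 'obsolete'
    Book.objects.filter(author__name='obsolete').delete()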
Added an internal interface to QuerySet that allows deferring the next
filter call until .query is accessed. Used it to optimize prefetch_related().
Thanks Simon Charette for the review.
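
A conceptual, standalone sketch of the deferral pattern; the class and
method names below are hypothetical and do not correspond to Django's
internal QuerySet interface:

    # Conceptual sketch only; not Django's internal API.
    class LazyFilterQuerySet:
        def __init__(self, base_query):
            self._query = dict(base_query)
            self._deferred_filter = None

        def defer_next_filter(self, **kwargs):
            # Remember the filter arguments instead of applying them now.
            self._deferred_filter = kwargs
            return self

        @property
        def query(self):
            # Apply the pending filter only when the query is actually needed.
            if self._deferred_filter is not None:
                self._query.update(self._deferred_filter)
                self._deferred_filter = None
            return self._query

    qs = LazyFilterQuerySet({"model": "Book"}).defer_next_filter(author_id__in=[1, 2])
    print(qs.query)  # the deferred filter is only merged here, on first access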
Changed __eq__ to return NotImplemented instead of False when compared to
an object of a different type, as recommended by the Python data model
reference. Now these models can be compared to ANY (or other objects
that override __eq__) without returning False automatically.
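
A small standalone illustration of the data model behavior (a plain
class, not Django's actual Model.__eq__):

    from unittest import mock

    class Tag:
        def __init__(self, pk):
            self.pk = pk

        def __eq__(self, other):
            if not isinstance(other, Tag):
                # Let the other operand decide instead of answering False.
                return NotImplemented
            return self.pk == other.pk

    print(Tag(1) == Tag(1))       # True
    print(Tag(1) == mock.ANY)     # True: ANY.__eq__ handles the reflected comparison
    print(Tag(1) == "not a tag")  # False: neither side claims equality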
Removed DatabaseIntrospection.table_name_converter()/column_name_converter()
in favor of DatabaseIntrospection.identifier_converter().
Removed DatabaseFeatures.uppercases_column_names.
Thanks Tim Graham for the initial patch and review and Simon Charette
for the review.
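
A hedged sketch of the kind of hook this leaves in place; the subclass
below is hypothetical, but identifier_converter() is the single method a
backend now overrides to normalize identifier names:

    # Hypothetical backend introspection class, for illustration only; real
    # backends subclass BaseDatabaseIntrospection from
    # django.db.backends.base.introspection.
    class MyBackendIntrospection:
        def identifier_converter(self, name):
            # One converter for tables and columns alike, replacing the
            # removed table_name_converter()/column_name_converter() pair.
            return name.lower()

    print(MyBackendIntrospection().identifier_converter("AUTH_USER"))  # auth_user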
Since CPython implements attrgetter(*attrs) at the C level, this even
outperforms the previous handling of the most common case of a single known
related object, since the resulting tuple of attribute values is built in C.
Adjusted the known related objects handling of target fields, which relies
on from_fields and to_fields, and has the side effect of fixing a bug causing
N+1 queries when using reverse foreign objects.
Thanks Carsten Fuchs for the report.
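
A quick standalone illustration of the attrgetter behavior referenced
above (plain classes rather than Django models):

    from operator import attrgetter

    class Book:
        def __init__(self, author_id, publisher_id):
            self.author_id = author_id
            self.publisher_id = publisher_id

    # attrgetter is implemented in C in CPython; with several names it
    # returns the tuple of attribute values built in C, and with one name
    # it is still faster than an equivalent Python-level lambda.
    single = attrgetter("author_id")
    multiple = attrgetter("author_id", "publisher_id")

    book = Book(author_id=42, publisher_id=7)
    print(single(book))    # 42
    print(multiple(book))  # (42, 7)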
Checked the following locations:
* Model.save(): If there are parents involved, take the safe route and use
  transactions, since this should be an all-or-nothing operation.
  If the model has no parents:
  * Signals are executed before and after the previously existing
    transaction -- they have never been part of the transaction.
  * If `force_insert` is set, then only one query is executed -> atomic
    by definition and no transaction needed.
  * The same applies to `force_update`.
  * If a primary key is set and no `force_*` flag is set, Django will try
    an UPDATE and, if that returns zero rows, an INSERT. The first case is
    completely safe (single query). In the second case a transaction
    should not produce different results since the update query is
    basically a no-op then (might miss something though).
* QuerySet.update(): no signals issued, single query -> no transaction
needed.
* Model/Collector.delete(): This one is fun due to the fact that it
  does many things at once.
  Most importantly though: it does send signals as part of the
  transaction, so for maximum backwards compatibility we need to be
  conservative.
  To ensure maximum compatibility the transaction here is removed only
  if all of the following hold true (see the sketch after this list):
  * A single instance is being deleted.
  * There are no signal handlers attached to that instance.
  * There are no deletions/updates to cascade.
  * There are no parents which also need deletion.
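
The sketch referenced above: a conceptual, hypothetical helper that just
encodes the four conditions, not Django's Collector internals:

    # Conceptual sketch only; parameter names are hypothetical and do not
    # mirror Django's Collector implementation.
    def can_skip_delete_transaction(num_objects, has_signal_handlers,
                                    has_cascades, has_parents):
        """True when a single-object delete can safely run without a transaction."""
        return (
            num_objects == 1              # a single instance is being deleted
            and not has_signal_handlers   # no pre_delete/post_delete receivers
            and not has_cascades          # no cascading deletions/updates
            and not has_parents           # no inherited parents to delete
        )

    # Example: one unreferenced object, no receivers attached.
    print(can_skip_delete_transaction(1, False, False, False))  # True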
A race condition could occur when the object didn't already exist: another
process/thread could create the object before update_or_create() did and
then update it, also before update_or_create() saved the object. The update
made by the other process/thread could then be lost.
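
A hedged sketch of the row-locking pattern that closes this window; the
helper and the Counter model in the usage note are hypothetical, the real
update_or_create() handles this internally, and a configured Django project
plus a database supporting SELECT ... FOR UPDATE are assumed:

    from django.db import transaction

    # Hypothetical helper, for illustration only; not Django's actual
    # update_or_create() implementation.
    def locked_update_or_create(manager, defaults=None, **lookup):
        defaults = defaults or {}
        with transaction.atomic():
            # select_for_update() locks the matched row, so a concurrent
            # writer cannot slip an update in between our fetch and our save.
            obj, created = manager.select_for_update().get_or_create(
                defaults=defaults, **lookup
            )
            if not created and defaults:
                for field, value in defaults.items():
                    setattr(obj, field, value)
                obj.save(update_fields=list(defaults))
        return obj, created

    # Usage (Counter is a hypothetical model):
    #   locked_update_or_create(Counter.objects, defaults={'count': 3}, name='hits')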