- Mutexes have been consolidated. Instead of one per DFA (which can easily add up to hundreds of them) there is now a single mutex in the Recognizer class, which all other parties use for serialization (a sketch follows after this list). It only protects the DFA anyway, which is stored in a recognizer (lexer/parser).
- ATNState::getStateType() now returns a size_t value (actually an enum).
- Replaced RTTI-based checks for transitions with checks against the (serialization) type of the transition, for simplicity (sketched below).
- Added some missing initialization for fields in certain ATN state classes.
- Fixed a memory leak in DFA by shadowing the s0 field (sketched below). That way we still keep a reference to the self-created instance, even if s0 was replaced later.
- Added variable initialization in code generation for rule context declarations (e.g. for labels), as sketched below.
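To illustrate the mutex consolidation from the first point, a minimal sketch (names simplified; the real Recognizer carries much more state):

```cpp
#include <mutex>

// Sketch only: the single mutex now lives in the recognizer, not in each DFA.
class Recognizer {
public:
  std::mutex _mutex; // guards all DFAs owned by this recognizer
};

struct DFA {
  // No per-instance mutex anymore.
};

void extendDFA(Recognizer &recognizer, DFA &dfa) {
  // Every party that reads or mutates a DFA serializes on the
  // recognizer's single mutex.
  std::lock_guard<std::mutex> lock(recognizer._mutex);
  // ... add states/edges to dfa ...
  (void)dfa;
}
```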
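The RTTI replacement follows roughly this pattern (the enum and accessor names here are illustrative, not necessarily the runtime's exact API):

```cpp
// Hypothetical names to show the pattern.
enum class TransitionType { EPSILON, RULE, ATOM /* ... */ };

struct Transition {
  virtual ~Transition() = default;
  virtual TransitionType getSerializationType() const = 0;
};

bool isEpsilon(const Transition &transition) {
  // Before: dynamic_cast<const EpsilonTransition *>(&transition) != nullptr
  // After: a plain enum comparison, no RTTI needed.
  return transition.getSerializationType() == TransitionType::EPSILON;
}
```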
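The s0 shadowing works roughly like this (a simplified sketch of the idea):

```cpp
struct DFAState { /* ... */ };

class DFA {
public:
  DFA() : s0(new DFAState()), s0Shadow(s0) {
    // The DFA creates its own start state and remembers it in the shadow.
  }

  ~DFA() {
    // The shadow still points at the self-created instance, so it can be
    // freed even if s0 was redirected to another state in the meantime.
    delete s0Shadow;
  }

  DFAState *s0;       // may be replaced during prediction

private:
  DFAState *s0Shadow; // keeps the self-created instance reachable
};
```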
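And the code generation fix simply initializes such locals at their declaration, e.g. (rule and label names invented):

```cpp
struct ExprContext; // hypothetical rule context type

void generatedRuleFunction() {
  // Before: ExprContext *lhsLabel;  (uninitialized until first assignment)
  ExprContext *lhsLabel = nullptr;   // after: initialized at declaration
  (void)lhsLabel;
}
```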
The transitions field was accessed through trivial getters and setters which served no real purpose, so I made the vector public and removed all the obsolete accessor functions. This simplifies the API (e.g. use .size() instead of getNumberOfTransitions()); a rough before/after follows below.
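A rough before/after of that API change (element type simplified):

```cpp
#include <cstddef>
#include <vector>

struct Transition;

struct ATNState {
  std::vector<Transition *> transitions; // now public, no accessors
};

std::size_t countTransitions(const ATNState &state) {
  // Before: state.getNumberOfTransitions();
  return state.transitions.size();
}
```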
This change required a few other changes in the runtime.
Additionally, I radically cut down the ATN::toString method to return only what the Java runtime returns, instead of a full dump of the state and its transitions.
There are two parts in an ANTLR-generated parser where memory is allocated: the actual parsing (with or without creating a parse tree) and the prediction part (via DFA/ATN etc.). The first part is highly volatile, as it recreates parse tree instances on each parser run. In fact, lexer tokens also belong to that part, but are already managed via unique pointers. This first part now works without any smart pointers. Instead there is a simple tracker class which holds all created references and frees them when the parser is reset or destroyed (see the sketch below). This is slightly less optimal if the parser is set to create no parse tree, as created rule context objects are not freed immediately (as they would be with smart pointers) but only during reset. On the other hand, this change gives (depending on the input) a nice speed-up (0%-100%, after the warm-up phase). Additionally, memory consumption drops by a good amount.
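A minimal sketch of such a tracker, assuming it simply owns raw pointers to everything it hands out (the actual runtime class and its API may differ):

```cpp
#include <utility>
#include <vector>

struct ParseTree {
  virtual ~ParseTree() = default;
};

// Owns every object created during a parser run and frees them in one
// sweep on reset/destruction, instead of per-node smart pointers.
class ParseTreeTracker {
public:
  ~ParseTreeTracker() { reset(); }

  template <typename T, typename... Args>
  T *createInstance(Args &&...args) {
    T *node = new T(std::forward<Args>(args)...);
    _allocated.push_back(node);
    return node;
  }

  void reset() {
    for (ParseTree *node : _allocated)
      delete node;
    _allocated.clear();
  }

private:
  std::vector<ParseTree *> _allocated;
};
```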
Everything in the simulators (and interpreters) remains unchanged. This is the shared prediction part.