update comments

This commit is contained in:
Terence Parr 2012-08-04 12:00:13 -07:00
parent b160a3b14d
commit b7b2a45c8b
1 changed files with 33 additions and 23 deletions


@ -91,17 +91,21 @@ import java.util.Set;
The next time we reach this DFA state with an SLL conflict, through
DFA simulation, we will again retry the ATN simulation using full
context mode. This is slow because we can't save the results and have
to "interpret" the ATN each time we get that input.

CACHING FULL CONTEXT PREDICTIONS

We could cache results from full context to predicted
alternative easily and that saves a lot of time but doesn't work
in the presence of predicates. The set of visible predicates from
the ATN start state changes depending on the context, because
closure can fall off the end of a rule. I tried to cache
tuples (stack context, semantic context, predicted alt) but it
was slower than interpreting and much more complicated. Also
required a huge amount of memory. The goal is not to create the
world's fastest parser anyway. I'd like to keep this algorithm
simple. By launching multiple threads, we can improve the speed
of parsing across a large number of files.
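The abandoned cache described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the runtime's real API: the rule-invocation stack is modeled as a plain string, and class and method names are made up. It shows the lookup shape; the text explains why it fails once predicates enter, since the predicates visible from the decision vary with the surrounding context.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a full-context prediction cache:
// (decision number, rule-invocation stack) -> predicted alternative.
// All names are illustrative; the stack is modeled as a string.
public class FullContextCacheSketch {
	static final Map<String, Integer> cache = new HashMap<>();

	static String key(int decision, String stackContext) {
		return decision + ":" + stackContext;
	}

	public static void main(String[] args) {
		// record: in decision 3, under stack [expr, stat], full LL predicted alt 2
		cache.put(key(3, "[expr, stat]"), 2);
		System.out.println(cache.get(key(3, "[expr, stat]")));       // 2
		// a different invocation stack misses, forcing re-interpretation
		System.out.println(cache.get(key(3, "[expr, block, stat]"))); // null
	}
}
```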
There is no strict ordering between the amount of input used by
SLL vs LL, which makes it really hard to build a cache for full
@ -117,6 +121,10 @@ import java.util.Set;
during the previous prediction. That amounts to a cache that maps X
to a specific DFA for that context.

Something should be done for left-recursive expression predictions.
They are likely LL(1) + pred eval. Easier to do the whole
"SLL unless error, then retry with full LL" thing Sam does.
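That two-stage strategy can be sketched as below. The `parseSLL`/`parseLL` methods are hypothetical stand-ins for real parser invocations, not ANTLR API: the fast SLL-only parse runs first and, only when it reports a syntax error, the input is reparsed with full LL prediction.

```java
// Sketch of the two-stage strategy: fast SLL-only parse first,
// retry with full LL only on a syntax error. parseSLL/parseLL are
// hypothetical stand-ins for real parser invocations.
public class TwoStageParseSketch {
	// hypothetical fast path: SLL-only prediction, bails on first error
	static String parseSLL(String input) {
		if (input.contains("needs-context")) {
			throw new RuntimeException("syntax error under SLL");
		}
		return "tree(SLL:" + input + ")";
	}

	// hypothetical slow path: full LL prediction, always correct
	static String parseLL(String input) {
		return "tree(LL:" + input + ")";
	}

	static String parse(String input) {
		try {
			return parseSLL(input); // the vast majority of input lands here
		}
		catch (RuntimeException syntaxError) {
			return parseLL(input);  // rare retry with full context
		}
	}

	public static void main(String[] args) {
		System.out.println(parse("simple"));        // tree(SLL:simple)
		System.out.println(parse("needs-context")); // tree(LL:needs-context)
	}
}
```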
AVOIDING FULL CONTEXT PREDICTION

We avoid doing full context retry when the outer context is empty,
@ -215,20 +223,22 @@ import java.util.Set;
If it does not get a syntax error, then we're done. If it does get a
syntax error, we need to retry with the combined SLL/LL strategy.

The reason this works is as follows. If there are no SLL
conflicts then the grammar is SLL for sure, at least for that
input set. If there is an SLL conflict, the full LL analysis
must yield a set of ambiguous alternatives that is no larger
than the SLL set. If the LL set is a singleton, then the grammar
is LL but not SLL. If the LL set is the same size as the SLL
set, the decision is SLL. If the LL set has size > 1, then that
decision is truly ambiguous on the current input. If the LL set
is smaller, then the SLL conflict resolution might choose an
alternative that the full LL would rule out as a possibility
based upon better context information. If that's the case, then
the SLL parse will definitely get an error because the full LL
analysis says it's not viable. If SLL conflict resolution
chooses an alternative within the LL set, then both SLL and LL
would choose the same alternative because they both choose the
minimum of multiple conflicting alternatives.
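The minimum-alternative rule can be illustrated with plain integer sets standing in for real ATN configuration sets (names here are made up for illustration). With an SLL conflict set {1, 2, 3} and an LL-viable set {2, 3}, SLL resolves to 1 while LL would resolve to 2, so the SLL choice is one that full LL rules out and the SLL parse must eventually hit a syntax error, triggering the retry.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Illustration of the minimum-alternative conflict resolution rule,
// using plain integer sets in place of real ATN configuration sets.
public class MinAltSketch {
	// both SLL and full LL resolve a conflict the same way:
	// take the minimum alternative number
	static int resolve(List<Integer> conflictingAlts) {
		return Collections.min(conflictingAlts);
	}

	public static void main(String[] args) {
		List<Integer> sllSet = Arrays.asList(1, 2, 3);
		List<Integer> llSet  = Arrays.asList(2, 3);
		int sllChoice = resolve(sllSet); // 1
		int llChoice  = resolve(llSet);  // 2
		// SLL picked an alternative full LL rules out -> error, then retry
		System.out.println(sllChoice + " " + llChoice + " "
			+ llSet.contains(sllChoice)); // prints "1 2 false"
	}
}
```

Had the LL set instead contained the SLL minimum (say {1, 3}), both strategies would pick the same alternative and no error would occur.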
Let's say we have a set of SLL conflicting alternatives {1, 2, 3} and
a smaller LL set called s. If s is {2, 3}, then SLL parsing will get
@ -645,7 +655,7 @@ public class ParserATNSimulator<Symbol extends Token> extends ATNSimulator {
// CONFLICT, GREEDY (TYPICAL SITUATION)
if ( outerContext == ParserRuleContext.EMPTY || // in grammar start rule
	 !D.configs.dipsIntoOuterContext ||         // didn't fall out of rule
	 SLL )                                      // forcing SLL only
{
	// SPECIAL CASE WHERE SLL KNOWS CONFLICT IS AMBIGUITY
	if ( !D.configs.hasSemanticContext ) {