LALR Parser Generator


Revision as of 11:37, 22 February 2016 by Thomas Linder Puls (simplify and correct)

Visual Prolog 7.5 Commercial Edition (build 7501+) contains an LALR(1) parser generator in the examples (in the directory vipLalrGen). The example directory also contains a program exprEval, which uses a generated parser to parse arithmetical expressions.

The parser generator itself also uses such a parser to parse grammar files, so it can be seen as another example.

The directory $(ProDir)/vip/vipSyntax contains a grammar file vipSyntax.vipgrm defining the Visual Prolog syntax. The directory also contains the corresponding generated files, and all other files necessary to parse Visual Prolog programs. But that is not the topic of this article, it is only mentioned for reference.

The theory of parsers is huge and cannot be covered here. An LALR(1) parser parses from Left to right, and produces an inverse Rightmost derivation using 1 Look-Ahead symbol. See for example wikipedia:LALR parser for more information.


The purpose of a parser is to validate that an input string fulfills a certain grammar specification, and:

  • if the grammar is fulfilled: construct a corresponding parse tree.
  • if the grammar is not fulfilled: output "syntax error" messages describing the problems.

The overall parsing process consists of splitting the input into lexical elements known as terminals and parsing these according to the grammar specification.

The lexing and the parsing are performed intermixed in a single start-to-end scan of the text (without backtracking).
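As an illustration of the lexing half, here is a minimal single-pass tokenizer sketched in Python. It is not the PFC lexer; the token names (t_number, t_cmp, etc.) and the operator patterns are assumptions chosen to mirror the terminals used later in this article:

```python
import re

# Hypothetical token set mirroring the terminals used in this article.
TOKEN_SPEC = [
    ("t_number", r"\d+(\.\d+)?"),
    ("t_cmp",    r"[<>]=?|=="),
    ("t_plus",   r"\+|-"),
    ("t_mult",   r"\*|/"),
    ("t_power",  r"\^"),
    ("ws",       r"\s+"),          # whitespace is skipped
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    """Single left-to-right scan; never backtracks over consumed input."""
    tokens = []
    pos = 0
    while pos < len(text):
        m = MASTER.match(text, pos)
        if m is None:
            raise SyntaxError(f"unexpected character at {pos}: {text[pos]!r}")
        if m.lastgroup != "ws":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

print(tokenize("3 + 4 * 5"))
```

The resulting terminal sequence is what the parser consumes one symbol at a time; in the real machinery lexing and parsing are interleaved rather than done in two passes.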

Parser structure

The parser of the exprEval demo can be used as illustration of the parser components and how the overall parsing works.

The grammar file and the parser components are in the subdirectory/package expressionGrm.

  • expressionGrm.vipgrm is the grammar specification.
  • expressionGrm.i, and contain the parser.
  • expressionLexer contains the lexer.
  • expressionGrmSem contains a support class for building the resulting parse tree.

expressionGrm.i, and are generated by the vipLalrGen program from the grammar specification expressionGrm.vipgrm.

The subdirectory/package expressionGrmSem contains support predicates for the semantic actions in the grammar.

The subdirectory/package expressionLexer contains the lexical analyzer. It is based on the class pfc\syntax\lexer_string, which makes it easy to implement lexers that use Visual Prolog number, string, and comment syntax. Basically, the programmer only needs to define things like keywords and operators.


The parser generator itself is in the vipLalrGen subdirectory (i.e. in <examples root>\vipLalrGen\vipLalrGen), and it must be built before it can be used.

The vipLalrGen program will read grammar files and generate LALR(1) parsers as Visual Prolog source code.

vipLalrGen.exe - LALR parser generator for Visual Prolog
    vipLalrGen.exe [options] <grammar files>
    @<File>     Read options from <File>
    -help       Displays the help message
    -out <OutDir>       Generate files in <OutDir> (default: the directory containing the grammar file)
    -details    Add details to the log file
    -nodetails  Don't add details to the log file (default)

It is recommended to keep grammars in files with the extension vipgrm; the IDE will token-color files with that extension.

Example: to run vipLalrGen on the grammar file in the exprEval example from a command console:
>cd <example path>\vipLalrGen
>vipLalrGen\Exe\vipLalrGen.exe exprEval\expressionGrm\expressionGrm.vipgrm
OK: exprEval\expressionGrm\expressionGrm.vipgrm

When running vipLalrGen on a grammar file it produces the following files:

  • log/<grammar>.log containing detailed information about the grammar
  • If successful: <grammar>.i, <grammar>.cl, <grammar>.pro containing the generated parser

Grammar files

The input to vipLalrGen is a grammar file. As mentioned, the IDE supports token coloring if the extension is vipgrm.

A grammar file contains a named grammar:

grammar expressionGrm
    open expression, expressionGrmSem

nonassoc t_cmp.
left t_plus.
left t_mult.
right t_power.

nonterminals
    exp : expression.

rules
    exp { mkBinOp(Op, A, B) } ==>
        exp { A },
        [t_cmp] { Op },
        exp { B }.
    exp { mkBinOp(Op, A, B) } ==>
        exp { A },
        [t_power] { Op },
        exp { B }.
    exp { mkBinOp(Op, A, B) } ==>
        exp { A },
        [t_mult] { Op },
        exp { B }.
    exp { mkBinOp(Op, A, B) } ==>
        exp { A },
        [t_plus] { Op },
        exp { B }.
    exp { E } ==>
        [t_lpar],
        exp { E },
        [t_rpar].
    exp { number(toTerm(real, N)) } ==>
        [t_number] { N }.
    exp { bool(toTerm(boolean, N)) } ==>
        [t_boolean] { N }.
end grammar expressionGrm

nonterminals & rules

The grammar file contains (among other things) nonterminals and rules sections.

nonterminals sections declare nonterminal symbols and the type of the corresponding parse trees. The nonterminals section above states that exp is a nonterminal symbol and that it produces parse trees of the (Visual Prolog) type expression.

rules sections contain rules that define both how valid derivations of a nonterminal symbol look and how the corresponding parse tree is constructed.

Everything in braces has to do with the parse trees. If we initially disregard the braces, we only see what has to do with the derivations:

    exp ==> exp, [t_cmp], exp.
    exp ==> exp, [t_power], exp.
    exp ==> exp, [t_mult], exp.
    exp ==> exp, [t_plus], exp.
    exp ==> [t_lpar], exp, [t_rpar].
    exp ==> [t_number].
    exp ==> [t_boolean].

The first rule says that from the nonterminal symbol exp we can derive exp followed by [t_cmp] followed by exp, where [t_cmp] is the terminal symbol t_cmp.


Here is a derivation that corresponds to the expression 7 < 5 (assuming that t_number matches numbers and t_cmp matches comparison operators):

    exp
    ==> exp, [t_cmp], exp
    ==> exp, [t_cmp], [t_number]
    ==> [t_number], [t_cmp], [t_number]

The derivation in the example is a rightmost derivation, because in each step we derive something from the rightmost nonterminal symbol. An LR parser will make reductions in the inverse order of the derivations in a rightmost derivation. LR parsing means that we scan tokens from Left to right and produce an inverse Rightmost derivation.
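This inverse relationship can be made concrete with a small hand-written shift-reduce loop in Python. It is an illustrative sketch, not the generated parser, and it covers only the two rules exp ==> exp, [t_cmp], exp and exp ==> [t_number]; parsing the token sequence for 7 < 5, the reductions come out in exactly the reverse order of the rule applications in the rightmost derivation:

```python
def parse(tokens):
    """Shift-reduce sketch for:  exp ==> exp, [t_cmp], exp.
                                 exp ==> [t_number]."""
    stack = []          # parse stack of grammar symbols
    reductions = []     # rules applied, in the order the parser applies them
    tokens = list(tokens) + ["$"]   # "$" marks the end of the input
    i = 0
    while True:
        if stack and stack[-1] == "t_number":
            stack[-1] = "exp"                       # reduce exp ==> [t_number]
            reductions.append("exp ==> [t_number]")
        elif stack[-3:] == ["exp", "t_cmp", "exp"] and tokens[i] == "$":
            stack[-3:] = ["exp"]                    # reduce the comparison
            reductions.append("exp ==> exp, [t_cmp], exp")
        elif tokens[i] != "$":
            stack.append(tokens[i])                 # shift the next terminal
            i += 1
        elif stack == ["exp"]:
            return reductions                       # accept
        else:
            raise SyntaxError("cannot parse")

# 7 < 5 lexes to t_number, t_cmp, t_number:
print(parse(["t_number", "t_cmp", "t_number"]))
```

The derivation applies the comparison rule first and the two number rules last; the parser reduces by the two number rules first and by the comparison rule last.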

As mentioned, the braces describe how to construct a corresponding parse tree. The braces on the left-hand side contain a Visual Prolog expression that constructs the node in the parse tree. The braces on the right-hand side of a rule define variable names for the corresponding sub-trees.

Given the rule

   exp { mkBinOp(Op, A, B) } ==>
        exp { A },
        [t_cmp] { Op },
        exp { B }.

A and B contain the parse trees of the two exps and Op contains the string of the terminal symbol t_cmp. The resulting parse tree is calculated as mkBinOp(Op, A, B).
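The mechanics of such a reduction can be mimicked in Python. The constructors mkBinOp and number below are hypothetical stand-ins for the support predicates in expressionGrmSem (the real ones build Visual Prolog terms); the tuple encoding is an assumption for the example:

```python
# Hypothetical parse-tree constructors mirroring those named in the grammar.
def mkBinOp(op, a, b):
    return ("binOp", op, a, b)

def number(n):
    return ("number", n)

# When the parser reduces by
#     exp { mkBinOp(Op, A, B) } ==> exp { A }, [t_cmp] { Op }, exp { B }.
# the two exp sub-trees and the terminal's string are taken from the
# value stack, and the head's brace expression builds the new node:
A = number(7.0)          # parse tree of the left exp
Op = "<"                 # string of the terminal [t_cmp]
B = number(5.0)          # parse tree of the right exp
tree = mkBinOp(Op, A, B)
print(tree)              # ('binOp', '<', ('number', 7.0), ('number', 5.0))
```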


The grammar rules in the expression grammar are by themselves ambiguous, in the sense that 3 + 4 * 5 can be derived by both of these rightmost derivations:

    exp
    ==> exp, [t_plus], exp
    ==> exp, [t_plus], exp, [t_mult], exp
    ==> exp, [t_plus], exp, [t_mult], [t_number]
    ==> exp, [t_plus], [t_number], [t_mult], [t_number]
    ==> [t_number], [t_plus], [t_number], [t_mult], [t_number]

    exp
    ==> exp, [t_mult], exp
    ==> exp, [t_mult], [t_number]
    ==> exp, [t_plus], exp, [t_mult], [t_number]
    ==> exp, [t_plus], [t_number], [t_mult], [t_number]
    ==> [t_number], [t_plus], [t_number], [t_mult], [t_number]

The ambiguity is not relevant with regard to whether the expression is valid or not, but the two derivations correspond to two different parse trees (the expressions 3 + (4 * 5) and (3 + 4) * 5, respectively). Obviously, we are interested in one particular parse tree (here the former, where multiplication binds more tightly than addition).
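To see why the choice matters, here is an illustrative Python evaluator applied to both parse trees of 3 + 4 * 5. The tuple encoding of trees is an assumption for the example, not the demo's representation:

```python
def evaluate(tree):
    """Evaluate a tuple-encoded parse tree: ("number", n) leaves and
    ("binOp", op, left, right) nodes."""
    if tree[0] == "number":
        return tree[1]
    _, op, a, b = tree
    if op == "+":
        return evaluate(a) + evaluate(b)
    if op == "*":
        return evaluate(a) * evaluate(b)
    raise ValueError(f"unknown operator {op}")

num = lambda n: ("number", n)
bin_op = lambda op, a, b: ("binOp", op, a, b)

# The two parse trees of 3 + 4 * 5:
plus_on_top = bin_op("+", num(3), bin_op("*", num(4), num(5)))  # 3 + (4 * 5)
mult_on_top = bin_op("*", bin_op("+", num(3), num(4)), num(5))  # (3 + 4) * 5

print(evaluate(plus_on_top))  # 23
print(evaluate(mult_on_top))  # 35
```

The same token sequence yields 23 or 35 depending on which tree the parser builds, which is why the grammar must single out one of them.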

Moreover, the ambiguity actually makes it impossible for the parser generator to produce a parser (see the section about conflicts below).

This particular kind of ambiguity can be solved by stating the precedence of the terminal symbols corresponding to the operators:

nonassoc t_cmp.
left t_plus.
left t_mult.
right t_power.

A precedence declaration states one of the associativities (nonassoc, left or right) followed by a comma-separated sequence of terminal symbols, terminated by a dot.

The terminals in a precedence declaration have the same precedence level and associativity. The declarations are stated in increasing precedence (i.e. the first declaration has the lowest precedence).

Given A [t1] B [t2] C:

  • If t1 has higher precedence than t2 then A [t1] B will be reduced to AB and then AB [t2] C will be reduced.
  • If t1 has lower precedence than t2 then B [t2] C will be reduced to BC and then A [t1] BC will be reduced.
  • If t1 and t2 have the same precedence, then
    • If they are left then A [t1] B will be reduced to AB and then AB [t2] C will be reduced.
    • If they are right then B [t2] C will be reduced to BC and then A [t1] BC will be reduced.
    • If they are nonassoc the construction is illegal and the parser will issue a syntax error.
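These resolution rules can be sketched in Python. The table transcribes the precedence declarations above, and resolve decides between shifting and reducing; this is illustrative only, since in the real generator the decision is compiled into the parser tables:

```python
# Precedence levels, listed in increasing precedence as in the grammar file.
PRECEDENCE = [
    ("nonassoc", ["t_cmp"]),
    ("left",     ["t_plus"]),
    ("left",     ["t_mult"]),
    ("right",    ["t_power"]),
]

def lookup(term):
    for level, (assoc, terms) in enumerate(PRECEDENCE):
        if term in terms:
            return level, assoc
    raise KeyError(term)

def resolve(t1, t2):
    """Given  A [t1] B  on the stack and t2 as the next terminal, decide:
    'reduce' A [t1] B first, 'shift' t2, or report an 'error' (nonassoc)."""
    p1, assoc1 = lookup(t1)
    p2, _ = lookup(t2)
    if p1 > p2:
        return "reduce"      # t1 binds tighter: reduce A [t1] B to AB
    if p1 < p2:
        return "shift"       # t2 binds tighter: B [t2] C is reduced first
    return {"left": "reduce", "right": "shift", "nonassoc": "error"}[assoc1]

print(resolve("t_plus", "t_mult"))    # shift   (as in 3 + 4 * 5)
print(resolve("t_mult", "t_plus"))    # reduce  (as in 3 * 4 + 5)
print(resolve("t_power", "t_power"))  # shift   (right-associative)
print(resolve("t_cmp", "t_cmp"))      # error   (nonassoc)
```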

Error recovery

During parsing the parser will look at the next terminal symbol and either:

  • decide that parsing has successfully ended and thus accept the input,
  • shift the terminal symbol onto its stack,
  • reduce the top of the stack,
  • or else conclude that there is a syntax error in the input.

In the last case the parser will report the error and it will not produce a parse tree. It is however desirable to try to detect additional syntax errors in the remaining input. Error recovery attempts to get the parser and the input back in synchronization, so that additional syntax errors can be detected.

The parsing machinery in PFC can use a technique involving a special error terminal. In grammar files the terminal [error] is reserved for this purpose.

The basic use of the [error] terminal is to add grammar rules of the form:

    something ==> [error].

When the parser detects an error it will do recovery in the following way:

  • It will pop entities from its stack (corresponding to skipping backwards in the things that are in the process of being parsed), until it gets to a state where the [error] terminal can be shifted onto the stack. Given the grammar rule above this would be a place where something is a valid "next thing".
  • It will then perform actions corresponding to the grammar rule (shift the error terminal and perform the reduction).
  • Finally it will skip input until it finds a terminal that can be shifted onto the stack.

If all that succeeds recovery is completed and parsing resumes normally.
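The three recovery steps can be sketched in Python. This is illustrative only: error_states and follow_set stand in for information that the generated parser tables would supply, and the stack contents and token names below are hypothetical:

```python
def recover(stack, tokens, i, error_states, follow_set):
    """Panic-mode recovery sketch:
    1. pop stack entries until one can shift the [error] terminal,
    2. shift [error] (a real parser would also reduce by the error rule),
    3. skip input until a terminal that can be shifted is found."""
    while stack and stack[-1] not in error_states:
        stack.pop()                       # step 1: pop the stack
    if not stack:
        return None                       # recovery failed
    stack.append("error")                 # step 2: shift the error terminal
    while i < len(tokens) and tokens[i] not in follow_set:
        i += 1                            # step 3: skip input
    return stack, i

# Hypothetical situation: a syntax error in the middle of a clauses section.
stack = ["sectionList", "t_clauses", "clauseHead"]
tokens = ["garbage", "more_garbage", "t_predicates", "t_ident"]
result = recover(stack, tokens, 0,
                 error_states={"sectionList"},
                 follow_set={"t_predicates", "t_clauses", "t_end"})
print(result)   # (['sectionList', 'error'], 2)
```

The parser pops back to the sectionList, shifts the error terminal, and skips forward to the next section keyword, after which normal parsing can resume.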


This strategy works quite well if the language has some clear "synchronization" points. In Visual Prolog the section keywords predicates, clauses, etc. are very good synchronization points: it is quite clear how to parse things after one of these keywords.

    sectionList ==> .
    sectionList ==> sectionList, section.
    section ==> [t_predicates], ...
    section ==> [t_clauses], ...
    section ==> [error].

If a syntax error is detected in a section the parser stack will contain a sectionList followed by whatever it is in the middle of parsing in the current section.

It will then pop until it reaches the sectionList, because at that point our error-section (and thus the [error] terminal) would be a valid next thing.

It will shift and reduce the error-section and then skip terminal symbols until it meets one that can legally follow a section, like a section keyword, end interface, ...


Positions

For many reasons (for example to give error messages and create browse and debug info) the compiler deals with positions in the input. Every rule has access to a (predefined) variable named Cursor. This variable holds four pieces of information:

  • The start position of the construction
  • The end position of the construction
  • Comments before the construction (comments are discussed in the next section)
  • Comments after the construction

The parser calculates such a cursor for a rule from the cursors of the components of the rule. The position handling is quite simple; given a rule:

b ==> a1, a2, ... an.

  • The start position of b is the start position of a1.
  • The end position of b is the end position of an.

The Cursor variable can be used in the semantic action in the rule head together with the variables defined for the body components:

b { mkB(Cursor, A1, A2, ... An ) } ==> a1 { A1 }, a2 { A2 }, ... an { An }.
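The position part of this calculation can be sketched in Python; the Cursor class here is an illustrative stand-in, not the PFC type:

```python
from dataclasses import dataclass, field

@dataclass
class Cursor:
    """Illustrative stand-in for the parser's cursor value."""
    start: int                 # start position of the construction
    end: int                   # end position of the construction
    pre_comments: list = field(default_factory=list)
    post_comments: list = field(default_factory=list)

def combine(component_cursors):
    """Cursor of a rule b ==> a1, ..., an:
    start of a1, end of an."""
    return Cursor(start=component_cursors[0].start,
                  end=component_cursors[-1].end)

# b ==> a1, a2  where a1 spans positions 0..3 and a2 spans 4..9:
c = combine([Cursor(0, 3), Cursor(4, 9)])
print((c.start, c.end))   # (0, 9)
```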


Comments

In most parsing schemes comments are simply discarded by the lexer, so the parser never sees any comments at all. This is fine for many purposes, but there may be situations where you want to use the comments. Visual Prolog programs can for example contain documentation comments, which it may be nice to deal with. And if you want to pretty-print or restructure a program programmatically you don't want to discard the comments.

As mentioned above, comments are placed in the cursors. Many cursors are however discarded during parsing, but the comments in these cursors should not be discarded. Therefore the parser moves comments from discarded cursors to cursors that are not discarded.

The parser assumes that cursors that are used in a semantic action by means of the Cursor variable will stay alive, and will therefore assume that comments on such a cursor will survive. Subsequently it may move additional comments to such a cursor.

Consider a rule for an if-then construction (for simplicity without an else part):

term { mkIfThen(Cursor, Cond, Then) } ==>
     [t_if],
     term { Cond },
     [t_then],
     term { Then },
     [t_end].

When the parser has to reduce this rule it has access to the cursors corresponding to the sub-components, and it also knows which of them can carry comments and which will be discarded.

In general pre-comments are moved leftwards and post-comments are moved rightwards. So the pre-comments of [t_then] will be moved to become post-comments of Cond (given that Cond can carry comments). Likewise the post-comments of [t_then] will be moved to become pre-comments of Then (given that Then can carry comments).

If the first symbol is going to be discarded its pre-comments will be moved to become pre-comments of the parent cursor. Likewise the post-comments of the last symbol will be moved to the post-comments of the parent cursor.

If several adjacent symbols cannot carry comments then the exact movement of comments depends on the placement in the rule. For the if-then rule above:

  • the pre-comments of [t_end] will become post-comments in Then, and
  • the post-comments of [t_end] and all comments of [t_if] will go to the parent cursor

The same principle of "most goes to the parent" also applies at the beginning of the rule. In the middle of a rule most comments will move to the left.

It is important to notice that comments end up on cursors that have been used via the Cursor variable, so to preserve all comments it is important to retain all such referenced cursors. To put it differently: if a semantic action uses the Cursor variable it also takes responsibility for the comments on that cursor.

It is also important to notice that the parser moves comments around by making destructive updates to the cursor structures. So it is not only important to retain cursors received during parsing; you must retain exactly those cursors your semantic actions receive. To be safe you should not assume that comments are stable until the entire parsing is completed.


Conflicts

As briefly mentioned above there are situations where the parser generator cannot produce a parser. Briefly speaking, the parser generator generates parser tables that in a given situation instruct the parser either to shift the next terminal symbol onto the parser stack or to reduce the top elements of the parser stack using one of the grammar rules (aka production rules).

But when calculating the parser tables it may turn out that it would be equally good/bad to shift the next terminal onto the stack and to reduce the stack top. This is a so-called shift-reduce conflict. Likewise it can turn out that there are two equally good reductions that could be applied to the top of the parse stack, a so-called reduce-reduce conflict.
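A conflict can be pictured as two actions landing in the same parser-table cell. The Python fragment below is purely illustrative (the state and action encodings are made up for the example); it shows the cell the ambiguous expression grammar would need before precedence declarations are added:

```python
# A parser table maps (state, lookahead) to exactly ONE action.  For the
# ambiguous expression grammar, the state reached after exp [t_plus] exp
# with lookahead t_mult would need two actions at once:
candidate_actions = {
    ("exp [t_plus] exp .", "t_mult"): [
        ("shift", "t_mult"),                         # leads to 3 + (4 * 5)
        ("reduce", "exp ==> exp, [t_plus], exp"),    # leads to (3 + 4) * 5
    ],
}

# Any cell with more than one candidate action is a conflict:
shift_reduce_conflicts = [
    cell for cell, actions in candidate_actions.items() if len(actions) > 1
]
print(shift_reduce_conflicts)   # [('exp [t_plus] exp .', 't_mult')]
```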

Details about parsing conflicts can be found in the produced log file.

A grammar for a certain language can be written in many ways, and it is often possible to rewrite a grammar that has conflicts into another one that doesn't have conflicts (but recognizes the same language).

It is outside the scope of this article to go into details about conflicts and how to resolve them; unfortunately this may require a relatively deep understanding of parsing. As mentioned above, vipLalrGen is an LALR(1) parser generator like YACC, so there is a lot of existing material about the subject "out there".

External References

Basic reading is the "Dragon Book".