10

This is part of a series of questions that focus on the sister project to the Abstraction Project, which aims to abstract the concepts used in language design in the form of a framework. The sister project is called OILexer, and it aims to construct a parser from grammar files, without the use of code injection on matches.

Some other pages associated with these questions can be viewed here (structural typing) and here (ease of use). The meta-topic associated with an inquiry about the framework and the proper place to post can be found here.

I'm getting to the point where I'm about to start extracting the parse tree out of a given grammar, followed by a recursive descent parser that uses DFAs to discern forward paths (similar to ANTLR 4's LL(*)), so I figured I'd open it up to get insight.

In a parser compiler, what kinds of features are ideal?

So far here is a brief overview of what's implemented:

  1. Templates
  2. Look-ahead prediction, knowing what's valid at a given point.
  3. Rule 'deliteralization': taking the literals within rules and resolving which token they come from.
  4. Nondeterministic Automata
  5. Deterministic Automata
  6. Simple lexical state machine for token recognition
  7. Token automaton methods:
    • Scan - useful for comments: Comment := "/*" Scan("*/");
    • Subtract - Useful for identifiers: Identifier := Subtract(IdentifierBody, Keywords);
      • Ensures the identifier doesn't accept keywords (see the sketch of the underlying automaton difference after this list).
    • Encode - Encodes an automaton as a series of X transitions in base N.
      • UnicodeEscape := "\\u" BaseEncode(IdentifierCharNoEscape, 16, 4);
        • Makes a Unicode escape in hexadecimal, with four hex-digit transitions. The difference between this and [0-9A-Fa-f]{4} is that the resulting automaton from Encode limits the allowed set of hexadecimal values to the scope of IdentifierCharNoEscape. So if you give it \u005c (a backslash, which is excluded from IdentifierCharNoEscape), the encoded version will not accept the value. Things like this come with a serious caveat: use sparingly, as the resulting automaton can be quite complex.
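
Conceptually, Subtract is a language difference over the two underlying automata. Below is a minimal Python sketch of that idea using the standard product construction; the dictionary-based DFA shape and the name dfa_subtract are invented for illustration and have nothing to do with OILexer's internals.

def dfa_subtract(a, b, alphabet):
    """Product construction for L(a) minus L(b).

    Each DFA is a dict with 'start', 'delta' (a (state, symbol) -> state map;
    missing entries mean an implicit dead/reject state) and 'accept' (a set).
    Purely illustrative; not how OILexer represents its automata.
    """
    dead = object()                        # shared implicit reject state

    def step(dfa, state, symbol):
        if state is dead:
            return dead
        return dfa['delta'].get((state, symbol), dead)

    start = (a['start'], b['start'])
    delta, accept = {}, set()
    seen, work = {start}, [start]
    while work:
        sa, sb = work.pop()
        if sa in a['accept'] and sb not in b['accept']:
            accept.add((sa, sb))           # accept iff A accepts and B rejects
        for symbol in alphabet:
            nxt = (step(a, sa, symbol), step(b, sb, symbol))
            delta[((sa, sb), symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return {'start': start, 'delta': delta, 'accept': accept}

Because the result's states are pairs of states from the two inputs, machines built this way can grow quickly, which is the same caveat noted for Encode above.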

What isn't implemented is CST generation; I need to adjust the deterministic automata to carry over the proper context to get this working.

For anyone interested, I've uploaded a pretty-printed version of the original form of the T*y♯ project. Each file should link to every other file. I started to link in the individual rules so they can be followed, but it would've taken far too long (it would've been simpler to automate!).

If more context is needed, please post accordingly.

Edit 5-14-2013: I've written code to create GraphViz graphs for the state machines within a given language. Here is a GraphViz digraph of the AssemblyPart. The members linked in the language description should have a rulename.txt in their relative folder with the digraph for that rule. Some of the language description has changed since I posted the example; this is due to simplifications to the grammar. Here's an interesting GraphViz image.
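
For anyone curious, emitting such a digraph is straightforward. Here is a minimal Python sketch over the same toy DFA shape used in the Subtract example above; it is only an illustration and not the code that produced the linked graphs.

def dfa_to_dot(dfa, name="Rule"):
    """Emit a GraphViz digraph for the toy DFA structure sketched earlier.
    Illustrative only; not the emitter that generated the linked graphs."""
    ids = {}

    def node(state):                       # stable, printable node names
        return ids.setdefault(state, "q%d" % len(ids))

    out = ['digraph "%s" {' % name, "  rankdir=LR;"]
    out.append("  __start [shape=point];")
    out.append("  __start -> %s;" % node(dfa["start"]))
    for (src, symbol), dst in dfa["delta"].items():
        out.append('  %s -> %s [label="%s"];' % (node(src), node(dst), symbol))
    for state in dfa["accept"]:
        out.append("  %s [shape=doublecircle];" % node(state))
    out.append("}")
    return "\n".join(out)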

  • 8
    Wall o' text. Don't take this the wrong way, I appreciate a thoroughly explained problem. In this case it is simply a little too verbose. From what I've gathered, you're asking what features should be included in a grammar parser or how to make one without starting from scratch? Please edit to answer the following questions (you don't have to rewrite, simply append to the end in summary): What is your problem? What constraints are you bound by in possible solutions to your problem (it must be fast, it must be LL*, etc.)? – Neil May 07 '13 at 07:22
  • 1
    I'm asking for insight into the feature set. The focus is on ease of use. The difficulty lies in giving someone who doesn't know the project enough insight into it that they're informed about its focus. I'm not asking 'how to do it'; I'm asking about usability. Suggestions on how to trim the question are appreciated. – Allen Clark Copeland Jr May 07 '13 at 07:25
  • I've trimmed the question to the major focus. I'll add context shortly. – Allen Clark Copeland Jr May 07 '13 at 07:33
  • I do appreciate the trim. It is much more readable, and I would go so far as to say the chance of someone answering this question is greatly increased; however, I personally can't answer. – Neil May 07 '13 at 07:41
  • 1
    To me, it is not obvious what the project is all about. For example, since the days of yacc, we have seen lots of parser generators. What is different in your OILexer? What is new? – Ingo May 10 '13 at 04:35
  • 1
    The goal of this project is to simplify parser generation. It is similar, yes, to YACC/Bison and FLEX/LEX. The major difference is to avoid the complexity involved in those programs. Keeping things simple and to the point is the main goal. This is why I created a format that is devoid of odd sectioning; rather, the goal is to make it similar to regular programming, only specific to language development. Tokens are defined using ':=' after their name, rules are defined using '::=' after their name. Templates use '<' and '>' for their arguments, followed by '::=' since they share rule syntax. – Allen Clark Copeland Jr May 10 '13 at 06:06
  • I assume you've seen ANTLR 4. That does a really nice job of simplifying things. Anyway, about the one thing I would really want is an automatically created (and filled) AST. I.e., I don't want to have to define my own since it will just mirror my grammar anyway. If I could list my productions and then know that I'll get an AST back (with line numbers for error reporting!) then I can get to work a lot faster. – chrisaycock May 10 '13 at 17:57
  • @chrisaycock - http://www.antlr2.org/doc/options.html see buildAST. – psr May 10 '13 at 20:28
  • 3
    This hellish focus on parsing seems misplaced; it's a well-solved problem, and it hardly dents what you need to process programming languages. Google for my essay on "life after parsing". – Ira Baxter Nov 20 '13 at 03:59
  • 1
    @IraBaxter: I'm aware it's a well-understood problem area; however, that's for the majority of professionals who have already long since studied this realm. I'm not a professional in language theory, so I'm using this to teach myself about it. Opening discussion to others might shed light on things I've overlooked: on parsing. I'm also constructing an AST relative to the end goal of the project: a fully functional compiler. The focus is to teach me the entirety of language theory from start to end. I can't expect to see what others haven't if I don't understand the processes involved. – Allen Clark Copeland Jr Nov 20 '13 at 19:58
  • @IraBaxter that is a nice article and I learned a lot from it; I acknowledge that you are far more knowledgeable than me. However, I still think you are exaggerating when you say that parsing is a "well-solved problem". While the theory is well-understood and there are many fine tools, I haven't found a single tool which completely solves everybody's parsing needs, without any drawbacks or tradeoffs (perhaps I just need to try out more). And while there are tradeoffs, there's room for improvement (I hope). Plus, programmers need to know enough to understand which tradeoffs to choose ... –  Dec 02 '13 at 21:37
  • I also wish you had not used the word "hellish". It makes an otherwise useful comment seem vaguely rude and insulting. –  Dec 02 '13 at 21:38
  • 1
    Sorry, I see "if I just had a parser" far too often. I don't know what a "list of everybody's needs" are, but I can tell you from 15 years of experience that GLR parsers are pretty effective. You write context-free grammars, they pretty much parse them. GLL parsers are supposed to have the same effect, but I feel no compunction to switch from an already working solution. More importantly, there's so much additional machinery needed to process languages, that focusing on the parsing part just doesn't seem worth the overall trouble. I grant that you can learn a lot by building one. – Ira Baxter Dec 02 '13 at 21:54

4 Answers

6

I'm not experienced in language design, but I've had a go at writing a parser once, when I was creating an IDE for a game engine.

Something that is important to your ultimate end users, in my opinion, is error messages that make sense. Not a particularly earth-shattering point, I know, but following it backwards, one of the key implications of this is that you need to be able to avoid false positives. Where do the false positives come from? They come from the parser falling over at the first error and never quite recovering. C/C++ is notorious for this (although the newer compilers are a bit smarter).

So what do you need instead? I think that rather than just knowing what is or isn't valid at a point, you need to know how to take what isn't valid and make a minimal change to make it valid, so that you can continue parsing without generating false errors related to your recursive descent getting confused. Being able to build a parser that can generate this information not only gives you a very robust parser, but it also opens up some fantastic features for the software that consumes the parser.
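
To make that concrete, here is a minimal Python sketch of panic-mode recovery in a hand-written recursive descent parser. The names (Parser, SYNC_TOKENS, statement) are hypothetical, and this is only a crude approximation of the "minimal change to make it valid" idea, but it shows the mechanism: record a diagnostic, skip to a synchronization token, and keep parsing instead of cascading false errors.

class ParseError(Exception):
    pass

class Parser:
    SYNC_TOKENS = {";", "}", "EOF"}        # tokens it is safe to resume at

    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0
        self.errors = []                   # collect diagnostics, keep parsing

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "EOF"

    def expect(self, kind):
        if self.peek() == kind:
            self.pos += 1
            return
        self.errors.append("expected %r, found %r at token %d"
                           % (kind, self.peek(), self.pos))
        raise ParseError()

    def statement(self):
        # statement := IDENT '=' IDENT ';'
        try:
            self.expect("IDENT")
            self.expect("=")
            self.expect("IDENT")
            self.expect(";")
        except ParseError:
            # Skip ahead to a synchronization token, then resume normally,
            # so one mistake doesn't produce a cascade of false positives.
            while self.peek() not in self.SYNC_TOKENS:
                self.pos += 1
            if self.peek() == ";":
                self.pos += 1

p = Parser(["IDENT", "=", "=", "IDENT", ";", "IDENT", "=", "IDENT", ";"])
p.statement()
p.statement()
print(p.errors)    # one error for the stray '=', and the second statement still parses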

I realize that I may be suggesting something really difficult, or stupidly obvious, sorry if this is the case. If this is not the sort of thing you're looking for, I'll happily delete my answer.

guysherman
  • 669
  • 4
  • 8
  • This is one of the things I plan on doing. To assist with my knowledge of the domain, a friend of mine suggested writing an actual parser by hand to get the hang of automating it. One thing I realized pretty quickly: parsers are complex, and there are things we do by hand that simplify the process. Rules and templates share the same syntax; however, there are language elements which are valid in templates but not in rules, and there are internal states that handle this task. Which brought an idea to mind: rules should be able to specify path aids to make sharing sub-rules easier. – Allen Clark Copeland Jr May 10 '13 at 15:30
  • ... this should be fairly simple to carry over into the automata, but it would require the automata to have state-based conditions. I'll work on this some and get back to you. ANTLR uses finite state automata to handle cycles of, say, "T"*, whereas I'll use them to handle most of the parse process, since the reductions should be cleaner as states when there are 800+ variations in a rule (this would bloat quickly as spaghetti code in standard if/else form). – Allen Clark Copeland Jr May 10 '13 at 15:33
6

I've been working on a lot of parsing recently, and some of the key features are:

  • A programmatic API -- so it can be used from within a programming language, ideally by simply importing a library. It can have a GUI or BNF-like interface as well, but the programmatic one is the key, because you can re-use your tooling, IDE, static analysis, testing, language abstraction facilities, programmer familiarity, documentation generator, build process, etc. Plus, you can interactively play with tiny parsers, which dramatically reduces the learning curve. These reasons place it at the top of my "important features" list. (A tiny sketch of this style appears at the end of this answer.)

  • Error reporting, as @guysherman mentioned. When an error is found, I want to know where the error was and what was going on when it happened. Unfortunately, I haven't been able to find good resources explaining how to generate decent errors when backtracking comes into play. (Although note @Sk-logic's comment below.)

  • Partial results. When parsing fails, I want to be able to see what was successfully parsed from the part of the input that was before the location of the error.

  • Abstraction. You can never build in enough functions, and the user will always need more, so trying to figure out all the possible functions up-front is doomed to failure. Is this what you mean by templates?

  • I agree with your #2 (look-ahead prediction). I think it helps to generate good error reports. Is it useful for anything else?

  • Support for building a parse tree as parsing occurs, perhaps:

    • a concrete syntax tree, for which the structure of the tree corresponds directly to the grammar and includes layout information for error reporting of later phases. In this case, the client shouldn't have to do anything to get the right tree structure -- it should depend directly on the grammar.
    • An abstract syntax tree. In this case, the user is able to fiddle with any and all parse trees.
  • Some kind of logging. I'm not sure about this one; maybe to show a trace of the rules the parser has tried, or to keep track of junk tokens such as whitespace or comments, in case (for instance) you want to use the tokens to generate HTML documentation.

  • Ability to deal with context sensitive languages. Not sure how important this one is, either -- in practice, it seems cleaner to parse a superset of a language with a context-free grammar, then to check context-sensitive constraints in additional later passes.

  • Custom error messages, so that I can tune the error reports in specific situations and perhaps more quickly understand and fix problems.

On the other hand, I don't find error-correction particularly important -- although I'm not up to date on current progress. The problems I've noticed are that the potential corrections that the tools supply are: 1) too numerous, and 2) don't correspond to the actual mistakes made, and so aren't all that helpful. Hopefully this situation will improve (or perhaps has already done so).
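
As a tiny, generic illustration of the first bullet above (a combinator-style programmatic API; this is plain Python, the names are invented, and it has nothing to do with OILexer):

def literal(s):
    def parse(text, pos):
        if text.startswith(s, pos):
            return (s, pos + len(s))
        return None                        # failure
    return parse

def seq(*parsers):
    def parse(text, pos):
        values = []
        for p in parsers:
            result = p(text, pos)
            if result is None:
                return None
            value, pos = result
            values.append(value)
        return (values, pos)
    return parse

def alt(*parsers):
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

# Grammars become ordinary values built with function calls, so they can be
# imported, composed, unit-tested, and poked at interactively:
bool_lit = alt(literal("true"), literal("false"))
assignment = seq(literal("flag"), literal("="), bool_lit)
print(assignment("flag=true", 0))          # (['flag', '=', 'true'], 9)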

Robert Harvey
  • 198,589
  • 55
  • 464
  • 673
  • I've edited the question body to include a link to the PrecedenceHelper in the bullet that reads 'Templates'. It allows parameter-array tuples, so if you have four parameters, each a parameter array, the template must be used in argument sets of four. – Allen Clark Copeland Jr May 18 '13 at 03:48
  • The main reason you would construct the CST is the overall layout of the parsed file. If you want to *prettyprint* the document, your best bet is using a CST, because ASTs, as their name implies, lack the information to handle the odd spacing that a CST would capture. Transforming a CST is usually pretty easy if it's a good CST. – Allen Clark Copeland Jr May 18 '13 at 16:24
  • I've edited the question again on the topic of built-in functions available to use. – Allen Clark Copeland Jr May 20 '13 at 23:38
  • I don't think I did a great job expressing my point about templates/functions: I meant that because you can never have enough, a system shouldn't try to figure them out ahead of time; the user needs to be able to create his own. –  May 21 '13 at 02:54
  • 1
    I found one approach particularly useful for error reporting with infinite backtracking (Packrat): each production rule is annotated with an error message (worded as "blah-blah-blah expected"), and upon failure such a message is stored in the stream the same way as memoised tokens. If none of the failures are recoverable (parsing terminated before reaching the end of the stream), the rightmost error message (or a collection of such messages) is the most appropriate. That's the easiest thing to do; of course, there are ways of refining it further with more annotations. (A small sketch of this idea follows this comment.) – SK-logic May 21 '13 at 07:57
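
A minimal Python sketch of the "rightmost failure" technique from the comment above (a generic illustration, not tied to any particular Packrat implementation; all names are invented):

class Failures:
    def __init__(self):
        self.pos = -1
        self.expected = set()

    def record(self, pos, message):
        if pos > self.pos:                 # a failure further right wins outright
            self.pos, self.expected = pos, {message}
        elif pos == self.pos:              # same position: merge expectations
            self.expected.add(message)

def literal(s):
    def parse(text, pos, failures):
        if text.startswith(s, pos):
            return (s, pos + len(s))
        failures.record(pos, "%r expected" % s)
        return None
    return parse

failures = Failures()
result = literal("let")("lex x = 1", 0, failures)
if result is None:
    print("parse error at %d: %s"
          % (failures.pos, " or ".join(sorted(failures.expected))))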
1

Maybe MIX is of general interest. The linked MIX paper refers to a paper titled as follows:

MIX: A SELF-APPLICABLE PARTIAL EVALUATOR FOR EXPERIMENTS IN COMPILER GENERATION

It uses partial evaluation to generate a compiler.

Fun stuff; I had to use it during my CS studies to actually write a compiler. Some time ago ;-)

SteAp
  • 101
  • 3
0

The grammar must not have restrictions like "don't use left-recursive rules". It is ridiculous that widely used tools today still have this restriction and can only understand LL grammars, almost 40 years after yacc did it right.

An example for right recursion (using yacc syntax):

list: 
      elem                  { $$ = singleton($1); }
    | elem ',' list         { $$ = cons($1, $3);  }
    ;

An example for left recursion (using yacc syntax):

funapp:
    term                    { $$ = $1; }
    | funapp term           { $$ = application($1, $2); }
    ;

Now, this could maybe be "refactored" to something else, but in both cases the specific kind of recursion is just the "right" way to write it, because (in the example language) lists are right recursive while function application is left recursive.

One can expect advanced tools to support the natural way of writing things down, instead of requiring one to "refactor" everything into the left- or right-recursive form the tool imposes.

Ingo
  • 3,903
  • 18
  • 23
  • Yes, it's nice not to have arbitrary restrictions like that. However, what is an example of left-recursion that can't be replaced with a repetition operator (such as regexes `*` or `+` quantifiers)? I freely admit that my knowledge in this area is limited, but I've never run into a use of left-recursion that couldn't be refactored into repetition. And I found the repetition version more clear as well (although that's just personal preference). –  Dec 10 '13 at 14:46
  • 1
    @MattFenwick See my edit. Note how the syntax directly leads to simple and natural semantic actions (e.g., for creating a syntax tree). With repetition (which is not available in yacc, by the way), I guess you often need to check whether you got an empty list, a singleton, etc. – Ingo Dec 10 '13 at 14:59
  • Thank you for the response. I think I understand better now -- I would prefer to write those examples as `list = sepBy1(',', elem)` and `funapp = term{+}` (and of course `sepBy1` and `+` would be implemented in terms of left-/right-recursion, and produce standard syntax trees). So it's not that I think left- and right-recursion are bad, it's just that I feel they're low-level and would like to use a higher-level abstraction where possible to make things clearer. Thanks again! –  Dec 10 '13 at 17:22
  • 1
    You're welcome @MattFenwick. But then it is maybe a matter of taste. For me, recursion is (at least in the context of languages, which are all inherently recursive or totally uninteresting) the more natural way to think of it. Also, a tree is a recursive data structure, so I see no need to *fall back* to iteration to simulate recursion. But, of course, preferences are different. – Ingo Dec 10 '13 at 17:28