I have been programming for many years, but one task that still takes me inordinately long is specifying a grammar for a parser. Even after this excessive effort, I'm never sure whether the grammar I've come up with is good (by any reasonable measure of "good").
I don't expect that there is an algorithm for automating the process of specifying a grammar, but I hope that there are ways to structure the problem that eliminate much of the guesswork and trial-and-error of my current approach.
My first thought was to read about parsers, and I've done some of this, but everything I've read on the subject takes the grammar as a given (or as trivial enough to specify by inspection) and focuses on translating that grammar into a parser. I'm interested in the problem immediately before that: how to specify the grammar in the first place.
I'm primarily interested in the problem of specifying a grammar that formally represents a collection of concrete examples (positive and negative). This is different from the problem of designing a new syntax. Thanks to Macneil for pointing out this distinction.
I had never really appreciated the distinction between a grammar and a syntax, but now that I'm beginning to see it, I can sharpen my first clarification: I'm primarily interested in specifying a grammar that will enforce a predefined syntax. It just so happens that, in my case, the basis for this syntax is usually a collection of positive and negative examples.
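To make the "examples as spec" idea concrete, here is a minimal sketch of what I mean. The grammar, the recognizer, and all the example strings are my own invented toy (comma-separated identifier lists), not taken from any real project; the point is only that the examples serve as the specification that a candidate grammar must satisfy:

```python
import re

# Toy candidate grammar (hypothetical, for illustration only):
#   list  -> ident ("," ident)*
#   ident -> [A-Za-z_][A-Za-z0-9_]*
IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def matches(text: str) -> bool:
    """Hand-rolled recognizer for the toy 'list' grammar above."""
    pos = 0
    while True:
        m = IDENT.match(text, pos)
        if not m:
            return False          # expected an identifier here
        pos = m.end()
        if pos == len(text):
            return True           # consumed all input: accepted
        if text[pos] != ",":
            return False          # only "," may follow an identifier
        pos += 1                  # consume the comma and continue

# The "specification": strings the grammar must accept and must reject.
positives = ["a", "a,b", "foo,bar,baz"]
negatives = ["", "a,", ",a", "a,,b", "a b"]

assert all(matches(s) for s in positives)
assert not any(matches(s) for s in negatives)
```

In practice my examples are far messier than this, but the workflow is the same: guess a grammar, check it against the examples, and revise. It's that guessing step I'd like to make more systematic.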
How is the grammar for a parser specified? Is there a book or reference that serves as the de facto standard for best practices, design methodologies, and other helpful information about specifying a grammar for a parser? When reading about parser grammars, what points should I focus on?