There is a chapter in "Making Software: What Really Works, and Why We Believe It", edited by Andy Oram and Greg Wilson, about software defects and what metrics can be used to predict them.
To summarize (from what I can remember): they used an open source C codebase that had a published defect tracking history, and they applied a variety of well-known metrics to see which best predicted the presence of defects. The first metric they started with was lines of code (minus comments), which showed a correlation with defects (i.e., as LOC increases, so do defects). They did the same for a variety of other metrics (I don't remember which ones off the top of my head) and ultimately concluded that the more complex metrics were not significantly better at predicting defects than a simple LOC count.
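To make the metric concrete, here is a rough sketch (in Scala, my choice rather than anything from the chapter) of what a "LOC minus comments" count might look like for a single C source file. It is deliberately naive: a real counting tool would also handle comment markers inside string literals, line continuations, and so on.

```scala
import scala.io.Source

// Rough sketch of a "LOC minus comments" count for one C source file.
// Naive on purpose: it ignores comment markers inside string literals,
// which a real counting tool would have to handle.
def nonCommentLoc(path: String): Int = {
  val text = Source.fromFile(path).mkString
  // Strip /* ... */ block comments ((?s) lets them span multiple lines).
  val noBlockComments = text.replaceAll("(?s)/\\*.*?\\*/", "")
  noBlockComments
    .split("\n")
    .map(_.replaceAll("//.*", "").trim) // strip // line comments
    .count(_.nonEmpty)                  // blank lines don't count
}
```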
It would be easy to infer from this that choosing a less verbose language (a dynamic language?) will result in fewer lines of code and thus fewer defects. But the research in "Making Software" did not discuss the effect of language choice on defects, or on the classes of defects. For example, perhaps a Java program can be rewritten in Clojure (or Scala, or Groovy, or ...) with more than a 10x savings in LOC, and from that you might infer 10x fewer defects.
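As a toy illustration of where such savings come from (my own sketch, not from the book), here is the same small computation written twice in Scala, once in an imperative Java-like style and once idiomatically. The line count shrinks, but whether the defect count shrinks with it is exactly the open question:

```scala
// Toy illustration only: the same computation (sum of the squares of
// the even numbers) in a verbose Java-like style and in idiomatic
// Scala. The line savings here imply nothing about defect rates.
def sumEvenSquaresVerbose(xs: List[Int]): Int = {
  var total = 0
  for (x <- xs) {
    if (x % 2 == 0) {
      total += x * x
    }
  }
  total
}

def sumEvenSquares(xs: List[Int]): Int =
  xs.filter(_ % 2 == 0).map(x => x * x).sum
```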
But is it possible that the concise language, while less verbose, is more prone to programmer error (relative to the more verbose language)? Or that defects in code written in the less verbose language are 10x harder to find and fix? The research in "Making Software" was a fine start, but it left me wanting more. Is there anything published on this topic?