
I know programmers tend to get defensive about the paradigms and tools they use. But in your experience, with the most generic, typical pieces of code you see in Java, C++, or C, is the code more error-prone than a similar piece of code in a declarative or functional programming language?

For example, with Java there can be a lot of boilerplate and setup code needed to call your target routine. Developers may need to look at the implementation details to really understand what happens if they do or do not provide the correct dependencies. In practice the developer never does that, so you end up with NullPointerException bugs and other logic errors.
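
To make this concrete, here is a hypothetical Java sketch of the kind of hidden dependency I mean (all the names are made up):

interface Formatter {
    String format(String data);
}

class ReportService {
    private Formatter formatter;  // must be injected, but nothing in the API enforces it

    void setFormatter(Formatter formatter) {
        this.formatter = formatter;
    }

    String render(String data) {
        // Throws NullPointerException if setFormatter was never called.
        // A caller only learns this contract by reading the implementation.
        return formatter.format(data);
    }
}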

berlinbrown2

5 Answers


The question is a bit unclear. Is one programming style more "error-prone" than another? What would this mean? Humans cause errors, not programs. However, some programming styles are better at mitigating certain types of human error. I think it's reasonable to say that a language that doesn't catch a syntax error at compile time is more error-prone than one that does, and it is in this sense that I will use the term.

Some languages do provide more mitigation than others. For instance, statically typed languages provide compile-time errors that mitigate type violations*. In some languages, such as Haskell, the type system is so powerful that "getting the types right is most of the battle". I would argue that these languages provide more mitigation than others. That is to say, it is easier to have unexpected type violations in dynamically typed languages, but that does not make them worse languages.
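
To illustrate, here is a small, hypothetical Java example of that kind of mitigation (just a sketch, nothing from the question):

import java.util.ArrayList;
import java.util.List;

class TypeCheckDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        numbers.add(42);

        // numbers.add("forty-two");  // rejected at compile time: a String is not an Integer.
        // In a dynamically typed language the equivalent call succeeds, and the type
        // violation only surfaces at run time, if the faulty path is exercised at all.

        System.out.println(numbers.get(0) * 2);  // safe: the compiler guarantees an Integer here
    }
}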

No language will make humans less error-prone, so no language is less error-prone per se. But some languages have more effective mitigation strategies that catch human errors before they become user-facing bugs.

Of course, none of this will make a bad programmer good, nor does it replace more robust mitigation strategies like unit testing.

* This is not to say that dynamically typed languages are strictly more error-prone, only that there is a class of bug (type violation) that will not be mitigated directly by the language.

Robert Harvey
Rein Henrichs
  • Good response. Sorry for the less than obvious question, but I thought I would throw it out there and someone would understand my intent. – berlinbrown2 Apr 21 '11 at 19:32
  • Ada is arguably less error prone than C because of its strict type system, yet both are imperative languages. Functional and strict typing are not the same thing despite what the evangelists say. – mattnz Mar 12 '14 at 02:52

Errors come from humans, not languages. Neither language nor language type is relevant in determining error proclivity.

Lazy coders on the other hand are more error prone than diligent ones.

Joel Etherton
  • So you don't think that (say) Haskell's type system makes it more difficult to make mistakes? – Rein Henrichs Apr 21 '11 at 15:05
  • @Rein Henrichs: No, I don't think that it does. I would agree that it takes more effort to reach a level of proficiency with some languages than others, but the mistakes are still borne of human inattentiveness. Someone who is equally proficient with language A as with language B will make similar mistakes in both. If you take the human out of the equation and have the code auto-generated, one language is not more "mistake prone" than the other. – Joel Etherton Apr 21 '11 at 15:11
  • And yet Haskell's type system fails early (at compile time), catching a large variety of "human inattentiveness" errors. This is sort of like saying that humans will make mistakes driving no matter what, so anti-lock brakes don't make crashes less likely. Is human error the most common cause of program error? Of course. Is it impossible for a language to have features that mitigate human error? I don't think so. – Rein Henrichs Apr 21 '11 at 15:17
  • @Rein Henrichs: To use your own analogy, anti-lock brakes didn't stop the driver from following too closely or speeding in the first place. Keep in mind, the damage errors do can be limited by constructs within the language, but they are caused by humans. You won't find anti-lock brakes on a Formula 1 race car. Does that mean that the car is more error prone than a Toyota Camry? A higher degree of skill is needed to be considered proficient. The mitigation you're referring to just masks poor performance. It does not reduce mistakes. – Joel Etherton Apr 21 '11 at 15:24
  • @Rein Henrichs: Oh, and for your analogy as well. I believe anti-lock brakes do not make crashes less likely. Here is a link to a study that agrees: http://www.sciencedaily.com/releases/2006/09/060927201332.htm – Joel Etherton Apr 21 '11 at 15:28
  • @Joel, whether they do or not is irrelevant, I can easily pick another example. If "the damage errors do can be limited by constructs within the language", then you're saying that some languages make human-caused program error less likely. I agree. Perhaps you should edit your original answer. – Rein Henrichs Apr 21 '11 at 15:32
  • @Rein Henrichs: No, I'm not saying they're less likely. I'm saying they're less damaging. I stand by my original answer. Languages are not prone to errors. People are. – Joel Etherton Apr 21 '11 at 15:34
  • @Rein Henrichs: I invite you to provide an answer to this question with an opposing viewpoint. I am not saying you are wrong. I am simply standing by my own opinion. – Joel Etherton Apr 21 '11 at 15:35
  • If you're arguing that languages can't make humans less error-prone per se, that seems both obviously true and irrelevant. The question as I understand it is whether some languages *mitigate* the human error that does occur, which we both think is true. In fact they all do, but some do so more than others. – Rein Henrichs Apr 21 '11 at 15:40
  • @Rein Henrichs: I don't think that is true. Languages can NOT mitigate human error. In select cases they may reduce the damage it causes, but it still happened. Languages can't make algorithms not faulty, and can't force programmers to use appropriate types. There is much more to programming (in every language type) than syntax. If you give a Lisp expert Notepad and tell him to write an error-free program in Logo, how much mitigation do you think the language is going to give him? Do you think he will be able to do it perfectly the first time/every time? – Joel Etherton Apr 21 '11 at 15:45
  • @Rein Henrichs: Do not confuse language with tools. I see through trolling that you prefer Ruby. If I give you the vi editor and ask you to write an application using C#, how confident are you that the language will keep you from making errors? – Joel Etherton Apr 21 '11 at 15:46
  • I am confident that it will keep me from making the type of errors it is designed to mitigate, for instance syntax errors and type errors. That is to say, I will still make them, but it will mitigate my errors by catching them early, before they become bugs. – Rein Henrichs Apr 21 '11 at 15:54
  • If mitigate doesn't mean "reduce the damage", what do you think it means? – Rein Henrichs Apr 21 '11 at 15:54
  • If you choose to interpret the question as "can languages prevent human error", it becomes trivial and uninteresting. We both agree that languages cannot *prevent* human error from occurring. We also apparently agree that languages can mitigate the "damage caused" by human error. I don't see how you can say that a syntax error is not a mitigation strategy and does not make a language "less error prone" under any useful, non-trivial definition of "error prone". – Rein Henrichs Apr 21 '11 at 15:59
  • In any event, I've tried to unpack the question a bit more in my answer. Hopefully that will help make my position a bit more clear. – Rein Henrichs Apr 21 '11 at 16:00
  • I am perfectly confident that I will make fewer errors writing complicated software in Lisp than in some assembly language. Moreover, I'll be spending a lot of time in the assembler program checking things that the Lisp compiler would take care of for me. Therefore, I think you're completely wrong in considering language irrelevant. – David Thornley Apr 21 '11 at 16:12
  • I must say, the Haskell compiler makes it REALLY hard to cause segfaults ;) – sara Jun 23 '16 at 10:18

That's a highly subjective question. I would argue that it's ultimately clarity that counts most when it comes to reducing errors in programs, and for me personally, declarative programming is clearer. If I can understand the gist of what a method is out to accomplish without reading further, I'm a leg up on someone who has to read that method's body to understand the big picture. That's not to say I can't be misled about what it actually does (a programmer can name methods in misleading ways), which in many ways is worse than not understanding it at all.

I suppose ultimately it depends on what's clearer to you rather more than anything else.

Neil
  • Is it not possible to have a declarative programming method that is constructed in an unclear manner? Is this the language's fault? – Joel Etherton Apr 21 '11 at 15:26
  • @Joel: Humans can mess anything up. – Michael K Apr 21 '11 at 15:35
  • @Michael: Tru dat, brudda. That's basically what I'm saying. If it weren't for humans, code would be perfect. I'm sure the day isn't far off when we're all bowing before the machine overlords. – Joel Etherton Apr 21 '11 at 15:38

An unnecessarily imperative style is more error-prone than a style relying more on pure functions and immutable values, because mutable values have the potential for aliasing bugs. If a particular program can be implemented both ways, the imperative approach has a higher risk of bugs, all else being equal.

However, not all languages support a pure functional style well. It's relatively difficult to make good use of immutability in Java, for example, because neither the language nor the standard libraries support it.
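
As a minimal sketch of the kind of aliasing bug meant here (hypothetical names; List.copyOf needs Java 10+):

import java.util.ArrayList;
import java.util.List;

class AliasingDemo {
    public static void main(String[] args) {
        List<String> original = new ArrayList<>();
        original.add("a");

        List<String> alias = original;  // both variables refer to the same mutable list
        alias.add("b");                 // mutates what "original" sees, possibly far away in real code

        System.out.println(original);   // prints [a, b], surprising if original was assumed untouched

        List<String> frozen = List.copyOf(original);  // an immutable snapshot has no such problem
        // frozen.add("c");             // would throw UnsupportedOperationException
    }
}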

Doval

Humans make errors. The more a human has to do, the more errors he makes. Therefore, there are two things a language can do:

  • Help detect errors early, e.g. through a strong type system. This has been covered by other answers.
  • Reduce the amount of work the human has to do.

The second point isn't simply about terseness. "Work" the human has to do isn't just the typing of the code. A lot of the work is in mapping from your idea of how something works to a way of expressing that in the code.

As a consequence, the more directly your ideas map to constructs of the programming language, the less translating you have to do, and the fewer errors you introduce in that translation. For example, in a language without higher-order functions, if you have a list of things and you want a list of things that are derived from the original things, you need to translate from the abstract idea "give me a list of derivatives of things in another list" to "create a list; go over the original list; compute the derivative of the current element; add it to the target list" and then translate that into code like

// Item and Result stand in for whatever element types are involved.
List<Result> result = new ArrayList<>();
for (Item element : source) {           // what if you don't even have foreach?
    result.add(deriveFrom(element));
}

In a language that has higher-order functions, you can more clearly express your idea, because you only have to translate it to "use the thing that gives me a list with elements corresponding to another list; give it the way to derive a new element", or in code:

List<Result> result = source.stream().map(element -> deriveFrom(element)).toList();

The code is succinct, but more importantly, it more directly expresses the intention. There was less mental translation, and thus less room for error.

Sebastian Redl