104

I'm working through the book "Head First Python" (it's my language to learn this year) and I got to a section where they argue about two code techniques:
Checking First vs Exception handling.

Here is a sample of the Python code:

# Checking First
for eachLine in open("../../data/sketch.txt"):
    if eachLine.find(":") != -1:
        (role, lineSpoken) = eachLine.split(":",1)
        print("role=%(role)s lineSpoken=%(lineSpoken)s" % locals())

# Exception handling        
for eachLine in open("../../data/sketch.txt"):
    try:
        (role, lineSpoken) = eachLine.split(":",1)
        print("role=%(role)s lineSpoken=%(lineSpoken)s" % locals())
    except:
        pass

The first example deals directly with a problem in the .split function. The second one just lets the exception handler deal with it (and ignores the problem).

They argue in the book to use exception handling instead of checking first. The argument is that the exception code will catch all errors, where checking first will only catch the things you think about (and you miss the corner cases). I have been taught to check first, so my intial instinct was to do that, but their idea is interesting. I had never thought of using the exception handling to deal with cases.

Which of the two is the generally considered the better practice?

Deduplicator
  • 8,591
  • 5
  • 31
  • 50
jmq
  • 6,048
  • 5
  • 28
  • 39
  • 13
    That section in the book is not smart. If you are in a loop and you are throwing exceptions over and over its very costly. I tried to outline some good points of when to do this. – King Friday Mar 11 '12 at 05:29
  • 9
    Just don't fall into the "file exists check" trap. File exists != has access to file, or that it will exist in the 10 ms it takes to get to my file open call, etc. http://blogs.msdn.com/b/jaredpar/archive/2009/04/27/understanding-the-is-was-and-will-of-programming.aspx – Billy ONeal Mar 11 '12 at 06:00
  • 12
    Exceptions are thought of differently in Python than in other languages. For instance the way to iterate through a collection is to call .next() on it until it throws an exception. – WuHoUnited Mar 11 '12 at 14:25
  • 4
    @emeraldcode.com That's not entirely true about Python. I don't know the specifics, but the language has been built around that paradigm, so exception throwing isn't nearly as costly as in other languages. – Izkata Mar 12 '12 at 00:51
  • 1
    That said, for this example, I would use a guard statement: `if -1 == eachLine.find(":"): continue`, then the remainder of the loop wouldn't be indented, either. – Izkata Mar 12 '12 at 00:54
  • @BillyONeal Which is why fopen returns a null file pointer in C/C++... no exception needed. – Powerlord Nov 17 '12 at 18:52
  • @Powerlord: Did I claim otherwise? Error codes are how error handling works in C, so obviously any kind of exception mechanism wouldn't be okay. – Billy ONeal Nov 17 '12 at 21:43
  • There's a false dichotomy suggested between "deals directly with the problem" and "lets the exception handler deal with it (and ignores the problem)" since the first is in fact "deals directly with the problem by ignoring it", but the phrasing makes the latter seem to be negligent compared to the former when they're either equally correct in ignoring the problem, or equally wrong. – Jon Hanna Dec 13 '14 at 10:48
  • One thing that the small examples in books do not examine is the case where you have exceptions that are caught a couple of stack frames up. If you want to avoid throwing an exception you have to designate special return codes for each function, and check those return values at each function evaluation. – Mutant Bob May 21 '15 at 15:37
  • The book’s example is not a good one. LBYL is the correct way to handle things like data like this IMO because it makes the intention more clear and puts related logic closer together (i.e., skip any lines without semicolons). However, things like `open()` cannot be LBYL: if you check for a file’s existence/accessibility before opening it, there is no guarantee that it will still be openable when your `open()` call runs. So you **have** to use the result of `open()` (whether that be an exception or `None` or whatever)—putting in LBYL is pointless bloat and gives a false sense of security. – binki Oct 01 '20 at 18:34

9 Answers9

81

In Python in particular, it is usually considered better practice to catch the exception. It tends to get called Easier to Ask for Forgiveness than Permission (EAFP), compared to Look Before You Leap (LBYL). There are cases where LBYL will give you subtle bugs in some cases.

However, do be careful of bare except: statements as well as overbroad except statements, since they can both also mask bugs - something like this would be better:

for eachLine in open("../../data/sketch.txt"):
    try:
        role, lineSpoken = eachLine.split(":",1)
    except ValueError:
        pass
    else:
        print("role=%(role)s lineSpoken=%(lineSpoken)s" % locals())
lvc
  • 1,006
  • 6
  • 3
  • 10
    As a .NET programmer, I cringe at this. But then again, _you people_ do everything weird. :) – Phil Apr 04 '14 at 19:24
  • This is exceptionally frustrating (pun not intended) when APIs are not consistent about which exceptions are thrown under which circumstances, or when multiple different kinds of failures are thrown under the same exception type. – Jack Dec 31 '15 at 20:34
  • 1
    So you end up using the same mechanism for unexpected errors and expected kind-of return values. That's about as great as using 0 as a number, a false bool, AND an invalid pointer that will quit your process with an exit code of 128 + SIGSEGV, because how convenient, you don't need different things now. Like the spork! Or shoes with toes... – yeoman May 31 '16 at 07:55
  • 3
    @yeoman when to *throw* an exception is a different question, this one is about using `try`/`except` rather than setting up a conditional for "is the following likely to throw an exception", and Python practice is definitely to prefer the former. It doesn't hurt that that approach is (probably) more efficient here, since, in the case where the split succeeds, you only walk the string once. As to whether `split` should throw an exception here, I would say it definitely should - one common rule is you should throw when you can't do what your name says, and you can't split at a missing delimiter. – lvc May 31 '16 at 08:44
  • I don't find it bad or slow or terrible, especially as only a specific exception is caught. Ans I actually like Python. It's just funny how it sometimes shows no taste at all, as said C use of the number zero, the Spork, and Randall Munroe's all-time favorite shoes with toes :) Plus, when I'm in Python and an API says this is the way to do it, I'll go for it :) Checking conditions in advance is of course never a good idea because of concurrency, coroutines, or one of those being added further down the road... – yeoman May 31 '16 at 10:31
  • 2
    I like this answer best, but it still contains the biggest danger with _Ask Forgiveness Not Permission_: the assumption that there's only one potential cause for an exception. Even if we honestly don't care about any line in the input file that doesn't contain a colon, `ValueError` is a pretty broad exception, only slightly better than Pokémon-style ('gotta catch 'em all') exception handling. In a real-world example, it's easy to imagine code being added within the **`try`** block that could cause a different `ValueError` that would be silently ignored, creating a difficult-to-find bug. – Michael Scheper Nov 23 '16 at 18:30
  • 1
    @MichaelScheper: This is a major disagreement I have with anti-exception folks: They keep trying to apply logic from languages with rare or no exceptions to languages with common exceptions (see also Joel's quote in the accepted answer). In Python, a `try` block with multiple lines is a code smell. Cleanup code anywhere other than `finally` or `with` is also a code smell. If you avoid those and other obvious code smells, then Python's exceptions are not a problem. But people keep complaining about these problems that don't actually exist... as if you just can write C in Python. – Kevin Oct 27 '18 at 20:07
  • 1
    @Kevin: I'm definitely not anti-exception, not even anti-AFNP. I'm just tired of hard-to-find bugs caused by poor application of AFNP, and have been trying to prevent my team of smart but inexperienced devs to avoid coding such bugs. I honestly haven't heard of multiline `try` blocks being a smell, and I've been doing a lot of reading, so I ask you to cite that, in case I've overlooked an important reference. In any case, a single line of code often _can_ generate numerous classes of exceptions, so Pokémon does still smell to me. Every linting tool I've used sniffs them out, too. – Michael Scheper Oct 29 '18 at 14:32
  • Could you please check the first link? It's not working! Could you please check the 2nd and 3rd links as well? Because both of them more or less lead to the same page: https://docs.python.org/3/ and I can't find any specific point I believe you wanted to make. Thanks in advance! – Milan Oct 07 '21 at 20:25
74

In .NET, it is common practice to avoid the overuse of Exceptions. One argument is performance: in .NET, throwing an exception is computationally expensive.

Another reason to avoid their overuse is that it can be very difficult to read code that relies too much on them. Joel Spolsky's blog entry does a good job of describing the issue.

At the heart of the argument is the following quote:

The reasoning is that I consider exceptions to be no better than "goto's", considered harmful since the 1960s, in that they create an abrupt jump from one point of code to another. In fact they are significantly worse than goto's:

1. They are invisible in the source code. Looking at a block of code, including functions which may or may not throw exceptions, there is no way to see which exceptions might be thrown and from where. This means that even careful code inspection doesn't reveal potential bugs.

2. They create too many possible exit points for a function. To write correct code, you really have to think about every possible code path through your function. Every time you call a function that can raise an exception and don't catch it on the spot, you create opportunities for surprise bugs caused by functions that terminated abruptly, leaving data in an inconsistent state, or other code paths that you didn't think about.

Personally, I throw exceptions when my code can't do what it is contracted to do. I tend to use try/catch when I'm about to deal with something outside of my process boundary, for instance a SOAP call, a database call, file IO, or a system call. Otherwise, I attempt to code defensively. It's not a hard and fast rule, but it is a general practice.

Scott Hanselman also writes about exceptions in .NET here. In this article he describes several rules of thumb regarding exceptions. My favourite?

You shouldn't throw exceptions for things that happen all the time. Then they'd be "ordinaries".

Kyle
  • 2,773
  • 17
  • 24
  • 5
    here's another point: if exception logging is enabled application-wide, it's better to use exception only for exceptional conditions, not for ordinaries. Otherwise the log will become cluttered and the real error-causing reasons will be obscured. – rwong Mar 11 '12 at 04:12
  • 2
    Nice answer. Note though exceptions have a high performance hit on most platforms. However, as you will have noted with my comments on other answeres, performance is not a consideration in the case of deciding a blanket rule for how to codify something. – mattnz Mar 11 '12 at 04:42
  • 2
    The quote from Scott Hanselman better describes the .Net attitude towards exceptions than "overuse". Performance is frequently mention, but the real argument is the inverse of why you SHOULD use exceptions - it makes the code harder to understand and deal with when an ordinary condition results in an exception. As for Joel, point 1 is actually a positive (invisible means that code shows what it does, not what it doesn't), and point 2 irrelevant (you are already in an inconsistent state, or there shouldn't be an exception). Still, +1 for "can't do what it has been asked to do". – jmoreno Mar 11 '12 at 17:14
  • +1. I like the idea of exception handling, but I try to use error checking as the first option, not only for avoiding "exception laziness" (using try..catch as a replacement for a better program logic), but also because it avoids some dangerous bugs: e.g. at my workplace the function (not python) that queries the DB returns false (no exception) when the SQL query fails, and if you try to you use an uninitialized result set it throws an exception, but maybe you have it "initialized" of previous use (e.g. when it fails in the middle of a loop), so instead of failing you are using the wrong data. – Alberto Martinez Mar 11 '12 at 17:28
  • 5
    While this answer is fine for .Net, it isn't very [pythonic](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#eafp-vs-lbyl), so given that this is a python question, I fail to see why [Ivc](http://programmers.stackexchange.com/a/139184/22493)'s answer hasn't been voted up more. – Mark Booth Mar 13 '12 at 13:57
  • @jmoreno "invisible means that code shows what it does, not what it doesn't" - do you mean the exceptional cases should be invisible so you can focus on the ordinary path? But 90% of writing solid code is handling the exceptional cases properly. I'd argue that this needs to be as visible as possible. I've seen too much code where the off-normal cases are left to a generic exception handler "because you with exceptions you just don't have to worry about it". – Ian Goldby Sep 26 '16 at 09:07
  • 2
    @IanGoldby: no. Exception handling is actually better described as exception recovery. If you can't recover from an exception, then you probably shouldn't have any exception handling code. If method A calls method B which calls C, and C throws, then most likely EITHER A OR B should recover, not both. The decision "if I can't do X I'll do Y" should be avoided if Y requires someone else to finish the task. If you can't finish the task, all that is left is cleanup and logging. Cleanup in .net should be automatic, logging should be centralized. – jmoreno Sep 26 '16 at 10:37
  • @jmoreno Sorry, I don't understand what you just wrote has to do with my comment. (BTW, I agree that you shouldn't handle exceptions you can't reliably recover from. My comment wasn't related to that though - I was suggesting that making the off-normal paths visible in the code that causes them improves clarity.) – Ian Goldby Sep 26 '16 at 10:56
  • 2
    I down-voted not because you wrote anything bad but because of the quote I find offensive. The guy just does not understand the concept of exception handling, comparing it to goto and talking about exit points. Not worthy to repeat. – Martin Maat Oct 27 '18 at 18:14
  • 1
    -1: Because the finally block is only, what, 20 years old? – Kevin Oct 27 '18 at 20:12
  • .Net is .Net; Python is Python. Arguing from one language to the other is simply bad form at best. – Jack Aidley Oct 29 '18 at 13:40
29

A Pragmatic Approach

You should be defensive but to a point. You should write exception handling but to a point. I'm going to use web programming as an example because this is where I live.

  1. Assume all user input is bad and write defensively only to the point of data type verification, pattern checks, and malicious injection. Defensive programming should be things that can potentially happen very often that you cannot control.

  2. Write exception handling for networked services that may fail at times and handle gracefully for user feedback. Exception programming should be used for networked things that may fail from time to time but are usually solid AND you need to keep your program working.

  3. Don't bother to write defensively within your application after the input data has been validated. It's a waste of time and bloats your app. Let it blow up because it's either something very rare that isn't worth handling or it means you need to look at steps 1 and 2 more carefully.

  4. Never write exception handling within your core code that is not dependent on a networked device. Doing so is bad programming and costly to performance. For example, writing a try-catch in the case of out of bounds array in a loop means you didn't program the loop correctly in the first place.

  5. Let everything be handled by central error logging that catches exceptions in one place after following the above procedures. You cannot catch every edge case as that may be infinite, you only need to write code that handles expected operation. That's why you use central error handling as the last resort.

  6. TDD is nice because in a way is try-catching for you without bloat, meaning giving you some assurance of normal operation.

  7. Bonus point is to use a code coverage tool for example Istanbul is a good one for the node as this shows you where you aren't testing.

  8. The caveat to all of this is developer-friendly exceptions. For example, a language would throw if you used the syntax wrong and explain why. So should your utility libraries that the bulk of your code depends on.

This is from experience working in large team scenarios.

An Analogy

Imagine if you wore a spacesuit inside the ISS ALL the time. It would be hard to go to the bathroom or eat, at all. It would be super bulky inside the space module to move around. It would suck. Writing a bunch of try-catches inside your code is kind of like that. You have to have some point where you say, hey I secured the ISS and my astronauts inside are OK so it's just not practical to wear a spacesuit for every scenario that could possibly happen.

Milan
  • 117
  • 1
  • 4
King Friday
  • 688
  • 4
  • 12
  • 6
    The problem with Point 3 is it assumes the program, and programmers working on it, are perfect. They aren't, so it's best program defensively with these in mind. Approriate amounts at key juncture can make the software far more reliable than the "If the inputs are checked everythings perfect" mentality. – mattnz Mar 11 '12 at 07:34
  • that's what testing is for. – King Friday Mar 11 '12 at 15:59
  • 4
    Testing isn't a catch all. I have yet to see a test suite that has 100% code and "environmental" coverage. – Marjan Venema Mar 11 '12 at 16:57
  • see point 5 for reference – King Friday Mar 11 '12 at 17:06
  • 2
    @emeraldcode : Do you want a job with me, I would love to have someone on the team that always, with exception, tests every permutation of every edge case the software will ever execute. Must be nice knowing with abosoluite certainty that your code is perfectly tested. – mattnz Mar 11 '12 at 19:12
  • @mattnz I'm only stating not to program the guts of your known code around exceptions. Its basic. Besides, you don't make enough money to hire me. – King Friday Mar 11 '12 at 20:25
  • 1
    Agree. There are scenarios that both defensive programming and exception handling work well and bad, and we being programmers should learn to recognize them, and choose the technique that best fits. I like Point 3 because I believe we need to assume, at a certain level of the code, that some contextual conditions should be satisfied. These conditions are satisfied by coding defensively in the outer layer of code, and I think exception handling is a fit when these assumptions are broken in the inner layer. – yaobin Feb 19 '16 at 15:05
  • 1
    To confirm, by TDD, you meant "Test-driven Development", right? Thanks in advance! – Milan Oct 07 '21 at 21:33
17

The book's main argument is that the exception version of the code is better because it will catch anything that you might have overlooked if you tried to write your own error checking.

I think this statement is true only in very specific circumstances - where you don't care if the output is correct.

There is no doubt that raising exceptions is a sound and safe practice. You should do so whenever you feel there's something in the current state of the program that you (as a developer) cannot, or don't want to, deal with.

Your example, however, is about catching exceptions. If you catch an exception, you're not protecting yourself from scenarios you might have overlooked. You are doing precisely the opposite: you assume that you haven't overlooked any scenario that might have caused this type of exception, and therefore you're confident that it's alright to catch it (and thus prevent it from causing the program to exit, as any uncaught exception would).

Using the exception approach, if you see ValueError exception, you skip a line. Using the traditional non-exception approach, you count the number of returned values from split, and if it's less than 2, you skip a line. Should you feel more secure with the exception approach, since you may have forgotten some other "error" situations in your traditional error check, and except ValueError would catch them for you?

This depends on the nature of your program.

If you're writing, for example, a web browser or a video player, a problem with inputs should not cause it to crash with an uncaught exception. It's far better to output something remotely sensible (even if, strictly speaking, incorrect) than to quit.

If you're writing an application where correctness matters (such as business or engineering software), this would be a terrible approach. If you forgot about some scenario that raises ValueError, the worst thing you can do is to silently ignore this unknown scenario and simply skip the line. That's how very subtle and costly bugs end up in software.

You might think that the only way you can see ValueError in this code, is if split returned only one value (instead of two). But what if your print statement later starts using an expression that raises ValueError under some conditions? This will cause you to skip some lines not because they miss :, but because print fails on them. This is an example of a subtle bug I was referring to earlier - you would not notice anything, just lose some lines.

My recommendation is to avoid catching (but not raising!) exceptions in the code where producing incorrect output is worse than exiting. The only time I'd catch an exception in such code is when I have a truly trivial expression, so I can easily reason what may cause each of the possible exception types.

As to the performance impact of using exceptions, it is trivial (in Python) unless exceptions are encountered frequently.

If you do use exceptions to handle routinely occurring conditions, you may in some cases pay a huge performance cost. For example, suppose you remotely execute some command. You could check that your command text passes at least the minimum validation (e.g., syntax). Or you could wait for an exception to be raised (which happens only after the remote server parses your command and finds a problem with it). Obviously, the former is orders of magnitude faster. Another simple example: you can check whether a number is zero ~10 times faster than trying to execute the division and then catching ZeroDivisionError exception.

These considerations only matter if you frequently send malformed command strings to remote servers or receive zero-valued arguments which you use for division.

Note: I assume you would use except ValueError instead of the just except; as others pointed out, and as the book itself says in a few pages, you should never use bare except.

Another note: the proper non-exception approach is to count the number of values returned by split, rather than search for :. The latter is far too slow, since it repeats the work done by split and may nearly double the execution time.

max
  • 1,075
  • 11
  • 19
7

As a general rule, if you know a statement could generate an invalid result, test for that and deal with it. Use exceptions for things you do not expect; stuff that is "exceptional". It makes the code clearer in a contractual sense ("should not be null" as an example).

Ian
  • 5,462
  • 22
  • 26
1

Use what ever works well in..

  • your chosen programming language in terms of code readability and efficiency
  • your team and the set of agreed code conventions

Both exception handling and defensive programming are different ways of expressing the same intent.

Sri
  • 129
  • 3
0

TBH, it doesn't matter if you use the try/except mechanic or an if statement check. You commonly see both EAFP and LBYL in most Python baselines, with EAFP being slightly more common. Sometimes EAFP is much more readable/idiomatic, but in this particular case I think it's fine either way.

However...

I'd be careful using your current reference. A couple of glaring issues with their code:

  1. The file descriptor is leaked. Modern versions of CPython (a specific Python interpreter) will actually close it, since it's an anonymous object that's only in scope during the loop (gc will nuke it after the loop). However, other interpreters do not have this guarantee. They may leak the descriptor outright. You almost always want to use the with idiom when reading files in Python: there are very few exceptions. This isn't one of them.
  2. Pokemon exception handling is frowned upon as it masks errors (i.e. bare except statement that doesn't catch a specific exception)
  3. Nit: You don't need parens for tuple unpacking. Can just do role, lineSpoken = eachLine.split(":",1)

Ivc has a good answer about this and EAFP, but is also leaking the descriptor.

The LBYL version is not necessarily as performant as the EAFP version, so saying that throwing exceptions is "expensive in terms of performance" is categorically false. It really depends on the type of strings you're processing:

In [33]: def lbyl(lines):
    ...:     for line in lines:
    ...:         if line.find(":") != -1:
    ...:             # Nuke the parens, do tuple unpacking like an idiomatic Python dev.
    ...:             role, lineSpoken = line.split(":",1)
    ...:             # no print, since output is obnoxiously long with %timeit
    ...:

In [34]: def eafp(lines):
    ...:     for line in lines:
    ...:         try:
    ...:             # Nuke the parens, do tuple unpacking like an idiomatic Python dev.
    ...:             role, lineSpoken = eachLine.split(":",1)
    ...:             # no print, since output is obnoxiously long with %timeit
    ...:         except:
    ...:             pass
    ...:

In [35]: lines = ["abc:def", "onetwothree", "xyz:hij"]

In [36]: %timeit lbyl(lines)
100000 loops, best of 3: 1.96 µs per loop

In [37]: %timeit eafp(lines)
100000 loops, best of 3: 4.02 µs per loop

In [38]: lines = ["a"*100000 + ":" + "b", "onetwothree", "abconetwothree"*100]

In [39]: %timeit lbyl(lines)
10000 loops, best of 3: 119 µs per loop

In [40]: %timeit eafp(lines)
100000 loops, best of 3: 4.2 µs per loop
-5

Basically Exception handling supposed to be more appropriate for OOP languages.

Second point is the performance, because you don't have to execute eachLine.find for every line.

Elalfer
  • 99
  • 3
-7

I think defensive programming hurts performance. You should also catch only the exceptions you are going to handle, let the runtime deal with the exception you don't know how to handle.

Manoj
  • 131
  • 2
  • 7
    Yet anotehr -1 for worring about performance over readablity, maintainablity bla bla bla. Performance is not a reason. – mattnz Mar 11 '12 at 04:41
  • May I know why are you running around distributing -1s without explaining? Defensive programming means more lines of code, that means poorer performance. Anybody care to explain before shoting down the score? – Manoj Mar 11 '12 at 16:23
  • 3
    @Manoj: Unless you've measured with a profiler and found a block of code to be unacceptably slow, code for readability and maintainability far before performance. – Daenyth Mar 11 '12 at 17:28
  • What @Manoj said with the addition that less code universially means less to work on when debugging and maintaining. The hit on developer time for anything less that perfect code is extremely high. I am assuming (like me) you don't write perfect code, forgive me if I am wrong. – mattnz Mar 11 '12 at 19:08
  • I am arguing the use of Exceptions Vs Defensive code. This means assuming that input is valid and deal with exceptions as exceptions. In Defensive programming, you recognize rogue input fast but impact the processing time of the correct input. In Exception handling, your code may not be optimized in returning errors fast, but is faster in processing the correct input. It's an obvious choice whether you want to optimize for rogue input or the correct input. In either case, your code perfectly recognizes and handles rogue input. – Manoj Mar 12 '12 at 01:26
  • About readibility and maintainability, I think if you don't write defensive code, that's more readable and maintainable (less code, or less cluttered code). – Manoj Mar 12 '12 at 01:31
  • Don't just go by my opinion, read following articles: http://danielroop.com/blog/2009/10/15/why-defensive-programming-is-rubbish/ http://c2.com/cgi/wiki?OffensiveProgramming http://www.erlang.se/doc/programming_rules.shtml#HDR11 And please, don't shot down the score without understanding the point. @mattnz: It's my life long pursuit to continuously learn to write prefect code, it's just that the definitions of perfect code keeps changing. – Manoj Mar 12 '12 at 01:40
  • 2
    Thanks for the link - Interesting read that I have to agree with it, to a point... Working on a life critical systems, as I do "The system printed the stack trace, so we know exactly why those 300 people died needlessly....." isn't really going to go down too well in the witness stand. I suppose it's one of those things where every situation has a different appropriate response. – mattnz Mar 12 '12 at 02:30
  • If you understand and agree now, I will appreciate reversing the -1. Thanks. – Manoj Mar 12 '12 at 03:33
  • @Manoj: You're conflating the concept of *correctness* with performance. Handling incorrect input correctly is a matter of functionality. I didn't downvote, but I'd imagine those who did, did so because your answer says nothing about functional correctness, only about placing performance before maintainability. – Daenyth Mar 12 '12 at 19:07
  • There are two ways of handling incorrect input: 1. Check input up-front, 2. Don't check up-front but handle exceptions. In both ways, you are not compromising on the functionality. Garbage input still gets garbage output. What's the difference: #1 let's you report errors fast, but involves extra lines of code. #2 may be slow in reporting errors, but doesn't involve extra checks for legit input. The point I am arguing here is that in most cases we should optimize for the actual processing and not the error reporting. In both cases program correctness, tightness, functionality are same. – Manoj Mar 12 '12 at 21:22