I've just learnt how lazy evaluation works, and I was wondering: why isn't lazy evaluation used in all software currently produced? Why is eager evaluation still used?
-
Here's an example of what can happen if you mix mutable state and lazy evaluation: http://alicebobandmallory.com/articles/2011/01/01/lazy-evaluation-is-no-friend-of-mutable-state – Jonas Elfström Mar 15 '13 at 09:08
-
@JonasElfström: Please, do not confuse mutable state with one of its possible implementations. Mutable state can be implemented using an infinite, lazy stream of values. Then you do not have the problem of mutable variables. – Giorgio Jun 07 '15 at 09:55
-
In imperative programming languages, "lazy evaluation" requires a conscious effort from the programmer. Generic programming in imperative languages has made this easy, but it will never be transparent. The other side of the question raises another question: "Why aren't functional programming languages used everywhere?", and the current answer is simply that they aren't, as a matter of current affairs. – rwong Jun 11 '15 at 06:28
-
Functional programming languages aren't used everywhere for the same reason we don't use hammers on screws: not every problem can easily be expressed in a functional input -> output manner. A GUI, for example, is better suited to being expressed in an imperative manner. – ALXGTV Jun 11 '15 at 06:36
-
Further, there are two classes of functional programming languages (or at least both claim to be functional): the imperative functional languages, e.g. Clojure and Scala, and the declarative ones, e.g. Haskell and OCaml. – ALXGTV Jun 11 '15 at 06:38
-
Laziness can force you to sacrifice immutability. https://anilbey.github.io/posts/laziness-immutability-dilemma/ and https://stackoverflow.com/questions/74841526/why-does-stditerpeekablepeek-mutably-borrow-the-self-argument – anilbey Jan 09 '23 at 12:46
6 Answers
Lazy evaluation requires book-keeping overhead: you have to track whether a value has been evaluated yet, and similar things. Eager evaluation always evaluates, so you don't have to know. This is especially true in concurrent contexts.
Secondly, it's trivial to convert eager evaluation into lazy evaluation by packaging it into a function object to be called later, if you so wish.
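A minimal sketch of that conversion, written in Haskell syntax with an explicit unit argument standing in for the function object an eager language would use (the names are hypothetical):

expensive :: Integer
expensive = sum [1 .. 1000000]   -- some costly computation

delayed :: () -> Integer
delayed = \() -> expensive       -- packaged up; nothing has been computed yet

main :: IO ()
main = print (delayed ())        -- evaluated only here, on demand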
Thirdly, lazy evaluation implies a loss of control. What if I lazily evaluated reading a file from a disk? Or getting the time? That's not acceptable.
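Haskell's lazy IO shows how that loss of control can bite; a small sketch (assuming a file data.txt exists):

import System.IO

main :: IO ()
main = do
  h <- openFile "data.txt" ReadMode
  contents <- hGetContents h   -- returns at once; the actual read is deferred
  hClose h                     -- the handle is closed before anything is read
  putStrLn contents            -- prints nothing: the deferred read finds a closed handle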
Eager evaluation can be more efficient and more controllable, and is trivially converted to lazy evaluation. Why would you want lazy evaluation?

-
Lazily reading a file from disk is actually really neat--for most of my simple programs and scripts, Haskell's `readFile` is *exactly* what I need. Besides, converting from lazy to eager evaluation is just as trivial. – Tikhon Jelvis Dec 12 '11 at 08:18
-
Agree with you on everything except the last paragraph. Lazy evaluation is more efficient when there is a chain of operations, and it gives more control over when you actually need the data – SwiftMango Feb 28 '15 at 21:43
-
The functor laws would like to have a word with you regarding "loss of control". If you write pure functions that operate on immutable datatypes, lazy evaluation is a godsend. Languages like Haskell are fundamentally based around the concept of laziness. It's cumbersome in some languages, especially when mixed with "unsafe" code, but you're making it sound like laziness is dangerous or bad by default. It's only "dangerous" in dangerous code. – sara May 17 '16 at 14:35
-
Yes, but do I actually spend all my time writing pure functions that operate on immutable datatypes? No. Wouldn't that actually make it impossible to distinguish between lazy and eager evaluation in any case? – DeadMG Sep 07 '16 at 20:35
-
@DeadMG Not if you care about whether or not your code terminates... What does `head [1 ..]` give you in an eagerly evaluated pure language, because in Haskell it gives `1`? – semicolon Oct 16 '16 at 08:27
-
For many languages, implementing lazy evaluation will at the very least introduce complexity. Sometimes that complexity is needed, and having the lazy evaluation improves overall efficiency--particularly if what is being evaluated is only conditionally needed. However, done poorly it can introduce subtle bugs or hard-to-explain performance problems due to bad assumptions made when writing the code. There's a trade-off. – Berin Loritsch Jun 27 '18 at 14:41
Mainly because lazy code and state can mix badly and cause some hard-to-find bugs. If the state of a dependent object changes, the value of your lazy object can be wrong when it is finally evaluated. It's much better to have the programmer explicitly code the object to be lazy when he/she knows the situation is appropriate.
On a side note, Haskell uses lazy evaluation for everything. This is possible because it's a functional language and doesn't use state (except in a few exceptional circumstances, where it is clearly marked).
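To make the first paragraph concrete, here is a small Haskell sketch (deliberately forcing the issue with unsafeInterleaveIO) of a lazy value going stale under mutation:

import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  ref <- newIORef (1 :: Int)
  lazyVal <- unsafeInterleaveIO (readIORef ref)  -- the read is deferred
  writeIORef ref 2                               -- the state changes first
  print lazyVal                                  -- prints 2, not the 1 you might expect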

-
Yeah, mutable state + lazy evaluation = death. I think the only points I lost on my SICP final were about using `set!` in a lazy Scheme interpreter. >:( – Tikhon Jelvis Dec 12 '11 at 08:14
-
4"lazy code and state can mix badly": It really depends on how you implement state. If you implement it using shared mutable variables, and you depend on the order of evaluation for your state to be consistent, then you are right. – Giorgio Jun 07 '15 at 09:38
Lazy evaluation is not always better.
The performance benefits of lazy evaluation can be great, but it is not hard to avoid most unnecessary evaluation in eager environments. Laziness makes avoiding it easy and complete, but unnecessary evaluation is rarely a major problem in code anyway.
The good thing about lazy evaluation is when it lets you write clearer code; getting the 10th prime by filtering an infinite list of natural numbers and taking the 10th element of that list is one of the most concise and clear ways of proceeding: (Haskell)
numbers = [1 ..]                                             -- infinite list of naturals
isPrime x = x > 1 && all (\y -> x `mod` y /= 0) [2 .. x - 1]
primes = filter isPrime numbers
tenthPrime = primes !! 9                                     -- index 9 = the 10th prime
I believe it would be quite difficult to express things so concisely without laziness.
But laziness isn't the answer to everything. For starters, laziness cannot be applied transparently in the presence of state, and I believe statefulness cannot be automatically detected (unless you are working in, say, Haskell, where state is quite explicit). So, in most languages, laziness needs to be done manually, which makes things less clear and thus removes one of the big benefits of lazy evaluation.
Furthermore, laziness has performance drawbacks, as it incurs a significant overhead of keeping non-evaluated expressions around; they use up storage, and they are slower to work with than plain values. It is not uncommon to find out that you have to eager-ify code because the lazy version is dog slow, and it is sometimes hard to reason about performance.
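The classic Haskell illustration of that overhead is the lazy left fold, which can quietly accumulate millions of unevaluated thunks where the strict variant runs in constant space (a sketch):

import Data.List (foldl')

lazySum   = foldl  (+) 0 [1 .. 10000000 :: Integer]  -- builds a huge chain of thunks first
strictSum = foldl' (+) 0 [1 .. 10000000 :: Integer]  -- forces the accumulator at each step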
As tends to happen, there is no absolute best strategy. Lazy is great if you can write better code by taking advantage of infinite data structures or the other techniques it allows, but eager can be easier to optimize.

-
Would it be possible for a *really* clever compiler to mitigate the overhead significantly, or even take advantage of laziness for extra optimizations? – Tikhon Jelvis Dec 12 '11 at 08:20
Here is a short comparison of the pros and cons of eager and lazy evaluation:
Eager evaluation:
- Potential overhead of needlessly evaluating stuff.
- Unhindered, fast evaluation.

Lazy evaluation:
- No unnecessary evaluation.
- Bookkeeping overhead at every use of a value.
So, if you have many expressions that never have to be evaluated, lazy is better; yet if you never have an expression that does not need to be evaluated, lazy is pure overhead.
Now, let's take a look at real-world software: how many of the functions that you write do not require evaluation of all their arguments? Especially with modern short functions that only do one thing, the percentage of functions that fall into this category is very low. Thus, lazy evaluation would just introduce the bookkeeping overhead most of the time, without the chance to actually save anything.
Consequently, lazy evaluation simply does not pay on average; eager evaluation is the better fit for modern code.
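For contrast, the textbook counterexample, a function that genuinely does not need all of its arguments, looks like this in Haskell (myIf is a hypothetical name):

myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ f = f

result = myIf True 42 (error "never computed")  -- fine lazily; eager evaluation would hit the error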

-
2"Bookkeeping overhead at every use of a value.": I do not think the bookkeeping overhead is bigger than, say, checking for null references in a language like Java. In both cases you need to check one bit of information (evaluated / pending versus null / non-null) and you need to do it every time you use a value. So, yes, there is a overhead, but it is minimal. – Giorgio Jun 07 '15 at 09:43
-
1"How many of the functions that you write do not require evaluation of all their arguments?": This is just one example application. What about recursive, infinite data structures? Can you implement them with eager evaluation? You can use iterators, but the solution is not always as concise. Of course you probably do not miss something that you have never had the chance to use extensively. – Giorgio Jun 07 '15 at 09:46
-
2"Consequently, lazy evaluation simply does not pay on average, eager evaluation is the better fit for modern code.": This statement does not hold: it really depends on what you are trying to implement. – Giorgio Jun 07 '15 at 09:47
-
@Giorgio The overhead may not seem much to you, but conditionals are one of the things modern CPUs suck at: A mispredicted branch usually forces a complete pipeline flush, throwing away the work of more than ten CPU cycles. You don't want unnecessary conditions in your inner loop. Paying ten cycles extra per function argument is almost as unacceptable for performance-sensitive code as coding the thing in Java. You are right that lazy evaluation allows you to pull off some tricks that you can't easily do with eager evaluation. But the vast majority of code does not need these tricks. – cmaster - reinstate monica Jun 07 '15 at 12:53
-
This seems to be an answer out of inexperience with languages with lazy evaluation. For example, what about infinite data structures? – Andres F. Jun 07 '15 at 16:34
-
It seems like your argument basically only applies to C, or languages with very similar performance to C (Ada, Fortran, etc.), seeing as the vast majority of programming is done in languages at about Java's speed (C#, Scala, OCaml, Haskell) or slower (JavaScript, Python, Ruby). Your argument is only valid for a narrow subset of programming. And even in very fast programming languages, you can do some lazy stuff when it is warranted, and a smart compiler can do strictness analysis to frequently get rid of a lot (when it works, basically all) of the overhead. – semicolon Apr 04 '16 at 20:26
-
@semicolon No, my argument is quite generic: You *always* pay for lazy evaluation in terms of performance. Even if your compiler is smart enough to optimize the overhead away, that optimization itself slows down the compilation step. Also, I've come to distrust compilers to magically clean up after me, experience tells me that that fails more often than not. However, you have a point as well: The slower the language, the less significant the costs for lazy evaluation are in comparison. – cmaster - reinstate monica Apr 04 '16 at 21:47
-
@cmaster eh, I guess. I mean even C has limited laziness, such as with boolean operators. Also one very important thing you are forgetting is that laziness often gives you a much cleaner way to implement something, such as with infinite streams. While they should all be implementable strictly using a combination of caching and replacing the lists with function calls, they will be MUCH less clean. (e.g. `fibs = 0 : 1 : zipWith (+) fibs (tail fibs)` in Haskell). Ignoring performance I would say laziness is a significant advantage, but I do agree that on average you will get worse performance. – semicolon Apr 04 '16 at 22:31
-
@cmaster so to sum up I would say for most tasks, the advantages of laziness are worth the on average slightly worse performance (also a solution designed for a strict language should still be decent when done lazily, but the opposite could even lead to non-termination). This is all assuming referential transparency (can't arbitrarily mutate whatever you want), otherwise laziness can break things. – semicolon Apr 04 '16 at 22:41
-
@semicolon Good point about C's logical operators :-) And, yes, even those come with a certain performance hit: They force sequential evaluation of their arguments, introducing much stricter data dependencies into the stream of CPU commands, which leads to non-overlappable latencies of the involved commands, thereby slowing execution down. It's an effect that is hard to appreciate when you have not done any significant assembler programming yourself, but it is actually quite noticeable. – cmaster - reinstate monica Apr 05 '16 at 05:51
-
@cmaster that wasn't my only point, it was a minor afterthought... Also a lot of things force sequential evaluation, such as anything that has any effect on state; even mutating a variable forces sequential evaluation with other changes to that variable. – semicolon Apr 05 '16 at 19:29
-
@semicolon True, variable accesses also have the tendency to produce sequentialization. However, sequentialization due to the short circuit operators tends to be rather severe compared to normal data-dependencies because the latency of the involved processor commands is much higher. Really, from a performance perspective it seems quite odd that the makers of C chose to define that short-circuit behavior. And I really think it was only justified by making stuff like `if(ptr && *ptr == 42)` well defined. – cmaster - reinstate monica Apr 05 '16 at 20:30
-
@semicolon Incidentally, I just stumbled across this StackOverflow question http://stackoverflow.com/a/36415365/2445184 , which touches precisely on the performance impact of the short-circuit operators in C, and they are really more severe than you would think... – cmaster - reinstate monica Apr 05 '16 at 20:38
-
I mean if you read the full answer it looks like clang figured it out. Also that is due to weird behavior from C being C, in a better designed language that kind of thing wouldn't be an issue. – semicolon Apr 06 '16 at 04:19
As @DeadMG noted, lazy evaluation requires book-keeping overhead. This can be expensive relative to eager evaluation. Consider this statement:
i = (243 * 414 + 6562 / 435.0) ^ 0.5 ** 3
This takes a bit of work to calculate. If I use lazy evaluation, then I need to check whether it has been evaluated every time I use it. If this is inside a heavily used tight loop, the overhead increases significantly, but there is no benefit.
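To see where that check comes from, here is a hand-rolled memoizing thunk in Haskell (a sketch; the Cell type is made up for illustration), which makes the per-use bookkeeping explicit:

import Data.IORef

data Cell a = Cell (IORef (Maybe a)) (IO a)

force :: Cell a -> IO a
force (Cell ref compute) = do
  cached <- readIORef ref
  case cached of
    Just v  -> pure v            -- every single use pays for this check
    Nothing -> do                -- first use: evaluate and memoize
      v <- compute
      writeIORef ref (Just v)
      pure v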
With eager evaluation and a decent compiler the formula is calculated at compile time. Most optimizers will move the assignment out of any loops it occurs in if appropriate.
Lazy evaluation is best suited to loading data which will be infrequently accessed and has a high overhead to retrieve. It is therefore more appropriate to edge cases than core functionality.
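In a lazy language that idiom comes for free; a Haskell sketch of a rarely needed, expensive value (expensiveReport is a hypothetical stand-in):

expensiveReport :: String
expensiveReport = unlines (map show [1 .. 100000 :: Int])  -- stand-in for costly retrieval

handle :: Bool -> String
handle ok =
  let report = expensiveReport   -- bound lazily: computed only if demanded
  in if ok then "all good" else report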
In general it is good practice to evaluate things that are frequently accessed as early as possible. Lazy evaluation does not fit this practice: if you will always access something, all lazy evaluation will do is add overhead. The cost of lazy evaluation relative to its benefit shrinks as the item becomes less likely to be accessed.
Always using lazy evaluation also amounts to premature optimization. This is a bad practice, which often results in code that is much more complex and expensive than it might otherwise be. Unfortunately, premature optimization often results in code that performs more slowly than simpler code. Until you can measure the effect of an optimization, it is a bad idea to optimize your code.
Avoiding premature optimization does not conflict with good coding practices. If good practices were not applied, initial optimizations may consist of applying good coding practices, such as moving calculations out of loops.

-
You seem to be arguing out of inexperience. I suggest you read the paper "Why Functional Programming Matters" by Wadler. It devotes a major section to explaining the *why* of lazy evaluation (hint: it has little to do with performance, early optimization or "loading infrequently accessed data", and everything to do with modularity). – Andres F. Jun 07 '15 at 16:31
-
@AndresF I've read the paper you refer to. I agree with the use of lazy evaluation in such cases. Early evaluation may not be appropriate, but I would argue returning the sub-tree for the selected move may have a significant benefit if additional moves can be added easily. However, building that functionality could be premature optimization. Outside functional programming, I have seen significant issues with the use of lazy evaluation, and with the failure to use lazy evaluation. There are reports of significant performance costs resulting from lazy evaluation in functional programming. – BillThor Jun 08 '15 at 02:37
-
Such as? There are reports of significant performance costs when using eager evaluation as well (costs in the form of either unneeded evaluation or program non-termination). There are costs to almost any other (mis)used feature, come to think of it. Modularity itself may come at a cost; the issue is whether it's worth it. – Andres F. Jun 08 '15 at 04:03
If we potentially have to fully evaluate an expression to determine its value, then lazy evaluation can be a disadvantage. Say we have a long list of boolean values and we want to find out if all of them are true:
[True, True, True, ... False]
In order to do this we have to look at every element in the list no matter what, so there is no possibility of lazily cutting evaluation short. We can use a fold to determine whether all the boolean values in the list are true. If we use a right fold, which uses lazy evaluation, we don't get any of the benefits of laziness, because we have to look at every element in the list:
foldr (&&) True [True, True, True, ... False]
> 0.27 secs
A right fold will be much slower in this case than a strict left fold, which does not use lazy evaluation:
foldl' (&&) True [True, True, True, ... False]
> 0.09 secs
The reason is that a strict left fold uses tail recursion, meaning it accumulates the return value instead of building up and storing in memory a large chain of operations. This is much faster than the lazy right fold because both functions have to look at the entire list anyway and the right fold can't use tail recursion. So, the point is: you should use whatever is best for the task at hand.
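The flip side, as a quick sketch, is that when an early False is present the lazy right fold stops immediately, which a strict left fold cannot do at all on an infinite list:

shortCircuit = foldr (&&) True (False : repeat True)   -- returns False at once
-- foldl' (&&) True (False : repeat True) would never terminate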

-
"So, the point is, you should use whatever is best for the task at hand." +1 – Giorgio Jun 07 '15 at 09:51