
Develop the three great virtues of a programmer: laziness, impatience, and hubris - ingve
http://raganwald.com/2016/04/15/laziness-is-a-virtue.html
======
mattmcknight
"To distinguish between things that have the same interfaces, but also have
semantic or other contractual differences, we need types."

I don't see that "types" solve this problem, as there are far more varieties
of function than lazy and eager, which themselves are not always consistent.
Perhaps Eiffel is a better guide for the sort of pre-condition / post-
condition assumptions of a function, if you want to go down this route.

[https://en.wikipedia.org/wiki/Eiffel_(programming_language)#...](https://en.wikipedia.org/wiki/Eiffel_\(programming_language\)#Design_by_contract)

~~~
lmm
Contracts that are enforced at compile time _are_ types, whether you call them
that or not.

~~~
asQuirreL
Some types are not (and cannot be) enforced at compile time [0]. In fact, what
makes something a type is not anything internal to the type itself, but its
participation in a larger type system or type theory, so in that sense, almost
any abstract model of computation could give rise to types, as long as it was
sound.

[0]:
[https://en.wikipedia.org/wiki/Dependent_type](https://en.wikipedia.org/wiki/Dependent_type)

~~~
gergoerdi
Wait, what? The whole point of dependently-typed languages is that programs
you write in them are fully typechecked at compile time.

~~~
asQuirreL
In a dependently typed language, types may rely upon arbitrary terms that are
computed at runtime, and yet you are correct that dependently typed programs
are checked statically. This is reconciled by moving the burden of proof: if a
property can only be checked at runtime, then responsibility for that property
shifts from the typechecker to the program.

The program must verify the property at runtime and create a witness to the
proof, which it can then pass to functions to say that it has done the
necessary work.
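The shape of that witness-passing pattern can be sketched even in plain
JavaScript (the `NonEmpty` wrapper here is hypothetical, and nothing is
checked statically; it only illustrates moving the proof into the program):

```javascript
// Hypothetical sketch: a runtime check that produces a "witness".
// Downstream functions accept only the witness, so by convention
// (and, in a dependently typed language, by the typechecker) the
// check is guaranteed to have happened.
class NonEmpty {
  constructor(items) {
    this.items = items; // only fromArray should call this
  }
  // The runtime "proof": returns a witness, or null on failure.
  static fromArray(xs) {
    return xs.length > 0 ? new NonEmpty(xs) : null;
  }
}

// head demands the witness, so it never re-checks emptiness.
function head(nonEmpty) {
  return nonEmpty.items[0];
}

const proof = NonEmpty.fromArray([10, 20, 30]);
if (proof !== null) {
  console.log(head(proof)); // 10
}
console.log(NonEmpty.fromArray([])); // null
```

JavaScript can't stop you from calling the constructor directly, which is
exactly the gap a real type system closes.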

So I guess the answer is yes and no. I would say the whole point of
dependently-typed languages is that we can statically determine that
_somebody_ is maintaining correctness, be it at compile time or at run time.

------
rossta
It's great to see generators presented in an engaging and approachable way. I
like the layering of examples and the comparisons between eager and lazy
functions. I don't find generators an easy concept for most people to grasp,
but the payoff is substantial since they have so many useful applications,
not just in JavaScript.

------
Animats
Lazy evaluation is only a win if you're never going to use the result.

Some systems can figure out by themselves when lazy evaluation is useful.
Consider this SQL:

    
    
        SELECT * FROM tab ORDER BY score DESC LIMIT 1;
    

Assume there's no index for "score". The brute-force approach is to sort "tab"
by "score", then take the first element. This is O(N log N) and requires
generating a temporary file. Most SQL implementations are smarter than that,
and will make one linear pass, for O(N) time. Some will do that for small
LIMIT values greater than 1.
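The one-pass plan can be sketched in the article's JavaScript (the
`topByScore` helper is illustrative, not any engine's actual implementation):

```javascript
// One linear pass answering `ORDER BY score DESC LIMIT 1` without
// sorting: O(N) time, O(1) space, no temporary file.
function topByScore(rows) {
  let best = null;
  for (const row of rows) {
    if (best === null || row.score > best.score) {
      best = row;
    }
  }
  return best;
}

const tab = [
  { id: 1, score: 7 },
  { id: 2, score: 42 },
  { id: 3, score: 19 },
];
console.log(topByScore(tab)); // { id: 2, score: 42 }
```

For a small LIMIT k greater than 1, the same idea keeps a k-element buffer
of the best rows seen so far instead of a single `best`.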

~~~
bad_user
The example you gave is not of lazy evaluation. An "ORDER BY" that's truly
lazy, followed by a LIMIT, would behave like Quickselect [1], meaning that the
complexity would be O(n), but not as a special optimization and definitely not
for the reasons you mention. Sorting in Haskell behaves like this.

When dealing with functional programming (e.g. pure functions, immutable data-
structures, referential transparency), sooner or later you need to deal with
recursive algorithms and data-structures. Usually tail-recursive, so they take
constant stack space, but recursive nonetheless. And when dealing with
recursive algorithms or data-structures, you end up wanting laziness, because
otherwise you cannot express the algorithms that you want to express.

You mentioned LIMIT. This operation can be expressed in terms of "foldRight",
a really, really useful operation for functional programming. But here's the
catch: if it's not lazy, then it's not going to work for infinite streams (and
it should), it will need O(n) memory and depending on implementation, it will
probably blow up your stack. My current language is Scala. And in Scala the
"foldRight" implemented on the standard collections is totally useless. Which
is a pity really.

Going back to your original assertion, laziness isn't really about preventing
a result from being evaluated, although sometimes that's a useful side effect.
No, lazy evaluation is about being able to short-circuit the iteration ;-)

[1]
[https://en.wikipedia.org/wiki/Quickselect](https://en.wikipedia.org/wiki/Quickselect)
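In the article's JavaScript, that short-circuiting falls out naturally from
generators; a sketch of a lazy `take` over an infinite stream (names
illustrative):

```javascript
// An infinite, lazy stream of naturals: nothing is computed until
// a consumer asks for the next element.
function* naturals() {
  for (let n = 0; ; n++) yield n;
}

// Lazy take: pulls at most n elements, then stops the iteration
// entirely -- the short-circuit described above.
function* take(n, iterable) {
  if (n <= 0) return;
  let taken = 0;
  for (const x of iterable) {
    yield x;
    if (++taken >= n) return;
  }
}

console.log([...take(5, naturals())]); // [0, 1, 2, 3, 4]
```

An eager `take` over `naturals()` would never terminate; the laziness is what
makes the infinite stream usable at all.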

~~~
cousin_it
> _When dealing with functional programming (e.g. pure functions, immutable
> data-structures, referential transparency), sooner or later you need to deal
> with recursive algorithms and data-structures. Usually tail-recursive, so
> they take constant stack space, but recursive nonetheless. And when dealing
> with recursive algorithms or data-structures, you end up wanting laziness,
> because otherwise you cannot express the algorithms that you want to
> express._

This might be tangential to your point, but tail recursion doesn't really work
with laziness, e.g. foldl in Haskell takes linear space.

~~~
bad_user
If foldLeft is strict, like in Scala, then it needs constant stack space, but
then you can't short-circuit it so you need to traverse the whole list to
return a result, which might lead to O(n) space depending on what you want to
get out of it. And by tail recursion I'm not necessarily referring to the
actual call-stack. Maybe I'm a little loose with the terms here.

Tail-recursive algorithms are those that can use constant memory (stack or
heap), and which in a language like Scala can be translated into a loop or a
trampoline, whereas the genuinely recursive algorithms are those that really
need some sort of _stack_ to work, one that grows in direct proportion to the
input size. Use of a stack is the definition of recursion in algorithms books
like Cormen et al. For example, a depth-first traversal of a balanced tree
will really, really need an O(log2 n) stack, regardless of whether evaluation
is strict or lazy or whatever language tricks you can pull.

Does that make any sense? :-)
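That stack requirement can be made visible by writing the traversal with an
explicit stack instead of the call stack; a JavaScript sketch (names
illustrative):

```javascript
// In-order traversal with an explicit stack. The stack grows to
// the depth of the tree -- O(log2 n) for a balanced tree -- and no
// evaluation strategy avoids that bookkeeping.
function* inOrder(node) {
  const stack = [];
  let current = node;
  while (current !== null || stack.length > 0) {
    while (current !== null) {
      stack.push(current); // remember the path back up
      current = current.left;
    }
    current = stack.pop();
    yield current.value;
    current = current.right;
  }
}

const leaf = (value) => ({ value, left: null, right: null });
const tree = { value: 2, left: leaf(1), right: leaf(3) };
console.log([...inOrder(tree)]); // [1, 2, 3]
```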

------
KerrickStaley
The algorithm discussed in this post isn't actually the Sieve of Eratosthenes;
it's an inefficient algorithm with performance worse than trial division. See
[https://www.cs.hmc.edu/~oneill/papers/Sieve-
JFP.pdf](https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf)

~~~
braythwayt
The paper discusses the fundamental difference between the Sieve of
Eratosthenes and Trial Division, then goes on to discuss a number of ways that
the Sieve of Eratosthenes can be implemented in a pure functional style with
greater and greater optimizations.

Since I wanted to talk about laziness and not priority queues, I eschewed
trying to write the fastest possible implementation in favour of the simplest
implementation that is still recognizable as “cross off every nth number.”

If you feel that the implementation is inefficient, I agree. But if you feel
it isn't actually the Sieve of Eratosthenes in some fundamental way, please
explain the difference in a little more detail; it would be instructive to
share with HN and me.

~~~
cousin_it
As someone who's also concerned with precision, I'd say your sieve is about
25% as bad as an O(n^2) quicksort. That's not horrible, whether you think it's
okay is up to you.

~~~
braythwayt
When writing about X, quite often the algorithm that communicates “X” most
succinctly ignores considerations “Y” and “Z.”

Sometimes, Y and Z would just get in the way. But then again, sometimes their
omission distracts the reader and provokes a lot of bikeshedding.

There is no easy answer, and I often get this balance wrong.

This is, I think, what makes some writers so brilliant: They find a way to
present “X” in a clear and strong way without making an absolute hash of “Y”
and “Z.”

Such writing is a treasure.

------
jewbacca
9 comments in this thread as of the time I'm posting this, and only 4 of them
got past the title.

~~~
partiallypro
I guess you could say they were lazy and too impatient to read the article;
but had enough hubris to comment.

~~~
ikeboy
Relevant:
[http://www.amazon.com/gp/product/B0049U444U](http://www.amazon.com/gp/product/B0049U444U)

(Disclaimer: haven't read past the free preview.)

------
luismarques
If you want to see how this idea is taken to far more sophisticated levels,
check out D's ranges and algorithms. This article only covers the equivalent
of input iterators/ranges. You can also find in D sophisticated ways to deal
with the last part of the article, regarding how to ascertain the different
capabilities of your range/type, in ways that go beyond the traditional type
system and OOP concepts.

edit: (also, D's lazy keyword, which performs the transformation described in
the article automatically)

~~~
jewbacca
It also wouldn't be a complete discussion of the subject without at least
mentioning Haskell, in which _everything_ is lazy by default:

    
    
        numbers = [0,1,2,3,4,5,6,7,8,9]
        take 5 numbers
        -- [0,1,2,3,4]
    
        numbers = [0..]
        take 5 numbers
        -- [0,1,2,3,4]
    
        evenNumbers = map (* 2) numbers
        take 5 evenNumbers
        -- [0,2,4,6,8]
    
        last evenNumbers
        -- <<infinite loop>>
    

I used simple list stuff in this illustration, but _everything_ is lazy. I/O
most notably (and sometimes problematically). Nothing runs until it's
forced[0], and none of this requires any special annotation or plumbing.

\----

[0] Yes, I know. But it's subtle, and we're in public here.
[https://wiki.haskell.org/Lazy_vs._non-
strict](https://wiki.haskell.org/Lazy_vs._non-strict)

~~~
JoshTriplett
Haskell's laziness can also be a notorious source of "space leaks": an
algorithm that appears O(1) in space, such as summing up a list of numbers,
can actually use O(n) space by accumulating unevaluated invocations of (+) in
memory. With more complex data structures, the proportion of expected memory
usage to actual memory usage can get even worse.

In larger Haskell programs, I've found that to be the most challenging issue
to debug: "why does my program use way too much memory?"
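The same shape can be simulated in JavaScript by deferring every addition
behind a closure (a hypothetical `lazySum`; the chain of retained closures is
the analogue of foldl's unevaluated (+) thunks):

```javascript
// A sketch of the space leak: a "lazy" sum that defers every (+)
// builds a chain of n closures before anything is evaluated.
function lazySum(numbers) {
  let thunk = () => 0;
  for (const n of numbers) {
    const prev = thunk;        // capture the previous thunk...
    thunk = () => prev() + n;  // ...and wrap it in another closure
  }
  return thunk; // nothing has been added yet; O(n) closures retained
}

// A strict sum keeps a single evaluated number at every step.
function strictSum(numbers) {
  let acc = 0;
  for (const n of numbers) acc += n;
  return acc;
}

const xs = [1, 2, 3, 4, 5];
console.log(lazySum(xs)()); // 15, but only after forcing the whole chain
console.log(strictSum(xs)); // 15, with O(1) intermediate state
```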

~~~
tamana
Practically, the way to go here is: make your data structures strict (with
"!") and make your control structures lazy.

So, don't create a lazy list of numbers if you intend to sum it; use a
non-lazy data structure.

The trick is that Haskell's common default structures are lazy.

------
riffraff
Not 100% IT, but: if you have not yet read Larry Wall's "Diligence, Patience,
and Humility" [0], I strongly advise you to. It's just... great.

[0]
[http://www.oreilly.com/openbook/opensources/book/larry.html](http://www.oreilly.com/openbook/opensources/book/larry.html)

~~~
RobertKerans
That's great, thanks for posting the link. Larry Wall has a satisfying writing
style, the little touches of humour are very well judged.

------
jph
Lazy is a win for two reasons. The article talks about the first win: the
code can skip evaluating items that are never reached. The more important
win, IMHO, is in time-sensitive areas such as UI/UX and heuristics.

For example, with UI/UX and lazy loading, you can use lazy initialization to
get your UI on the screen faster, by deferring a bunch of content
initialization until after the initial render.

For example with heuristics and lazy calculations, you can use techniques such
as successive approximations, caching of recent similar results, eventual
consistency, and partial filling, so you can provide decent information to the
user.
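A minimal sketch of that kind of lazy initialization in JavaScript (the
`lazy` helper is hypothetical, not from any particular framework):

```javascript
// Defer an expensive computation until (and unless) its result is
// first needed, then cache it so later reads are free.
function lazy(init) {
  let computed = false;
  let value;
  return () => {
    if (!computed) {
      value = init();
      computed = true;
    }
    return value;
  };
}

let initialized = false;
const heavyContent = lazy(() => {
  initialized = true; // stands in for expensive setup work
  return "rendered content";
});

console.log(initialized);    // false -- the UI can paint first
console.log(heavyContent()); // "rendered content"
console.log(initialized);    // true -- initialized exactly once
```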

~~~
techbio
It can cause problems while appearing to satisfy the desire for
responsiveness. For example, on the iPhone, the list of recent messages or
calls may update in the middle of a touch, so the input location (the thumb)
actually initiates a call or brings up a thread the original display never
intended.

Showing what had been there, then changing what a screen location does,
creates a UX of "shifting sands".

This often happens with popup ads, which is annoying enough, but in an iOS
app itself it is inexcusable.

------
rsp1984
I found the laziness part covered in the article, but couldn't find anything
about impatience or hubris. Am I just reading it wrong, or is the title a bit
misleading?

~~~
Retra
The article is talking about lazy evaluation, not lazy programmers. And thus
it's not talking about impatience or hubris either, because it's not talking
about that quote at all, just using it because it contains the word 'lazy' in
a programming context.

Lazy evaluation is probably better named "deferred evaluation", and is more
analogous to procrastination than to laziness.

------
fao_
> If JavaScript was lazy, it would not evaluate 2+3 in the expression ifThen(1
> === 0, 2 + 3)

It depends. Haskell is lazy, but I think it would still constant fold that
expression.

~~~
chrisseaton
If Haskell were lazy, then it wouldn't evaluate 2+3, but Haskell is non-
strict, rather than lazy, so it can evaluate it, or not.

~~~
hvidgaard
Haskell is lazy, but the compiler can be smart enough to know that 2+3 is
semantically equal to 5 in its model of computation. This is without
violating the lazy nature of the language. So it can (and will, I believe,
but I haven't checked) fold any such expressions to their constant value when
compiling for performance reasons.

------
tombert
Lazy evaluation is what drew me into Haskell a few years ago, when someone
showed me how you could use it to beautifully read a file as if it were in-
memory, but done entirely as a stream. I thought that was amazing and I was
sold.

~~~
lumpypua
It's pretty, but lazy IO is evil:

[https://www.reddit.com/r/haskell/comments/1e8k3k/three_examp...](https://www.reddit.com/r/haskell/comments/1e8k3k/three_examples_of_problems_with_lazy_io/c9xyxxy)

~~~
tombert
I should have mentioned that in my initial comment: lazy IO is beautiful and
OK for a simple application, but it's smarter to use a streaming library like
Pipes or Conduit for anything big.

------
saulrh
Doesn't using function* make laziness part of the type system, thereby
allowing us to detect when we're mixing lazy and eager functions, in the same
way we might detect mixing integer and floating-point math in variables?
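In today's JavaScript the distinction is visible, but only at runtime rather
than statically; a sketch of detecting it (the `isLazy` helper and range
functions are illustrative):

```javascript
// function* produces objects with a distinct prototype chain we
// can test for at runtime.
const GeneratorFunction = Object.getPrototypeOf(function* () {}).constructor;

function isLazy(fn) {
  return fn instanceof GeneratorFunction;
}

function* lazyRange(n) {
  for (let i = 0; i < n; i++) yield i;
}

function eagerRange(n) {
  return Array.from({ length: n }, (_, i) => i);
}

console.log(isLazy(lazyRange));  // true
console.log(isLazy(eagerRange)); // false
```

For a genuinely static check, closer to the integer/float analogy, TypeScript
can distinguish `Generator<number>` from `number[]` at compile time.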

------
j45
Really enjoyed reading this.

The funny thing about laziness is that you have to work hard to reach a state
of meaningful and productive laziness, one that isn't cutting corners and
that strikes the right balance between risk and technical debt.

------
throweway
Quick poll: who prefers laziness by default. Who prefers strictness by
default?

~~~
hackaflocka
I prefer the laziness of PHP and Python. JavaScript threw me off with its
eagerness. Took a while getting used to.

~~~
spicyj
In what ways are PHP and Python more lazy than JS?

~~~
RussianCow
I can't speak for PHP, but in Python (version 3 at least), a lot of the built-
in functions that return iterators, like `map` and `range`, are lazy.
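The contrast shows up directly in JavaScript: `Array.prototype.map` is eager,
while a generator-based map behaves like Python 3's (the `lazyMap` helper is
illustrative):

```javascript
// Array.prototype.map is eager: the callback runs for every
// element immediately.
let eagerCalls = 0;
const squares = [1, 2, 3, 4].map((x) => {
  eagerCalls++;
  return x * x;
});
console.log(eagerCalls); // 4

// A generator-based map is lazy, like Python 3's map(): the
// callback only runs as elements are consumed.
function* lazyMap(fn, iterable) {
  for (const x of iterable) yield fn(x);
}

let lazyCalls = 0;
const lazySquares = lazyMap((x) => {
  lazyCalls++;
  return x * x;
}, [1, 2, 3, 4]);

console.log(lazyCalls);              // 0 -- nothing consumed yet
console.log(lazySquares.next().value); // 1
console.log(lazyCalls);              // 1 -- only the element we asked for
```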

~~~
daveguy
Also in Python 2.7 (and I assume py3k too), conditionals have lazy evaluation
if you have the statement:

    
    
      if False and (fibonacci(1000)/fibonacci(1000)):
          print "Nope"
    

It will immediately evaluate to False and print "Nope" rather than checking
the (fibonacci(1000)/fibonacci(1000)) portion of the conditional.

EDIT: Side note, is there a shortcut for displaying code snippets within HN
posts? Extra returns (what I usually use for clarification by formatting) do
not display very well.

~~~
Buge
I think there is a bug in that code. It will not print Nope.

~~~
daveguy
You are right. I meant immediately evaluate False and not print Nope.

------
dluan
Just curious re: the findWith() example, how would you find the last number in
a list that is greater than the min while still being lazy?

~~~
throwanem
You wouldn't, because you can't be sure you've found the last value satisfying
the predicate until you reach the end of the list. To do it lazily, you'd have
to reverse the list first, but that's O(n), so not practical with very long
lists.
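One eager pass that remembers the most recent match avoids the reversal, at
the cost of reading the whole list (the `findLastWith` helper is
hypothetical, not from the article):

```javascript
// Finding the *last* match can't short-circuit, but it doesn't
// need a reversal either: one pass, remembering the most recent
// element that satisfied the predicate. O(n) time, O(1) space.
function findLastWith(predicate, iterable) {
  let last; // stays undefined if nothing matches
  for (const element of iterable) {
    if (predicate(element)) last = element;
  }
  return last;
}

console.log(findLastWith((n) => n > 3, [1, 5, 2, 9, 4, 3])); // 4
console.log(findLastWith((n) => n > 100, [1, 5, 2]));        // undefined
```

It still consumes the entire input, so it can never be lazy in the
short-circuiting sense, which is the commenter's point.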

------
OneOneOneOne
Really nice site for small screens!

------
ruffni
Is it possible that

    
    
        for (const element of list) {
    

should actually be

    
    
        for (let element of list) {
    

?

The former raises "SyntaxError: invalid for/in left-hand side".

------
api
The secret to good UX design is to imagine yourself as the user and imagine
yourself raging against the hassle of having to do this.

~~~
throweway
Even better is to talk to users. They may be enraged by different things than
you or me.

~~~
inopinatus
I like to include a bit of "watch someone use your product".

And I mean doing so with their permission, and at your request, but without
any prior instruction, and you're not allowed to intervene or supply hints.

------
mentos
Laziness is the mother of necessity...

------
dyscrete
Even better!

    
    
         ifThen(1 === 0, 5)
    

No need to calculate 2 + 3!

~~~
slaman
False. You did the arithmetic instead of the computer; this is more work.

And what if, instead of an addition, it was a function call...
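In JavaScript, that looks like passing a thunk (a zero-argument function); a
sketch of a thunk-taking variant of the article's `ifThen` (this signature is
an assumption, not the article's):

```javascript
// If the alternative is a thunk, the computer -- not the
// programmer -- skips the work when the condition is false.
function ifThen(condition, thunk) {
  return condition ? thunk() : undefined;
}

let evaluated = false;
const result = ifThen(1 === 0, () => {
  evaluated = true; // stands in for an expensive call
  return 2 + 3;
});

console.log(result);    // undefined
console.log(evaluated); // false -- 2 + 3 was never computed
```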

------
franciscop
Oh but you can become even lazier:

    
    
      function compare(list, val){
        return list.some(el => el === val);
      }
    

Read the documentation (preferably from MDN [1]) for Array methods. You'll
take your "laziness" to the next level: .filter(), .some(), .every(),
.reduce(), .concat() [vs .push()], etc.

[1] [https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Refe...](https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Global_Objects/Array/prototype)
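Worth noting that `.some()` itself short-circuits, which is part of the
laziness on offer:

```javascript
// .some() stops calling the predicate as soon as one element
// matches, so the rest of the array is never examined.
let checked = 0;
const found = [1, 2, 3, 4, 5].some((el) => {
  checked++;
  return el === 2;
});

console.log(found);   // true
console.log(checked); // 2 -- elements 3..5 were never examined
```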

