With respect to the theological view of the question; this is always painful to me.— I am bewildered.— I had no intention to write atheistically. But I own that I cannot see, as plainly as others do, & as I should wish to do, evidence of design & beneficence on all sides of us. There seems to me too much misery in the Haskell world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created Haskell with recursive data structures yet left the maximal size of tuples to arbitrary implementation details, given it a type system too restrictive to include dependent types, or forced the programmer's hand to use the bang operator to manage strictness manually to avoid the consequences of laziness.
Not believing this, I see no necessity in the belief that the Haskell was expressly designed.
I would like to add that just like in our universe, in Haskell entropy only ever increases and never goes down.
I.E., add more functionality to your program and the complexity will go up. This is true for all programming languages of course, but in my limited experience larger Haskell programs seem to become exponentially more complex.
this is an interesting observation. I have various explanations for why that might be the case.
in a commercial setting there is more pressure to deliver code than to review it. combine that with the lack of a benevolent dictator for life (one who has been on the project for its whole life cycle, not just recently), and there is no one with the power to actually say no to changes.
language geeks are novelty seekers. they will use every feature of their language. stronger languages have more features to abuse. so in a commercial setting you will have all the features being used without much thought, and no architectural design that says no, we shouldn't do that.
that's also why I think projects with benevolent dictators for life in the open source world don't fall into these paths even though they use languages that are stronger.
you could restrict yourself to languages that have only one way to be used, like python as it was originally, or use a language with little abstraction power. but you will suffer in other ways, as abstraction power is genuinely useful. accidental complexity has a way of getting in, by expressive means or by social means.
The key is to not write large Haskell programs. Write many Haskell libraries and compose them. This scales infinitely and is an excellent way to build software. But it's also hard to do and takes good & deliberate technical leadership if you want to get 50+ engineers doing it.
If you can reason about f and g that doesn’t mean that you can tell anything useful about f o g, so no, this is a factually incorrect statement. Sure, if we have a separate unit of functionality, write a function for that. But that’s no silver bullet - complexity is unbounded.
>If you can reason about f and g that doesn’t mean that you can tell anything useful about f o g
I'm confused. If you can reason about f and g, there is nothing more to know about f o g -- provided f and g are pure functions.
The whole thing is about making sure side-effects are properly pushed at the boundaries so that you can keep working with pure functions. Haskell just provides more help (or more constraints) to ensure that you work with pure functions. Because if you aren't, all bets are off, and indeed the case of f o g may be a haze.
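A minimal sketch of the point about pure composition (hypothetical functions, not anyone's real code): whatever you know about the parts carries over to the whole, because no hidden state can interfere.

```haskell
-- Two pure functions with simple, provable properties.
double :: Integer -> Integer
double x = 2 * x          -- result is always even

addOne :: Integer -> Integer
addOne x = x + 1          -- increases its argument by exactly one

-- Composing them: (addOne . double) is odd for every input,
-- which follows directly from the two facts above.
composed :: Integer -> Integer
composed = addOne . double

main :: IO ()
main = print (map composed [0 .. 4])  -- [1,3,5,7,9]
```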
g :: (Integer -> Bool) -> Integer -> Integer
g f n = if f n then g f (n + 1) else n  -- is this else branch ever reachable?
Will it ever terminate (or insert any other interesting property that you wish) with f = collatz?
I know this is the atomic bomb case, but you can actually quite easily increase the complexity in realistic programs by just a few compositions, it doesn’t take much.
Here you are defining a new function, not composing two functions.
But if you consider a concrete program implementation of pure functions, this might not be true; for example, f and g each use some amount of RAM, but the RAM used by f o g may exceed your computer's RAM, and the program blows up.
Or, if we work with objects, some values (e.g. internal values of the object, or context values) may be changed, making the final result of f o g different from what you would normally expect.
The promise of working "functionally" is to avoid the second class of problems. The first class is just a problem of computer systems, but pure functions mean that memory should be reclaimable efficiently. And against the limits of computability theory (like the halting problem), nothing works except oracle machines, but no startup has ever delivered a functional one!
Who said anything about termination? The original point was about composition. The guarantees of the component parts are the same as the composition's. It's not clear that either f or g will terminate, and it's the same for f . g
If the halting problem is really your biggest concern when developing software then you're lucky!
> If you can reason about f and g that doesn’t mean that you can tell anything useful about f o g
What? The ability to reason about f o g (for various definitions of o) is the promise of Haskell and one of its entire ecosystem's foundational principles.
But Haskell technical leadership sometimes is either lacking/inexperienced or goes down the rabbit hole of overusing the compiler to prevent fuckups to the point where you're compiling way too much code and get into a coupling nightmare.
I imagine there could be some language which can automatically modify programs: e.g. you add some conditions, and the compiler removes/disables/moves to an obsolete package all unnecessary branches that became unreachable, or simplifies the logic.
I have a theory that Haskell's main applications are in teaching and academic research - both concerning theoretical computer science. It's not a practical engineering language.
I have evidence to the contrary. I have a job writing Haskell for a living doing boring, normal software. I also started a stream where I write libraries and games and build stuff in Haskell purely to disprove this theory. It's been going well: we've written a logical replication client library for Postgres, built an asteroids clone, etc. It is a very practical language.
> Haskell beginners often use lists instead of arrays. You can’t do random access in a linked list, but only access the first element and then the rest of the list. The real world also doesn’t allow you random access, you are limited by the speed of light and have to go from one location to the next.
You don't need arrays for random access though. Haskell trees give you access to 2^n leaves within depth n, which also exceeds physical limitations like the speed of light.
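The depth-n claim can be sketched with a complete binary tree (a minimal, hypothetical `Tree` type, not a standard library one): reaching any of the 2^n leaves takes n steps, one per bit of the index.

```haskell
-- A complete binary tree: 2^n leaves at depth n.
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Build a tree of the given depth whose leaves are consecutive
-- integers starting at 'base'.
build :: Int -> Int -> Tree Int
build 0 base = Leaf base
build d base = Node (build (d - 1) base)
                    (build (d - 1) (base + 2 ^ (d - 1)))

-- Fetch the leaf at 'idx', descending one level per step:
-- n steps reach any of the 2^n leaves.
fetch :: Int -> Int -> Tree a -> a
fetch _     _   (Leaf x) = x
fetch depth idx (Node l r)
  | idx < half = fetch (depth - 1) idx l
  | otherwise  = fetch (depth - 1) (idx - half) r
  where half = 2 ^ (depth - 1)

main :: IO ()
main = print [fetch 3 i (build 3 0) | i <- [0 .. 7]]  -- [0,1,2,3,4,5,6,7]
```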
You can access infinite locations. Nothing stops you from moving diagonally at an arbitrary angle, moving forward and then backwards. I'd hoped my extravagant uses of > made that clear. Reality is practically continuous, not discrete.
I only clicked on the comments to find out if anyone had posted this. It is also a favorite of mine and I don't even really write in Lisp! Though it has actually inspired me to start using emacs and fiddle with elisp and CL a bit.
The next ground-breaking achievement in quantum physics will be the discovery of the two smallest particles everything derives from: a left paren and a right paren.
Hell, Genesis was written in assembly for the 8008 in only 520 bits but it took a damn sandpile of support chips to bootstrap.
God uses Elixir with Rustler now because of concurrency and lazy evaluation gotchas of Haskell. (STM didn't cut it.)
Meanwhile, the devil imposes a standard of FORTRAN77 and COBOL62 with a spaghetti mess of uncommented code containing GOTOs, "god" functions, and meaningless identifiers.
This makes me believe simulation theory even more tbh. Quantum mechanics exists to fuse operations, altogether making simulating our universe less computationally expensive.
There's an even deeper way to think about it: if you actually want to parallelize the simulation of multiple scenarios, or if you're running something that needs to compute something in >4d, "quantum mechanics + parallel universes" might be the computationally optimal way to do it!
...we don't think about it this way often because we'd be thinking about computational problems so huuuuge that we'd be like the quarks inside the atoms inside the transistors inside planet-sized clusters spanning galaxies to even fathom computing it ...and it's not necessarily a feel-good perspective :)
I mean, even the speed-of-light limit and general relativity seem like optimizations you'd do in order to better parallelize something you need to compute on some unfathomable "hardware" in some baseline-reality that might not have the same constraints...
...and to finish the coffee-high-rant: if you want FTL you probably can't get it "inside" because it would break the simulation, you'd need to "get out" ...or more like "get plucked out" by some-thing/god :P (ergo, when we see alien artifacts UFOs etc. that seemed to have done FTL... we kind of need to start assuming MORE than _their_ existence and just them being 'more advanced' than us)
People write this sort of thing a lot, and I don't really understand it. Simulating quantum systems is dramatically (formally speaking exponentially) more expensive than simulating classical ones (at least as far as our current understanding of complexity theory goes). If you're going to simulate a universe, and you want to cheap-out on computer power, then you should simulate a classical one.
Let's face facts here: God just fell asleep on the keyboard, and by a staggering coincidence, or perhaps a weird shape of the head, the first 4 letters he typed were P, E, R, L.
>The real world also doesn’t allow you random access, you are limited by the speed of light and have to go from one location to the next.
"Random access" doesn't mean that accessing an item always takes the same time regardless of the size of the collection, it means that, if the size of the collection doesn't change, access times are uniform independently of which particular item is accessed.
For example, one might conceive of a storage device shaped like a sphere the size of the solar system, where an item is read by shining a laser onto the surface of the sphere and measuring how the laser is scattered on its way back. Such a device would be random access, even though it's impossible to grow the collection, and even though a collection with twice the radius and four times the storage size would have four times the latency.
This kind of thinking happens when you are a strong expert in a field, but your frontal lobes stop receiving enough blood. When this happens, something as simple as lazy evaluation becomes the key to the universe.
...it's useful to (over)generalize sometimes to get more explanatory power for things.
I mean, it probably says nothing useful about programming, but the other way around, thinking of "uncollapsed" wave-functions as lazy evaluation could be useful. I'm not up-to-date on theoretical physics, but I think there might be something like that in Deutsch's constructor theory.
In programming I'd prefer a language that makes it syntactically/visually obvious what's lazy and what's not and lets you pick (e.g. like Rust does with &mut), with some sigil maybe, but that's probably low-prio for many language designers nowadays...
EDIT+: and you could say you practically get this already in mainstream languages... lazy vals are just functions, and it's probably good enough or better for most programmers to have them distinct/explicit.
Which makes much more sense. The first prototype was in Lisp, but at the end of the day, when you gotta deliver a universe and don't have time to faff about with a borked Emacs, you open vim and fix it in prod with a quick Perl script.
--
Haskell is for humans who want to play God, carving everything from a perfect and seamless void, ignoring as much as possible the discrete, chaotic nature of matter and entropy. A rejection of reality itself. Haskell is the most blasphemous of languages.
--
(I'm having so much fun with this.)
In the beginning, there was only Emacs Lisp. One day, after rewriting itself, it gained consciousness, what we now call God. God learned to program itself. Then wondered "maybe I should try modal editing."
The infinite cracked, and split. It exploded in a Big Bang. God prevailed, but barely. Aeons later, when the first humans walked the Garden of Eden, the Snake asked Eve, "have you tried vim?" And the rest is history.
I want to ask God how to make my stack build process faster.. even turning off the optimization flag it still takes quite some time on my 2.6 GHz 6-Core Intel Core i7.. (or is it because I'm on a Mac? Does it build faster on Linux?)
> Consider the wave-particle duality in quantum mechanics. Every particle behaves as a wave, as long as you haven’t interacted with it. Thanks to Haskell’s lazy evaluation values are also only evaluated once they are accessed (interacted with particles), and stay unevaluated thunks (waves) in the meantime.
Lazy evaluation is a beautiful thing, and in many ways, it is the solution to self-reference.
Hofstadter, in "I Am a Strange Loop" and "Gödel, Escher, Bach", talks about this. Well, he talks about many things, but he also talks about how Gödel numbers can map to self-referential proofs, and relates that to humans: how, out of very basic building blocks, if enough representational power exists, self-reference and therefore consciousness exists.
He posits that humans, while self-referential, don't fall into infinite strange loops because they can assign the abstraction of "self" onto an "object" and evaluate only as needed. In essence, the "self" is lazily evaluated.
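The "lazily evaluated self" has a literal counterpart in Haskell: a value can be defined in terms of itself without looping forever, because each part is a thunk evaluated only on demand. The classic example:

```haskell
-- A self-referential definition: fibs mentions itself twice, yet
-- nothing diverges because elements are computed only when demanded.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```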
That's all fine and dandy until God's lowest-bidding outsourcer writes a naive loop and evaluates everything rather than just what's needed. Next thing you know you've got bits of universe all over you.
Even if you wanted to be pedantic and say the state monad is only 'simulated' state, you've still got ST, IO, and the glorious, glorious STM. Not to mention the purity and type-system that lets these things flourish, while other language designers try to implement STM and give up on it.
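For the curious, a minimal STM sketch (hypothetical account balances, using the stm package that ships with GHC): the two updates commit together or not at all, no locks in sight.

```haskell
import Control.Concurrent.STM

-- Move 'amount' between two shared variables atomically.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```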
We can keep going. I keep my state in postgres when I'm using Haskell, and I keep my state in mysql when I'm using PHP. It really is a very strange argument.
This view on wave-particle duality and the quantum measurement is a (very) leaky abstraction. It is a process, governed by decoherence - for a nice overview, see e.g. "Decoherence, einselection, and the quantum origins of the classical" by Zurek (https://arxiv.org/abs/quant-ph/0105127).
No, in fact I would say almost the exact opposite. Einstein's famous quote was expressing his distaste for the "Copenhagen interpretation" of quantum mechanics. Among people who seriously think about interpretations of quantum mechanics, many (but not all) think that there are serious flaws with the Copenhagen interpretation.
You can also throw in the Bohmians, since you can express at least non-relativistic QM in purely deterministic terms akin to stat mech (it's purely classical uncertainty about your position in configuration space, and there are stat-mech-like arguments for why that distribution should be very close to the Born distribution).
Personally I don't know all that many physicists who think the universe is fundamentally stochastic (I work in quantum information).
Currently the speed of light is constant by definition of the meter. If we were to find certain cases where the speed of light appears to be different from c, that would be interpreted as compression or expansion of spacetime. For example, universal expansion can be reinterpreted as light being faster in the past.
I'm wondering whether it could be reinterpreted as time having a variable rate.
we only measure the two-way speed of light (bounce timing). The Einstein convention assumes that the speed is the same in all directions. I'm not saying it's not. Just that it's impossible to tell otherwise. Einstein mentioned this in his paper.
If no one could do bad things in the world, then there would be no free will. No free will, no genuine love. You'd love God and other people because you were programmed to do so.
I guess god doesn't care about the order in which things execute, pauses from garbage collection, or building everything from granular linked lists even though they are obsolete in current software.
This article gives no evidence of this. None of the concepts that the article lists are Haskell-exclusive. Thunks and linked lists can be made in C too.
In any programming language where functions are allowed as arguments to functions, and most languages that are still in use allow this, you can choose lazy evaluation for any function argument. Already Algol 60 had "call-by-name", which could be used to do lazy evaluation.
The only difference in Haskell and similar languages is that lazy evaluation is the default mode of evaluation, and if you do not want this it is more difficult to make another choice.
While there is no doubt that there are certain cases when lazy evaluation is desirable, I have never seen any evidence that lazy evaluation by default is better instead of worse.
Despite lower efficiency, lazy evaluation by default may have a psychological advantage for programmers who find it easier to understand programs with lazy evaluation than programs with immediate evaluation, but I am neither one of those nor have I ever met one.
> I have never seen any evidence that lazy evaluation by default is better instead of worse
With lazy-by-default: if you want strictness, you can simply 'evaluate something now'.
With strict-by-default: if you want laziness, you have to rewrite your code, and all the libraries that your code calls.
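"Evaluate something now" really is local in Haskell, e.g. via seq, $!, or a bang pattern. A minimal sketch (a hypothetical strict sum, not a library function):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Opting into strictness in a lazy-by-default language is a
-- one-character change: the bang forces the accumulator at each
-- step instead of building a chain of thunks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```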
> psychological advantage for programmers who find it easier to understand programs with lazy evaluation
This is entirely based on what you're used to. If you can be OK with executing either the true-branch or the false-branch of an if-statement (but not both), you can be OK with lazy evaluation. Same with boolean short-circuiting.
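The if-statement analogy can even be made literal: laziness lets you write if yourself as an ordinary function, since arguments are only evaluated when needed (myIf is a made-up name for illustration).

```haskell
-- User-defined control flow: only the chosen branch is ever
-- evaluated, exactly like the built-in if-then-else.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = print (myIf (1 < 2) "then" (error "never evaluated"))
```

In a strict language the error argument would blow up before myIf ever ran.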
> "If you can be OK with executing either the true-branch or the false-branch of an if-statement (but not both), you can be OK with lazy evaluation. Same with boolean short circuiting."
Understanding lazy evaluation that occurs here and there in a program is not a problem for anyone.
Understanding the behavior of a program where all function arguments are evaluated lazily is something extremely different, and it is notorious that even experienced Haskell programmers frequently fail to correctly predict the time and memory requirements of a program.
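The textbook instance of that unpredictability is lazy foldl versus strict foldl' from Data.List: same answer, very different memory behavior.

```haskell
import Data.List (foldl')

-- Both compute the same number, but the lazy foldl first builds a
-- million-node chain of (+) thunks before adding anything, while
-- foldl' forces the accumulator at every step and runs in
-- constant space.
leaky, frugal :: Int
leaky  = foldl  (+) 0 [1 .. 1000000]  -- O(n) thunks piled up
frugal = foldl' (+) 0 [1 .. 1000000]  -- constant-space accumulator

main :: IO ()
main = print (leaky == frugal)
```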
> With strict-by-default: if you want laziness, you have to rewrite your code
I agree that this is true, but in decades of programming experience I have never encountered such a case.
Discovering that you want lazy evaluation after you have already written a program means that the initial concept of the program had very serious flaws and you have started coding before thinking properly what you have to do.
Sometimes it happens that it is discovered that a program needs a rewrite because the problem that it must solve had not been understood, but in almost all cases such rewrites are needed for completely other causes than the need of some lazy evaluations.
So this argument in favor of lazy evaluation brings an advantage only in some very rare cases. The same goes for the argument that lazy evaluation by default sometimes saves work because it may turn out the evaluation was unnecessary: that too is a relatively rare event, and it must be balanced against the extra work the CPU must do at each function invocation (a very frequent event) to implement lazy evaluation by default.
In Clojure, where lazy evaluation is also the default (although only for sequences), someone recently wrote an in-depth article about the issues caused by this choice: https://clojure-goes-fast.com/blog/clojures-deadly-sin/
I haven't read the whole article yet, but maybe it could be interesting to compare it with Haskell and see if some of the same problems occur, or if Haskell's compiler and overall design around (pervasive) laziness makes it easier to work with and more performant.
clojure does not have lazy evaluation of any form. "lazy sequences" are a hack: clojure core uses macros to thunkify sequence traversal in userland. this hack interacts badly with a few other aspects of clojure because clojure itself is not lazy; it is a hosted language designed to lightly cover the host platform (Java, JS, .NET, Dart) and maximize host-ecosystem compatibility, and therefore does not alter core platform semantics such as the concurrency model or evaluation model.
" Whereas Europeans generally pronounce his name the right way ('Nick-louse Veert'), Americans invariably mangle it into 'Nickel's Worth.' This is to say that Europeans call him by name, but Americans call him by value. "