Hacker News

You bring up a good point, and I've had similar conversations with my friend, an avid Unity developer (Unity has a mostly imperative programming (IP) runtime with a declarative GUI).

Since functional programming (FP) and declarative programming (DP) have many similarities, the same issue of hardness (for lack of a better term) exists in languages like Elixir/Clojure/Julia/Haskell/Scala/F#, etc.

This all comes down to one of the most poorly understood concepts in FP, the monad:

https://en.wikipedia.org/wiki/Monad_(functional_programming)...

Honestly, I'm still not sure I understand how monads work internally, or whether there are better terms for how I think of them. But monads (and related devices) are a way to deal with imperative logic and side effects when pure FP can't.

Imagine you're writing Tetris in a spreadsheet. It's easy to visualize how you might enter a piece's (x,y) coordinate in 2 cells and then pipe that through a series of cells containing logic that examines the blocks and outputs the next (x,y) coordinate for the following main loop.
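For example, the pure "next coordinate" cell could be a Haskell function like this (just a sketch; all names and the 20-row board are my assumptions, not anyone's actual Tetris):

```haskell
-- One "spreadsheet cell" of Tetris logic: a pure function from the
-- current (x, y) plus the settled blocks to the next (x, y).
import qualified Data.Set as Set

type Coord = (Int, Int)

-- One tick of gravity: move down one row unless the cell below is
-- occupied or we're already on the floor (row 19 of a 20-row board).
stepPiece :: Set.Set Coord -> Coord -> Coord
stepPiece occupied (x, y)
  | y >= 19                          = (x, y)      -- resting on the floor
  | (x, y + 1) `Set.member` occupied = (x, y)      -- blocked by a settled block
  | otherwise                        = (x, y + 1)  -- free fall
```

Because it's pure, `stepPiece Set.empty (4, 0)` always evaluates to `(4, 1)`, exactly like a spreadsheet cell recomputing from its inputs.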

But the actual main loop (that grabs input events) is very hard to visualize as a spreadsheet. This "glue" with the imperative stuff is where the monad comes in. Another way to think of it is, how to represent exceptions in a spreadsheet or a pure functional implementation of a promise chain (you can't without monads).
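For the exceptions case specifically, the usual pure answer is the Maybe (or Either) monad: each step returns Nothing on failure, and the monad threads the short-circuiting for you. A minimal Haskell sketch (function names are mine):

```haskell
-- Exceptions as values: division and square root that can fail,
-- chained with the Maybe monad so any failure aborts the whole chain.
safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing            -- "throw" on division by zero
safeDiv x y = Just (x / y)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing          -- "throw" on negative input
  | otherwise = Just (sqrt x)

-- sqrt(100 / n), with both failure modes handled by the monad.
pipeline :: Double -> Maybe Double
pipeline n = safeDiv 100 n >>= safeSqrt
```

`pipeline 4` is `Just 5.0`, while `pipeline 0` and `pipeline (-4)` are both `Nothing`; no exception machinery outside the type is needed.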

To write Tetris in HTML/CSS with no Javascript, we'd probably have to write a ton of CSS rules to show/hide/move DOM elements based on the state. That's an intractable problem for humans, so we'd probably compile the CSS from another language, which defeats the whole purpose. Maybe someone has tried it, I can't tell:

https://news.ycombinator.com/item?id=6709466

https://codepen.io/alexdevp/pen/pgdKf

But if we had something like a CSS monad, then we could write rules that are based on dynamic state from other rules. Writing this all out now, something like this might be useful, if anyone knows of such a thing.

Anyway, from what I understand, languages like ClojureScript get around this by suspending execution via a monad and then re-entering with new input. Here is an almost purely functional Tetris clone written in ClojureScript:

https://github.com/yogthos/clj-tetris

Piece definitions, with code to offset/rotate the matrices and check for collisions by counting the number of cells to see if any have overlapped:

https://github.com/yogthos/clj-tetris/blob/master/src/tetris...

https://github.com/yogthos/clj-tetris/blob/master/src/tetris...

What I assume to be thread-sleep and JS setTimeout monads:

https://github.com/yogthos/clj-tetris/blob/master/src/tetris...

https://github.com/yogthos/clj-tetris/blob/master/src/tetris...

https://github.com/yogthos/clj-tetris/blob/master/src/tetris...

I know that code looks a little strange. But conceptually it's orders of magnitude simpler than the equivalent imperative code. Note that simple is different from easy:

https://medium.com/@kiksdium/simple-vs-easy-bfd897ab293c

TL;DR: you bring up a really good point. FP and DP probably shouldn't be considered proven techniques until they can be used in the same problem domains as IP.




>This all comes down to one of the most poorly understood concepts in FP, the monad:

Of the languages you mentioned, only Haskell, and Scala to some extent, have any deep concept of monads (and even in Haskell they’re not part of the language, just a common idiom that’s ”easily” implemented)
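To show the ”easily” implemented part: the whole interface is a typeclass with two operations, and an instance for an Option-like type fits in a few lines (a hand-rolled sketch, with primed names to avoid clashing with the Prelude):

```haskell
-- The Monad interface, written out by hand to show there's no
-- language magic: just a typeclass with return and bind.
class Monad' m where
  return' :: a -> m a
  bind'   :: m a -> (a -> m b) -> m b

-- An Option type and its monad instance; bind short-circuits on None'.
data Option a = None' | Some a deriving (Eq, Show)

instance Monad' Option where
  return' = Some
  bind' None'    _ = None'   -- failure propagates
  bind' (Some x) f = f x     -- success feeds the next step
```

With that, `bind' (Some 3) (\x -> Some (x + 1))` is `Some 4`, and any `None'` in the chain makes the whole thing `None'`.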

Also, I’m using FP for the exact problems I used to use IP for, not sure what needs to be proven..?


Oh sorry ya I went off the deep end a bit.

What I was trying to get at is that (IMHO) the FP and DP paradigms somehow have a huge blind spot when it comes to tackling the problems that IP solves handily. These are the problems that mostly involve integrating with the outside world: things like mutating state, exceptional situations, side effects, etc.

Sure, FP can largely avoid them via strong typing, category-theoretic constructs, etc., but I've rarely seen good explanations of how to REALLY do something that programmers like to do, like write video games and social media feeds (things which the parent poster alluded to).

It's easy to visualize FP as a spreadsheet where the inputs generate a certain output. It's even straightforward to visualize how FP and IP are equivalent:

https://cs.stackexchange.com/questions/44305/why-are-functio...

https://cstheory.stackexchange.com/questions/625/relationshi...

https://en.wikipedia.org/wiki/Church–Turing_thesis

But I'm not convinced that pure functional programming can handle even the simplest thing, like new input, without suspending execution or using a device like a monad. I also wonder if using those things makes functional programming impure, which would make pure functional programming a fantasy..

This is maybe just a misunderstanding on my part since I haven't actually written much FP since learning Scheme back in the 90s. I'm all ears if you or anyone else knows of any articles exploring these issues.


Scheme was my first functional language, actually. The general computational model is very similar to Haskell’s until you get to homoiconicity/metaprogramming.

Purity (and laziness) are primarily there to help you reason about your code with referential transparency (purity ensures no side effects, and laziness makes dealing with (certain parts of) bottom (think undefined) easier).

Of course any program needs to communicate with the outside world, and monads are a very general way of dealing with this. It’s worth noting that the first releases of Haskell did not handle IO with monads.

At the same time, Haskell contains functions to subvert laziness (the function seq, which forces strict evaluation) and purity (the Debug.trace function, which prints values in the middle of evaluating a pure function).
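Both escape hatches in a few lines (trace and seq are real functions from base; the wrapper names are mine):

```haskell
import Debug.Trace (trace)

-- A "pure" function that nevertheless prints when evaluated:
-- trace is the purity escape hatch.
noisyDouble :: Int -> Int
noisyDouble x = trace ("doubling " ++ show x) (2 * x)

-- seq forces its first argument to weak head normal form before
-- the result is returned: the laziness escape hatch.
forcedSum :: Int -> Int -> Int
forcedSum x y = x `seq` (x + y)
```

`noisyDouble 21` still returns 42, but evaluating it prints to stderr, which is exactly the kind of observable effect purity is supposed to rule out.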

There are also some fairly well-founded arguments that the State monad violates the monad laws, which causes problems for referential transparency.

And you can still write a divergent function, in which case all your referential transparency goes out the window (kinda)..

Does this make Haskell less pure? Probably, but since these are generally considered escape hatches or debug functions, the main paradigm of the language allows you (imho) to reason about your code in a way that’s (again imho) more amenable to separating your problem logically into discrete parts.

At the same time, these escape hatches allow us to use Haskell in the situations you describe. And you can often wrap them in a monad, in a way that allows you to reason about the boundary between them and the more purely logical parts of your code.

(There is obviously a tradeoff to using a type system like Haskell’s. I’m a big fan of Elm, precisely because it makes typed functional programming easier to explain, and doesn’t distract people with all the fancy type theory stuff you can find in Haskell (which I also love of course, and don’t get me started on Agda))


OK, that all makes sense (I think). After sleeping on it, I had an epiphany about monads:

A monad is analogous to the imaginary number i.

So in FP, a monad gets passed along as a placeholder until it can be computed. Here is a pretty good description of how they work:

https://medium.com/@newhouseb/motivating-monads-f93c9ddf7014

I agree with his analysis except that in his "MAGIC TRICK" example, I would have used something that is UNDEFINED in the interim but can be computed to a real value at the end (just like how we use i as a placeholder device in math). His example:

f(x) = x^2 + x + 1

g(x) = 2x

h(x) = sqrt(x)

m(x, f) = UNDEFINED, if x is UNDEFINED; f(x), if it isn’t

m(m(h(-1), g), f) ~= (2(sqrt(-1)))^2 + 2(sqrt(-1)) + 1 ~= -3 + 2i ~= UNDEFINED

In this case, the 2(sqrt(-1)) term is still undefined (it only simplifies to the imaginary 2i), so the expression can't be reduced to a real value and the answer is UNDEFINED.

But I probably would have used an example like:

f(x) = x^2 + 1 # <- note that his was x^2 + x + 1

g(x) = 2x

h(x) = sqrt(x)

m(x, f) = UNDEFINED, if x is UNDEFINED; f(x), if it isn’t

m(m(h(-1), g), f) ~= (2(sqrt(-1)))^2 + 1 ~= (2(i))^2 + 1 = -3

See, using i or a monad as a placeholder allows us to compute a real answer.
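For what it's worth, if we read UNDEFINED as Haskell's Nothing, the m(x, f) combinator above is exactly the Maybe monad's bind (>>=). A sketch of the first example (note that, unlike i, a Nothing never cancels back out to a real value later; that recovery in the second example comes from complex arithmetic, not from the monad):

```haskell
-- The m(x, f) combinator, with UNDEFINED read as Nothing:
-- Nothing if x is UNDEFINED, f(x) if it isn't.
m :: Maybe a -> (a -> Maybe b) -> Maybe b
m Nothing  _ = Nothing
m (Just x) f = f x            -- this clause-pair is exactly Maybe's (>>=)

h :: Double -> Maybe Double
h x | x < 0     = Nothing     -- sqrt(-1) has no real value
    | otherwise = Just (sqrt x)

g, f :: Double -> Maybe Double
g x = Just (2 * x)
f x = Just (x ^ 2 + x + 1)

-- m(m(h(-1), g), f): the Nothing from h(-1) propagates to the end.
example :: Maybe Double
example = m (m (h (-1)) g) f
```

`example` is `Nothing`, while starting from a defined input, `m (m (h 4) g) f`, gives `Just 21.0`.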

In FP, all computable parts of the stack could be collapsed (optimized). Then if anything is left uncomputed, the runtime could block on any involved monad. Then when the monad is set to a real value from the input stream, it could be collapsed completely and return the final answer.

I haven't succeeded in learning Haskell yet, but when I tried, I got stuck on its mutability implementation, which seems to use monads:

https://en.wikibooks.org/wiki/Haskell/Mutable_objects
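For reference, the mutability covered there is built on IORef from Data.IORef: the state lives behind IO actions, so every read and write is sequenced by the IO monad rather than happening freely. A minimal sketch (the counter itself is my example):

```haskell
import Data.IORef

-- A mutable counter: creating, mutating and reading the cell are all
-- IO actions, so the IO monad sequences every access to the state.
countTo :: Int -> IO Int
countTo n = do
  ref <- newIORef 0                         -- allocate the mutable cell
  mapM_ (\_ -> modifyIORef ref (+ 1)) [1 .. n]
  readIORef ref                             -- read the final value
```

Pure code can never touch `ref` directly; it only sees `countTo n :: IO Int`, which keeps the spreadsheet-style reasoning intact outside the IO boundary.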

Honestly today, I would prefer to use pure FP. I think that allowing mutability anywhere in the language is a huge cost, because we can no longer visualize the whole thing as a spreadsheet.

I'd really like to know if another FP language solved the mutating state problem by doing something more like ClojureScript (CS), where it stops execution until the monad gets set from the runtime. Unfortunately, it looks like CS has mutable variables via reset!:

https://stackoverflow.com/a/36847124/539149

So it's not pure either :-/

Anyway, I'm still learning. Hope this helps someone.


I’m not sure that discussion of monads is super helpful. It’s not necessarily wrong, but it doesn’t get to the meat of the matter.

Monads don’t represent uncomputable values like sqrt(-1); it’s more like they represent a context where computations take place.

So you can separate all computations involving IO into a parameterized type ’m a’, where m is a monad and a is your result type.


(In the IO case that would be ’IO a’)
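To make that concrete: getLine has type IO String, a computation in the IO context whose result type a is String. A runnable sketch of the same shape (the function and names are mine):

```haskell
import Data.Char (toUpper)

-- An 'IO String': a description of a computation that performs an
-- effect and, when run, yields a String as its result.
shout :: String -> IO String
shout s = do
  let loud = map toUpper s ++ "!"  -- the pure part
  putStrLn loud                    -- the effect: output
  return loud                      -- the String is the 'a' in 'IO a'
```

Pure code never runs this directly; it only composes such values, and the runtime executes the whole chain starting from main.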



