
What Color Is Your Function? - jashkenas
http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function
======
tel
We can massively generalize this by calling "blue" "pure" and "red" "impure".
The end result is essentially Haskell (but you can take it _much_ further,
too!).

---

There's something horrifyingly restrictive about having merely (blue -> pure,
red -> impure), though. All "blue" functions behave roughly the same (e.g.,
very, very nicely and compositionally) but "red" functions come in many
classes: async, stateful, exception-throwing, non-deterministic, CPS'd, IO-
effectful, combinations of the prior.

What we want is a nice way of representing different kinds of red functions
and how they all interact.

What we'd also like is a nice way of composing different kinds of red
functions so that the bundled piece has a sensible kind of redness too and we
can _keep_ composing.

And this is exactly monads and monad transformers.

There are other ways to achieve this end as well, all under the general name
"Effect Typing". Very cool stuff.

But what I'd like to emphasize is that Java/C#/Go have _not_ solved this
larger problem. They each introduce a fixed number of "rednesses" and very
specific ways that different kinds of red can be connected together. Monads
are a generalizable solution.

The situation is exactly the same as with HOFs themselves. We've had
subroutines and functions for a long time, but first-class and higher-order
functions are a marked increase in power since you can now refer to these
"function things" directly.

Monads take the notion of "CPS Transform" and allow you to refer to it
directly, to mark sections of code which use it and compose them
intelligently. They allow you to invent your own kinds of redness on the fly
and ensure that your custom colorations compose just as nicely as the built-in
ones.
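As a rough sketch of "inventing your own kind of redness" outside Haskell, here is a hand-rolled failure effect in TypeScript; all names (`Fallible`, `half`, `quarter`) are invented for illustration:

```typescript
// A home-made "redness": computations that may fail, with a bind that
// short-circuits on failure (analogous to the Maybe monad).
type Fallible<A> = { ok: true; value: A } | { ok: false };

const pure = <A>(value: A): Fallible<A> => ({ ok: true, value });
const fail = <A>(): Fallible<A> => ({ ok: false });

// bind (>>=): sequence two fallible steps, propagating failure.
function bind<A, B>(m: Fallible<A>, f: (a: A) => Fallible<B>): Fallible<B> {
  return m.ok ? f(m.value) : { ok: false };
}

// "Red" functions in this custom color compose with plain bind
// and stay red; the color never silently disappears.
const half = (n: number): Fallible<number> =>
  n % 2 === 0 ? pure(n / 2) : fail();

const quarter = (n: number): Fallible<number> => bind(half(n), half);
```

The composed `quarter` short-circuits the moment any step fails, which is exactly the compositional behavior built-in rednesses get for free.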

If this article is even slightly interesting then you owe it to yourself to
learn much more about effect typing. It'll change your world.

(That and sum types, because _nobody_ is served by a bunch of integers named
Token.LEFT_BRACKET.)
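For what it's worth, the sum-type aside can be made concrete with a discriminated union in TypeScript; the token shapes here are invented:

```typescript
// A token as a tagged union instead of a bag of magic integers.
type Token =
  | { kind: "LeftBracket" }
  | { kind: "Number"; value: number }
  | { kind: "Identifier"; name: string };

function describe(t: Token): string {
  // The compiler checks the switch is exhaustive over every variant,
  // something a pile of named integers can never give you.
  switch (t.kind) {
    case "LeftBracket": return "[";
    case "Number":      return String(t.value);
    case "Identifier":  return t.name;
  }
}
```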

~~~
Nitramp
Effect Typing and Monads are yet another colour for the lipstick you put on
the pig. You still have non-composability (i.e. the async monad spreads
through your code), you still have the complexity, you still don't have proper
tracebacks without special runtime support, and you still have the
inefficiency that Bob didn't actually mention (unwinding all those stacks all
the time).

Languages with threads (Java, C#) don't quite solve the problem since their
threads have other disadvantages, e.g. creating too many threads causes
contention, though that's much less of a problem than people make it out to be.

Go actually completely removes the problem - you don't get a nice tool to pack
the problem into boxes more easily, you don't get a different fancy colour for
your lipstick, you just don't have the problem at all - you wouldn't know it
existed were it not for those other languages.

There might be a deeper lesson here about good, pragmatic engineering vs
awesome computer language geekery.

~~~
rntz
Go's "good, pragmatic engineering" solution to the problem - lightweight
threads, channels, and the communicating sequential processes model - is built
on the back of decades of academic research, including programming languages
research.

And now for some computer language geekery:

Go solves _one_ "code color" problem (concurrency/asynchrony), by hard-coding
the solution into the language. Go programs are written in a particular monad
(the "Go" monad, if you will), and so have one color. Indeed, monads
originally arose in the study of the semantics of imperative languages; to be
imperative _is_ to be inside a particular monad. This is exactly the "async
monad spreading through your code" problem taken to its logical conclusion:
just put everything in the same monad!

This hard-coding solution has very concrete advantages. For one, you don't
need to think about separate colors and how to transition between them,
combine them, define them, etc. (IOW, monads are complex; by picking one, you
reduce cognitive load.) For another, it's easier to optimize your
implementation and tooling for that monad (lightweight threads, useful
tracebacks).

But PL academics aren't satisfied with this solution, because it only solves
_one_ color problem, and it solves it _non-composably_: you can't put the
solution in a library, you have to invent a whole new language for it.
_That's_ the holy grail (some) PL research is chasing. It's a question of concrete
benefits in the here-and-now versus abstract benefits in the hopeful-future.
But it pays to remember that today's concrete benefits are yesterday's green-
field research.

~~~
Nitramp
Indeed, Go only solves one problem by baking it into the language/runtime; it
doesn't have a generic mechanism to solve similar problems.

But my point is that it solves the problem much better than monads do - in
particular if you don't just look at the syntax/user interface level, but at
the non-functional aspects, like debuggability, performance, etc.

I remember a time when Aspect Oriented Programming was all the rage, with
point cuts etc to capture cross-cutting aspects of your program. AOP is
certainly less generic than monads, but even with rather specialized tools, it
turns out the two "killer apps" of AOP were much better served by runtime
support in the VM (logging/debugging) and explicit code (transactions).

I'm skeptical about the holy grail that you describe. I see how it's
attractive, but in my work experience, the pragmatics of solving a particular
issue often turn out to be harder (sometimes much harder) than the theoretical
aspects.

There's certainly influence from more theoretical research in all we do in CS,
but for example with Go, it's only based on Hoare's CSP model in the most
abstract sense, and its implementation of lightweight threads, its stack model
etc are definitely in the domain of clever engineering, not breakthrough
research.

~~~
jerf
"AOP is certainly less generic than monads, but even with rather specialized
tools, it turns out the two "killer apps" of AOP were much better served by
runtime support in the VM (logging/debugging) and explicit code
(transactions)."

Indeed... beware the tech promoted with the same example over and over again.
That's a huge red flag.

------
overgard
I'm really out on most of the "async" stuff, after having used it. (Mostly in
Node and Tornado)

Remember in the early 90s when Windows and Mac OS were "cooperatively"
multitasked? Which is to say, you had to explicitly yield to allow other
applications to run (or risk locking up the entire system). And then it was
replaced with pre-emptive multitasking, which allowed the scheduler to figure
out what process deserved CPU time, while allowing the programmer not to have
to think about it. You could call a blocking IO function, and the OS would
just go do something else while you waited.

All this "async" stuff seems like a return of cooperative multitasking, only
worse. Not only do I have to explicitly yield, but now it's to some event loop
that can't even properly use multiple cores, or keep a coherent stack trace.
It's a nightmare to debug. It's theoretically fast... except if one request
forgets to yield, it can clog up the entire thing. I guess you use multiple
processes for that and a dispatcher, but at that point you've basically
reinvented preemptive multitasking... badly.

Threads aren't perfect, but excluding STM and actor models they definitely
suck the least.

~~~
woah
How does one request "forget to yield"?

~~~
overgard
I'm mostly thinking Tornado/Python, where the async stuff happened via
generators (IE, the "yield" keyword). But that meant there were large chunks
of the python standard library that were basically off limits because they
blocked and couldn't be used with a generator, so if you used those functions
the main event loop would be stuck waiting.

For node, we happen to have a server that calls into a geometric modeler (for
collaborative 3d modeling). Since it's doing a lot of math, you could totally
conceive that while an expensive modeling operation is running and chewing
through CPU cycles, all the other sessions on the system are just waiting.
Admittedly that's kind of a specific use case; with threads it wouldn't even
be an issue, but with async it's a problem. I get that there are ways around
it (offload the work to a worker process asynchronously, for instance, which
is what we're doing), but it's annoying that it's a thing I have to think about
when the functionality is built into the OS.

------
jenius
So I've been writing javascript full time for a couple of years at this point,
client, server, and open source, and what I have adopted is coercing
everything into promises, which I suppose would be the author's way of saying
making everything red.

If you have something that is not async mixed in with something that's async,
you can still add it to the promise chain and it will resolve right away. If
you have a library that uses callbacks or some other thing, you can just wrap
it such that it now uses promises. And then of course you can always look for
alternate libraries that use promises from the start and skip that step as well.

I've found that using promises for everything works super well. There is no
confusion or doubt at all. Everything has the potential to branch into async
at any time with no consequences and without complicating the flow. And an
additional benefit is that rather than checking for errors after any operation
you do, you choose where to check for errors. When a promise rejects, it skips
everything else in the chain until it gets to a catch. So rather than running
4 async operations and doing an "if error do this" check after each one, you
can catch the error in one place and handle it once. Promises suppress the
error in the promise chain until you choose to handle it, which is dangerous
if you don't understand how promises work, but really useful once you do.
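A minimal sketch of that single-catch pattern (step names are made up here):

```typescript
// Hypothetical async steps; "failing" rejects partway through the chain.
const step = (n: number): Promise<number> => Promise.resolve(n + 1);
const failing = (_: number): Promise<number> =>
  Promise.reject(new Error("step 3 failed"));

// Four chained operations, one catch at the end: a rejection skips
// every remaining .then until the .catch handles it.
function run(): Promise<number | string> {
  return Promise.resolve(0)
    .then(step)      // plain values join the chain and resolve right away
    .then(step)
    .then(failing)   // rejects here...
    .then(step)      // ...so this step is skipped
    .catch(err => `handled once: ${(err as Error).message}`);
}
```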

There are really solid promise-based libraries for all common operations in
node right now. When.js for general promises, composition, and coercion,
rest.js for network requests, bookshelf and knex for database connection and
orm stuff, etc. If you are a js developer, give them a shot!

Don't get me wrong, I'm not trying to claim that this is better than any other
language-level construct by any means, but if you are working in javascript,
where you have to deal with javascript's limitations as a language, from
experience I can say that working in an all-promises environment makes things
quite pleasant.

~~~
platz
This is great for one's own projects, but if creating something for more than
one's immediate project (i.e. libraries), it forces everyone else to adopt the
same style.

Maybe those other projects are also using _other_ libraries that don't use
promises, so now there is a problem. Do you wrap the other library in promises
too, if that is even a viable option for you?

Colorness is a problem for the whole ecosystem too.

~~~
esailija
A promise is the best thing an async library method could return: it's always
trivial to convert into anything you want (callback, stream, async/await,
yield, whatever) because it's the only thing that's standardized. However,
nobody gets the callback contract
([https://gist.github.com/CrabDude/10907185](https://gist.github.com/CrabDude/10907185))
right. Even node core gets the callback contract wrong in different ways in
its different APIs.

In practice however most library callback apis resemble the callback contract
enough so that one-line promise-wrapping of the entire library is possible.
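Assuming the library follows the node-style error-first callback contract, that wrapping might look like the following sketch (not any particular library's API):

```typescript
// node-style callback: error first, result second.
type NodeCallback<T> = (err: Error | null, result?: T) => void;

// One generic wrapper turns a callback-style function into a
// promise-returning one.
function promisify<A, T>(
  fn: (arg: A, cb: NodeCallback<T>) => void
): (arg: A) => Promise<T> {
  return (arg) =>
    new Promise<T>((resolve, reject) =>
      fn(arg, (err, result) => (err ? reject(err) : resolve(result as T)))
    );
}

// A made-up callback-style API, wrapped in one line.
const double = (n: number, cb: NodeCallback<number>) => cb(null, n * 2);
const doubleP = promisify(double);
```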

------
malfist
I had no clue this was about async functions. I assumed it was about
safe/unsafe functions until I got to the part about it not being. I think
that is a much stronger issue than sync/async.

~~~
lmkg
I guess more generally, it's about _effects_. The same rant could apply to any
particular effect, because they share the property of being infectious: async,
unsafe (in C#, not Rust), throws (checked exceptions), IO, etc.

My question is: In a language that has a first-class effect system, does the
red/blue problem disappear? Does being able to generalize over effect allow
you to avoid cutting the world in half, and allow you to compose effects
easier?

~~~
tel
Absolutely, yes.

For instance, here is an abstraction of code which reads and writes

    
    
        class Monad m => MonadTeletype m where
          writeLn :: String -> m ()
          readLn  :: m String
    

Here is one which receives the current time

    
    
        class Monad m => MonadNow m where
          now :: m UTCTime
    

And here is code which transparently combines them

    
    
        echoTime :: (MonadNow m, MonadTeletype m) => m ()
        echoTime = do
          line <- readLn 
          t0   <- now
          writeLn (show t0 ++ ": " ++ line)
    

You then, when actually executing echoTime, have to create a monadic
implementation which you prove to instantiate both MonadTeletype and MonadNow.
For instance, we can always show that `echoTime` can be satisfied by the
Haskell "sin bin" type, IO:

    
    
        instance MonadTeletype IO where
          writeLn = putStrLn
          readLn  = getLine
    
        instance MonadNow IO where
          now = getCurrentTime
    

That said, it's easy to write monadic languages like MonadTeletype and
MonadNow which _aren't_ trivially satisfied by IO. This occurs when you've
imputed new meaning and language into your monad, which is really cool. IO is
the "sum of all evils", but it's not terrifically expressive.

~~~
pron
Except that's not an effect system, because you only have one real effect -
the IO type. The challenge of effect systems is to describe effects and how
they interact. For example, you'll have an effect that says "a lock is
obtained" and one that says "a lock is released", and if you call them both,
_in the right order_ , in the same function, then that function has an effect
of "mutating something under lock".

~~~
tel
Sure it is! The effect `(MonadTeletype m, MonadNow m) => m a` is just as
_real_ as IO. There's nothing special about IO except that the Haskell RTS
knows how to interpret it.

I could, for instance, build a pure interpreter of those effects into, say, a
stream transformer or compile it into a different language (though we'd have
to do some tricks to expose Haskell's name binding).

If you want your lock obtained/lock released bit then you want indexed monads.
It's easy enough to do, but the safety/complexity tradeoff in the Haskell
community has landed on the other side of that... probably for more historical
reasons than actual technical ones.

~~~
pron
> There's nothing special about IO except that the Haskell RTS knows how to
> interpret it.

But that is what makes it an effect. Sure, you can _model_ effects in a
"hosted" language this way, but that's not what we mean by effects.

~~~
tel
I disagree. In fact, the Haskell RTS is just a "model" of the IO effects as
well. In particular, there are at least two such models, the GHCi interpreter
and the GHC compiler!

As another example, it'd be completely possible to write your own IO and
interpret it in another language. You can read about this on Edward Kmett's
blog where he talks about implementing IO for Ermine [0]

[http://www.tuicool.com/articles/I3EJVb](http://www.tuicool.com/articles/I3EJVb)

Of course, there's something important that makes us want to differentiate
"real world" side effects from internal "model effects". Ultimately, from a
correctness and reasoning POV, there ought to be no difference. From a
practical point of view, some models are more interesting or important than
others. But as a compiler writer you're put right back into the same hot seat.

[0] Unfortunately, his blog appears to be down right now, so here's a weird
Chinese mirror!

------
adrusi
A lot of commenters are mentioning that this is just a specific case of effect
typing. Haskell and monads have been brought up as an example of effect
typing, but I'd like to present another example that more closely resembles
familiar static type systems.
Nim[1], at least at one point (I'm looking at the current manual and can't
find it documented), had support for tagging functions with a pragma and the
compiler would enforce that functions without the pragma can't call functions
with the pragma outside of a special block. The compiler interpreted certain
pragmas like "impure" and "exception" in a special way, outputting a warning
when certain language features were used inside functions marked with the
pragma. The language manual shows that the compiler still at least supports
these special pragmas. It's possible that it never supported custom pragmas
and I'm just misremembering.

Interestingly, the author dismisses promises as not a major improvement and
calls async/await and generators at least a half-way solution. It turns out
that they are just a simple syntactic transform that isn't powerful enough to
express everything that coroutines can. Promises, on the other hand, can.
Promises are actually a monad: `.then(...)` is the bind function (`>>=`). This
is essentially how the IO monad in Haskell works.

[1]: [https://nim-lang.org/](https://nim-lang.org/)
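The `.then`-as-bind correspondence can be sketched directly; promise auto-flattening plays the role of monadic join here, and the step names are invented:

```typescript
// return/pure :: a -> m a             maps onto  Promise.resolve
// bind (>>=)  :: m a -> (a -> m b) -> m b  maps onto  p.then(f)
const parse = (s: string): Promise<number> =>
  Promise.resolve(parseInt(s, 10));
const increment = (n: number): Promise<number> =>
  Promise.resolve(n + 1);

// Each step returns a promise, and .then flattens instead of nesting,
// so the chain never becomes Promise<Promise<number>>.
const pipeline = (s: string): Promise<number> =>
  Promise.resolve(s).then(parse).then(increment);
```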

~~~
seanmcdirmid
Also, what about Koka ?

[http://koka.codeplex.com/](http://koka.codeplex.com/) and
[http://research.microsoft.com/en-
us/projects/koka/](http://research.microsoft.com/en-us/projects/koka/)

From the latter:

> The Koka project tries to see if effect inference can be used on larger
> scale programming. The language is strict (as in ML), but seperates pure
> values from side effecting computations (as in Haskell). Through the effect
> types, there is also a strong connection to its denotational semantics,
> where the effect type of each function has a direct translation to the type
> signature of the denotational function.

~~~
tel
I just wanted to say that I've finally taken a look at Koka after hearing you
mention it on HN many times. It's very nice! I appreciate how it provides most
of the bang of effect types much more conveniently than one might expect with
the explicit monadic structuring going on in Haskell.

Have you looked at Frank [0] by any chance? It has a very similar row-effect
type (I guess these were both first explored in Eff?) but in a CBPV language
which makes it clear how pure values are separated from effectful computation.

[0] [http://homepages.inf.ed.ac.uk/slindley/papers/frankly-
draft-...](http://homepages.inf.ed.ac.uk/slindley/papers/frankly-draft-
march2014.pdf)

~~~
seanmcdirmid
I'm not versed in this area, but I listen to Daan talk about Koka a lot. You
might want to ask him directly about Frank.

------
aidos
For those that haven't noticed this article is by the chap who wrote the
absolutely wonderful
[http://gameprogrammingpatterns.com/](http://gameprogrammingpatterns.com/)

------
gfxmonk
It seems like a lot of people are interested in fixing this, and would be keen
to see a solution. I believe StratifiedJS is precisely that solution (for JS
at least), and it has existed in working form for years:
[http://stratifiedjs.org/](http://stratifiedjs.org/) (it's not just an
experiment - it's remarkably stable).

StratifiedJS completely eliminates the sync/async distinction at the syntax
level, with the compiler/runtime managing continuations automatically. A `map`
function in SJS works just like you would want, regardless of whether the
mapping function is synchronous or not.

In addition, it provides _structured_ forms of concurrency at the language
level, as well as cancellation (which most async APIs don't support).

Disclosure: I work on StratifiedJS, and just wish more people knew about it.

~~~
larard
Looks like exactly what this article is talking about. I have no idea why no
one else has commented on it...

How does stratifiedjs work under the hood? Does it switch out the stack?

~~~
erjiang
It compiles down to JavaScript that you can then run on Node or the browser.

I once wrote a big long rant about the mess that JS and Node have made trying
to cope with async code and got tons of comments proposing X or Y library that
would "fix" the issue. Not a single person mentioned StratifiedJS. I wonder if
there was some history to it that prevented it from getting momentum.[0]

[0]
[http://notes.ericjiang.com/posts/791](http://notes.ericjiang.com/posts/791)

~~~
afri
I think there are a couple of problems:

1 - SJS effectively 'solves' the concurrency problem, but it is not a problem
that is on the top of most people's mind when they write an application. To a
first approximation, the concurrency problem in JS looks "solved" to people
already (promises, generators, etc), and it is only when you get down to it
and look at it in detail you see that SJS is actually a substantially more
complete solution to the problem.

2 - Many people see it as a 'cute' solution that doesn't scale to big
applications. To counter that point we've developed a complete SJS
client/server framework - [https://conductance.io](https://conductance.io) \-
and are writing big complicated apps on it (such as [http://magic-
angle.com/](http://magic-angle.com/) ). It's still rough around the edges, but
we're pretty confident that the upcoming release (scheduled for end March)
will show just how powerful the SJS paradigm is. There is a presentation on it
here: [http://www.infoq.com/presentations/real-time-app-
stratified-...](http://www.infoq.com/presentations/real-time-app-stratified-
javascript)

Disclaimer: I work on SJS!

------
viewer5
I don't have anything of substance to add, but author, if you're reading this,
I enjoyed your writing style a lot.

~~~
munificent
Thank you! I know the reader's time is precious so I try to cram as much
entertainment and information in there as I can.

~~~
otakucode
Spidermouth the Night Clown will stay with me for years.

~~~
munificent
At some point, the phrase "Night Clown" popped into my head and it's so
deliciously evocative I've had it stuck rattling around in there since then.

------
dvirsky
Tornado made async code slightly less painful by using yield and coroutines,
but you still have to run blocking methods on thread pools using futures. They
abstracted it really nicely and I can now write clean code if I need an
occasional blocking library in my tornado code.

But after writing tons of Go over the past 2-3 years, going back to async
code, even with the tornado sugar, just feels like driving a manual car after
getting used to automatic. It's just redundant. I've seen better, I've written
way cleaner code and got better concurrency. Promises, futures, yielded
generators - they are all syntactic hacks. The only language I've used that
really addresses this properly is Go (disclaimer: I haven't written any
Erlang).

~~~
erjiang
Really, any language that has threading gives you the easy concurrency that
you want. Now, if you limit your choices to "concurrency, but not with OS
threads", then your pool is a lot smaller.

I think Clojure (core.async), Haskell (GHC), and Rust would also give you what
you're looking for.

------
echoless
No actor-model based language has this problem, so perhaps all it comes down
to is baking in the right (or even any decent) concurrency support from the
start, at the language level.

~~~
sz4kerto
Of course they have. Well, everything is nice and dandy as long as your actors
never block. As soon as you start blocking, your async model has the same
problems as threads have. And you cannot really write real-life systems
without blocking actors.

Ask Erlang programmers whether they have dealt with this kind of stuff.

~~~
echoless
The main process or any parent process can be made to not block by spawning
sub-processes that handle an entire group of child processes, so I don't see
how that is a problem.

------
pka
[1] is a nice read regarding continuations (and the Cont monad), though a bit
more advanced.

[1] [http://blog.sigfpe.com/2008/12/mother-of-all-
monads.html](http://blog.sigfpe.com/2008/12/mother-of-all-monads.html)

------
Nilzor
His solution is _threads_? Really? Has he read no history? There are problems
with threads. That's why async I/O is hot right now. Threads are a limited
resource. Threads are expensive to create and dispose of. Context switches
are expensive. Threads must be synchronized. Threads can have race conditions.

Good rant, but I didn't expect him to serve such a shallow conclusion after a
solid and insightful introduction.

~~~
drostie
Since he's a Go fan, he might prefer lightweight threads running in an event
loop rather than real threads with their context-switches. Moreover his
concern is _syntactic_, not semantic: so maybe he'd like something which
"looks thread-like" but "compiles-to-CPS" too.

Some Microsoft engineers are working on a nice solution to the thread-race-
condition problem with a somewhat different approach: pretend your threaded
environment is a DVCS. When you want to spawn a new lightweight-thread, think
of it as a git clone. It makes its own changes to its own state, then you can
eventually pull its changes into the present state -- so you get deterministic
threading if you've got a deterministic merge algorithm.

[http://research.microsoft.com/en-
us/projects/revisions/](http://research.microsoft.com/en-
us/projects/revisions/)

~~~
seanmcdirmid
If you are interested in concurrent revisions, you might also be interested in
Glitch:

[http://research.microsoft.com/en-
us/people/smcdirm/managedti...](http://research.microsoft.com/en-
us/people/smcdirm/managedtime.aspx)

In contrast to Burckhardt et al.'s work, Glitch re-executes computations to
reach a fixed point (logging side effects so they can be rolled back along the
way). This paper compares the various approaches in solving this problem
(including concurrent revisions, but also a few others):

[http://research.microsoft.com/pubs/211297/onward14.pdf](http://research.microsoft.com/pubs/211297/onward14.pdf)

~~~
drostie
This has been a fascinating read, but I worry that there's too much pressure
in the paper to view all of these different approaches as somehow "solving the
same problem". I'm not sure they do.

There is this "whoops, my state updated out from under me" concurrency
problem. To steal a metaphor from physics, the problem is that we expect the
spacetime to be "locally flat" but to curve at long scales -- similarly we
expect the state to somehow be locally private but globally we discover it's
shared. The multiple timelines (of operations in various threads) contain
updates to the shared state which are noncommutative; when we synchronize we
try to throw up these big walls, global across all timelines, across which
operations cannot pass.

Glitch's approach is to break these threads into commutative-and-
noncommutative parts (fixed-points and events). So the focus is not actually
on the fixed-points; they can be parallelized without fear because the
operations on the state commute. The focus is instead on the events. And there
it's not clear that the events solve the concurrency problem at all. (Please
don't take that as a criticism; I don't think you were trying to solve this
problem. Your approach reminds me a little of Sussman's propagators, and
definitely it has some nice implications for live-coding.)

Concurrent revisions are a more direct response. The basic insight is that no
matter what, "there is one authoritative timeline, let's call it the consumer-
timeline, which is how all of these noncommutative events from multiple
timelines will _actually_ be ordered." Given that insight, and the need for
the state to look locally-private, these explicit joins are an obvious
solution. The joins look a little jarring because in the spacetime analogy
we're talking about a 'piecewise-smooth' function, so there's a sort of
derivative-discontinuity happening here.

What would be really interesting is if the shared commutative operations of
Glitch could change "piecewise smooth" to "smooth", but I don't think these
ideas have that power.

------
anonymoushn
I don't really consider this solved in languages or runtimes that lack green
threads. If you want to make 300,000 threads in Lua or Go, go right ahead, but
if you port that application to Java you're going to have a bad time.

An orthogonal useful thing that is sometimes not solved in languages with
green threads is the ability to copy continuations. If you have call/cc or
coroutine.clone, you can e.g. use rollback netcode in your fighting game and
store state in execution states, but if you cannot copy execution states, you
will have to choose one or the other.

~~~
markc
>If you want to make 300,000 threads in Lua or Go, go right ahead, but if you
port that application to Java you're going to have a bad time.

In Java itself perhaps, but with Java you can use lightweight threads via
Quasar. [http://blog.paralleluniverse.co/2013/05/02/quasar-
pulsar/](http://blog.paralleluniverse.co/2013/05/02/quasar-pulsar/)

"a single machine can handle millions of them"

------
dwenzek
A related way of composing code is railway oriented programming, already
discussed on HN (1).

Underneath there are monads, sure, but the author has deliberately chosen a
more mechanical metaphor which may help those for whom monads sound too
abstract.

The post focuses more on how to handle errors than on asynchrony, but it
shows well the key steps to lift a blue function into red ones and to compose
these constructions.

(1)
[https://news.ycombinator.com/item?id=7887134](https://news.ycombinator.com/item?id=7887134)

------
z3t4
I don't get it ... But hey, it took about six months for me to figure out how
to write asynchronous JavaScript ... The key is to not use anonymous
functions; that flattens out the "Christmas tree" of callbacks. And it makes
it possible to read what the code does, or at least what the programmer wants
the code to do. It's much better than "then", then what, but, then... why
complicate things when it's actually possible to be verbose.
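The flattening being described might look like this sketch, with invented step names:

```typescript
type Cb<T> = (value: T) => void;

// Hypothetical async steps, for illustration only.
function fetchUser(id: number, cb: Cb<string>): void { cb(`user${id}`); }
function loadPrefs(user: string, cb: Cb<string>): void { cb(`${user}:prefs`); }

// Named continuations instead of nested anonymous callbacks:
// each step reads top-to-bottom rather than marching rightward.
function start(id: number, finished: Cb<string>): void {
  fetchUser(id, (user) => gotUser(user, finished));
}
function gotUser(user: string, finished: Cb<string>): void {
  loadPrefs(user, finished);
}
```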

~~~
Sharlin
It helps syntactically but does nothing about the main problem: composability.
Each of your functions has to know which function to call next, instead of
just having one function that invokes n other functions in order. That's what
next() is meant for.

~~~
z3t4
For composability in JS I use small utility modules like "inOrder":

    
    
      function beforeWifeComesHome() {
        // each step runs only after the previous one completes
        function takeTheTrashOut() {
          inOrder([openDoor, goOut, emptyTrash, goIn, closeDoor]);
        }
        takeTheTrashOut();
      }
    

But it would be easier to let the compiler take care of the async calls and
insert callbacks automatically so that you can think synchronously while the
program works asynchronously. And if you want to run stuff in parallel you
should use child processes.
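For context, one plausible shape for such an `inOrder` helper, assuming each task takes a completion callback (a sketch, not the actual module):

```typescript
// A task finishes by calling its completion callback.
type Task = (done: () => void) => void;

// Run callback-taking tasks strictly one after another,
// then call the optional final callback.
function inOrder(tasks: Task[], done?: () => void): void {
  const next = (i: number): void => {
    if (i < tasks.length) {
      tasks[i](() => next(i + 1)); // start task i; recurse when it finishes
    } else {
      done?.();
    }
  };
  next(0);
}
```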

------
kmike84
[https://glyph.twistedmatrix.com/2014/02/unyielding.html](https://glyph.twistedmatrix.com/2014/02/unyielding.html)
is a good read - there are reasons why explicit sync-async "coloring" (i.e.
await/yield) is better than the green threads/coroutines which the author admires.

~~~
munificent
That's a really long post, and I dimly recall reading it a while back, but
after you scrape off it saying the same thing over and over again, I think it
reduces down to an uncompelling argument.

It's basically, "I want context switches syntactically explicit in my code. If
they aren't, reasoning about it is exponentially harder."

And I think that's pretty clearly a strawman. Everything the author claims
about threaded code is true of _any_ re-entrant code, multi-threaded or not.
If your function inadvertently calls a function which calls the original
function recursively, you have the exact same problem.

But, guess what, that just doesn't happen that often. Most code isn't re-
entrant. Most state isn't shared.

For code that is concurrent and does interact in interesting ways, you _are_
going to have to reason about it carefully. Smearing "yield from" all over
your code doesn't solve that.

In practice, you'll end up with so many "yield from" lines in your code that
you're right back to "well, I guess I could context switch just about
anywhere", which is the problem you were trying to avoid in the first place.

~~~
glyph
I don't think you read the article very carefully. Specifically, what I am
claiming about (shared-state) threaded code which is _not_ true of "any re-
entrant code" is the fact that you cannot tell whether threaded code is re-
entrant or not without a comprehensive, combinatorial whole-program analysis.
It's not feasible to know what you might be re-entrant with, because you might
be re-entrant with anything. With preemptive threads you really just can't do
it at all, but with green threads you have to follow every call stack all the
way to the bottom, because nothing short of the bottom of the stack tells you
whether you're going to context switch or not. When you get back a Deferred
(or a Promise or a Future or a thunk or whatever), the stack inspection can
be _shallow_: O(1) in the depth of your stack instead of
O(stack_depth * function_length).

------
fixermark
Sadly, you can build a Java API to introduce callback hell if you want to.

[https://developer.android.com/reference/android/hardware/cam...](https://developer.android.com/reference/android/hardware/camera2/package-summary.html)

It's nice you don't have to, though.

------
Roboprog
Funny. I thought he was talking about Java 8 for a while. We (eventually, when
the rest of the stack catches up) get functors / lambdas, but:

* red = instance methods.

* blue = static methods, which cannot call an instance method.

~~~
jongraehl
That's not true. You can call an instance method from anywhere given an
object. There's just no implicit 'this' in a static method.

~~~
Roboprog
Er, what's that? I seem to have lost a receiver for your message...

Once you go functional, instance methods start to look like special kludgery
for currying the first argument to a function. Go and Nim (for instance) hint
at this more than a little with their OOP syntax.

Bundling two kinds of methods within a "class" starts to feel weird when you
start using individual functions.
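The "method as curried first argument" view is easy to see in JavaScript
itself (a hypothetical illustration of the point above, not anyone's actual
code):

    
    
      // An instance method behaves like a plain function with its receiver
      // ("this") curried in as a hidden first argument.
      var counter = { n: 41, next: function () { return this.n + 1; } };
    
      var unbound = counter.next;             // detached: "this" is lost
      var bound = counter.next.bind(counter); // receiver "curried" back in
    
      bound(); // 42
    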

------
StrykerKKD
What about isolates in Dart? I mean, isolates are isolated processes (which
can also be threads), and they can communicate with each other.

~~~
munificent
Unfortunately:

1\. Isolates can only communicate with each other using asynchronous method
calls. So even though you can move some work to another isolate, you can't
_block_ waiting for it to complete, so your function still has to be red.

2\. Isolates are very limited in what you can send between them, which makes
them not very useful in practice for much of anything.

~~~
StrykerKKD
But the waiting is non-blocking, right? That means we can dedicate one
isolate to waiting for all the other isolates to complete.

For me the bad part of "red" functions is that testing them is harder -- or I
just suck at it.

------
totony
You can always make a function that syncs all async operations: sleep until a
global variable is changed by the callback.

A pain, but still not that bad.

~~~
jakobegger
Yes, I've used a similar technique myself; however, I consider it only a last
resort. But doesn't that require your language to have threads? Is it
possible to do this in e.g. JavaScript?

~~~
totony
There is no sleep call that I know of in JavaScript, but you are right, it
depends on the language and its implementation of sleep.

Edit: although you could emulate a sleep call by setting a (periodic) timeout
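A sketch of that timeout-based emulation (hypothetical names; note that,
unlike a real sleep, this does not block the caller -- the function doing the
"waiting" is still asynchronous, i.e. still red in the article's terms):

    
    
      // Poll a flag with a periodic timeout and run the continuation once
      // the flag flips. The first check happens immediately.
      function waitFor(flagIsSet, onReady) {
        (function poll() {
          if (flagIsSet()) {
            onReady();            // flag flipped: run the continuation
          } else {
            setTimeout(poll, 10); // check again in 10 ms
          }
        })();
      }
    

Usage: have the callback set a variable (var done = false; someAsyncOp sets
done = true), then waitFor(function () { return done; }, continueWork).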

------
mmphosis

      POLLIT
          CMP $C050
          BNE POLLIT
    

It's not a function, it's a procedure. It doesn't need to return anything; it
produces an effect. Interrupts will break it.

    
    
       await until an as yet indeterminable time in the future

------
Paradigma11
Relevant paper:
[http://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf](http://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf)
Especially p34+ regarding concurrency paradigms.

------
dmitrig01
Interesting analogy. Pretty much the exact same thing could be said about
pure/impure functions in a language like Haskell -- which is where I thought
this was going (until I realized it was about JS).

------
karlheinz
js-csp enables go-like concurrency in javascript:
[https://github.com/ubolonton/js-csp](https://github.com/ubolonton/js-csp)

------
aidenn0
Or you could be like Scheme and make all functions red.

------
parfamz
What about FRP?

~~~
Gurkenmaster
FRP has nothing to do with this. Concurrency is only an implementation detail
of FRP.

